Test Report: Docker_Linux_crio_arm64 21808

530458d3ecd77092debe1aca48846101c1a78c03:2025-11-02:42171

Failed tests (37/327)

Order  Failed test  Duration (s)
29 TestAddons/serial/Volcano 0.72
35 TestAddons/parallel/Registry 15.43
36 TestAddons/parallel/RegistryCreds 0.51
37 TestAddons/parallel/Ingress 145.82
38 TestAddons/parallel/InspektorGadget 5.33
39 TestAddons/parallel/MetricsServer 5.37
41 TestAddons/parallel/CSI 54.95
42 TestAddons/parallel/Headlamp 3.17
43 TestAddons/parallel/CloudSpanner 6.28
44 TestAddons/parallel/LocalPath 8.4
45 TestAddons/parallel/NvidiaDevicePlugin 6.29
46 TestAddons/parallel/Yakd 5.29
97 TestFunctional/parallel/ServiceCmdConnect 603.8
125 TestFunctional/parallel/ServiceCmd/DeployApp 600.99
134 TestFunctional/parallel/ServiceCmd/HTTPS 0.53
135 TestFunctional/parallel/ServiceCmd/Format 0.47
136 TestFunctional/parallel/ServiceCmd/URL 0.53
145 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 2.33
146 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.55
147 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.19
148 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.33
150 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.24
151 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.39
191 TestJSONOutput/pause/Command 2.38
197 TestJSONOutput/unpause/Command 1.69
250 TestScheduledStopUnix 40.99
292 TestPause/serial/Pause 7.87
296 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.52
303 TestStartStop/group/old-k8s-version/serial/Pause 6.8
307 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.4
316 TestStartStop/group/no-preload/serial/Pause 6.62
320 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 3.16
325 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 2.67
332 TestStartStop/group/embed-certs/serial/Pause 8.84
338 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.46
343 TestStartStop/group/default-k8s-diff-port/serial/Pause 6.89
348 TestStartStop/group/newest-cni/serial/Pause 7.67
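
Failure pattern: most of the addon, EnableAddonWhileActive, and pause/unpause failures above share one signature. The command exits with status 11 (MK_ADDON_DISABLE_PAUSED) because minikube first checks whether the cluster is paused by running `sudo runc list -f json` on the node, and on this CRI-O node that probe fails with `open /run/runc: no such file or directory`. A minimal Go sketch of the failing step, runnable on the node itself (assumptions: runc on PATH and passwordless sudo; this illustrates the probe, it is not minikube's actual source):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// The step that fails in every "addons disable" log below:
		// list runc-managed containers as JSON.
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err != nil {
			// On this node the runc state directory /run/runc does not
			// exist, so runc exits 1 with:
			//   open /run/runc: no such file or directory
			fmt.Printf("runc list failed: %v\n%s", err, out)
			return
		}
		fmt.Printf("runc containers:\n%s", out)
	}
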
TestAddons/serial/Volcano (0.72s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-230560 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-230560 addons disable volcano --alsologtostderr -v=1: exit status 11 (716.842824ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1102 13:15:58.424235  301990 out.go:360] Setting OutFile to fd 1 ...
	I1102 13:15:58.425067  301990 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:15:58.425109  301990 out.go:374] Setting ErrFile to fd 2...
	I1102 13:15:58.425136  301990 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:15:58.425419  301990 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-293314/.minikube/bin
	I1102 13:15:58.425750  301990 mustload.go:66] Loading cluster: addons-230560
	I1102 13:15:58.426165  301990 config.go:182] Loaded profile config "addons-230560": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:15:58.426210  301990 addons.go:607] checking whether the cluster is paused
	I1102 13:15:58.426342  301990 config.go:182] Loaded profile config "addons-230560": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:15:58.426378  301990 host.go:66] Checking if "addons-230560" exists ...
	I1102 13:15:58.426909  301990 cli_runner.go:164] Run: docker container inspect addons-230560 --format={{.State.Status}}
	I1102 13:15:58.449480  301990 ssh_runner.go:195] Run: systemctl --version
	I1102 13:15:58.449529  301990 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-230560
	I1102 13:15:58.467932  301990 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/addons-230560/id_rsa Username:docker}
	I1102 13:15:58.577684  301990 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1102 13:15:58.577797  301990 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1102 13:15:58.610388  301990 cri.go:89] found id: "59d3d49e880a7e09fb4d9be850df44733d6d5116185f5d62db1b5c126b574e0b"
	I1102 13:15:58.610464  301990 cri.go:89] found id: "495997c964878856129fa01c98380dbe27e0d3e3399d552d965363043c0ed285"
	I1102 13:15:58.610483  301990 cri.go:89] found id: "92ee410347c1fecfc99fa6c734d7ea23c7a537dc02964ee119f8cc717fcef3e2"
	I1102 13:15:58.610516  301990 cri.go:89] found id: "14fe9bc0e4e3fca54005005e2faed708854fa4e45837404cf9bd640d6b5e2de6"
	I1102 13:15:58.610543  301990 cri.go:89] found id: "849382b87b03aa7df7b3bd0d7677466f19027eeb542e35e25286f1e8249c940e"
	I1102 13:15:58.610572  301990 cri.go:89] found id: "de7641522a90557a5bf20f6e7fc608045762d4951eef39028dd344fa1ec0e246"
	I1102 13:15:58.610595  301990 cri.go:89] found id: "ce226e80e176fd107a1fd4e99d0423900d376d659984557fa242d51fe29175f6"
	I1102 13:15:58.610650  301990 cri.go:89] found id: "43495555e2c69ab9b146d21dd528f268dcc6b5277bef46a2cdd8aac98ed01981"
	I1102 13:15:58.610675  301990 cri.go:89] found id: "23d26c5efd413a919fa01dc11c652b236e497eb2943a1a1cfaf21109a227fdf8"
	I1102 13:15:58.610699  301990 cri.go:89] found id: "f4000d22ba555b95620554ea649b6b0e65ff2c8de55597628a09a4936558b721"
	I1102 13:15:58.610711  301990 cri.go:89] found id: "571d698a41a0bf933525b4655374feb95afed1edb2640617ab7511cce65f0776"
	I1102 13:15:58.610715  301990 cri.go:89] found id: "b05b32f995002607af838c0a5ffed270958eaf8c7f841b88122803f35d8d2015"
	I1102 13:15:58.610718  301990 cri.go:89] found id: "ece119ee391be38c1a4f223d48708f601e4910a7734c54cbe59f4c38812974b5"
	I1102 13:15:58.610721  301990 cri.go:89] found id: "7d130b18d8ef12edee3e0d7b593a71e0c4b5690b982edfbbf83860e1b5d40c73"
	I1102 13:15:58.610724  301990 cri.go:89] found id: "01cc86f91cc933f1117d93925d4304fd9b0729b04f70bdfda8a3027baef7c8e9"
	I1102 13:15:58.610729  301990 cri.go:89] found id: "7c311915f4fbc67516c8e9c0534f2b294964f9597b308ef3f1372ad8d0e1b2d5"
	I1102 13:15:58.610732  301990 cri.go:89] found id: "2d7e91ed3fc10a735909e92c3d70b5422345ba649e0f465bf27dbb923af7877c"
	I1102 13:15:58.610736  301990 cri.go:89] found id: "b8f72f36b8b681e6188a6ae20fbb9399b5a1bba3a9e3fa05f0101b5f7bd14aac"
	I1102 13:15:58.610739  301990 cri.go:89] found id: "7c3129e8902e2ba546ec94fea95b907a80a88b9f19819ccc547d8e7cd7ddae43"
	I1102 13:15:58.610742  301990 cri.go:89] found id: "ba2b8cd401ace9132335713de0f6619fc89d02ed1a60281902f918001c3a9bc6"
	I1102 13:15:58.610748  301990 cri.go:89] found id: "ae6a81713fca42870850a9a5e0a86e40858cbf49ccdf8f4b701bb7c58d5b250d"
	I1102 13:15:58.610751  301990 cri.go:89] found id: "e520da42d44eee8e7e351ea85bd1e8a1fec19b3c33ded4f2a1188baef7b927e3"
	I1102 13:15:58.610755  301990 cri.go:89] found id: "47bfba99e6f299e3b3448bc8864faaedc77b8f94e548ef086dc4f5981ae0360a"
	I1102 13:15:58.610758  301990 cri.go:89] found id: ""
	I1102 13:15:58.610816  301990 ssh_runner.go:195] Run: sudo runc list -f json
	I1102 13:15:58.626450  301990 out.go:203] 
	W1102 13:15:58.629530  301990 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T13:15:58Z" level=error msg="open /run/runc: no such file or directory"
	
	W1102 13:15:58.629566  301990 out.go:285] * 
	W1102 13:15:59.043890  301990 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1102 13:15:59.046951  301990 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-arm64 -p addons-230560 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.72s)
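
Note that the Volcano test body was skipped outright ("crio not supported"); the FAIL comes from the cleanup step, which still runs `addons disable volcano` and trips the paused-state check described above. The `crictl ps` query in the same log succeeds and lists the kube-system containers, so the CRI runtime itself is healthy; only the follow-up `runc list` probe fails. A hedged sketch of one possible workaround, falling back to crictl when the runc state directory is absent (hypothetical helper, not minikube's code):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// listContainers prefers runc but falls back to querying the CRI
	// runtime directly when /run/runc is missing, as on this node.
	func listContainers() ([]byte, error) {
		if _, err := os.Stat("/run/runc"); err == nil {
			return exec.Command("sudo", "runc", "list", "-f", "json").Output()
		}
		// This is the call that succeeded in the log above.
		return exec.Command("sudo", "crictl", "ps", "-a", "--quiet").Output()
	}

	func main() {
		out, err := listContainers()
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Printf("%s", out)
	}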

TestAddons/parallel/Registry (15.43s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 3.29561ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-qlm8d" [d973110c-93dd-4878-bcf2-c23a761ada84] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003164814s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-gk6xb" [bd4d6b0b-09b4-4d00-8a1f-01347f478af8] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003960865s
addons_test.go:392: (dbg) Run:  kubectl --context addons-230560 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-230560 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-230560 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.909761649s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-arm64 -p addons-230560 ip
2025/11/02 13:16:23 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-230560 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-230560 addons disable registry --alsologtostderr -v=1: exit status 11 (269.720789ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1102 13:16:23.533592  302930 out.go:360] Setting OutFile to fd 1 ...
	I1102 13:16:23.534375  302930 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:16:23.534391  302930 out.go:374] Setting ErrFile to fd 2...
	I1102 13:16:23.534398  302930 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:16:23.534735  302930 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-293314/.minikube/bin
	I1102 13:16:23.535128  302930 mustload.go:66] Loading cluster: addons-230560
	I1102 13:16:23.535547  302930 config.go:182] Loaded profile config "addons-230560": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:16:23.535567  302930 addons.go:607] checking whether the cluster is paused
	I1102 13:16:23.535743  302930 config.go:182] Loaded profile config "addons-230560": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:16:23.535761  302930 host.go:66] Checking if "addons-230560" exists ...
	I1102 13:16:23.536302  302930 cli_runner.go:164] Run: docker container inspect addons-230560 --format={{.State.Status}}
	I1102 13:16:23.553516  302930 ssh_runner.go:195] Run: systemctl --version
	I1102 13:16:23.553583  302930 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-230560
	I1102 13:16:23.571400  302930 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/addons-230560/id_rsa Username:docker}
	I1102 13:16:23.678243  302930 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1102 13:16:23.678347  302930 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1102 13:16:23.714452  302930 cri.go:89] found id: "59d3d49e880a7e09fb4d9be850df44733d6d5116185f5d62db1b5c126b574e0b"
	I1102 13:16:23.714475  302930 cri.go:89] found id: "495997c964878856129fa01c98380dbe27e0d3e3399d552d965363043c0ed285"
	I1102 13:16:23.714481  302930 cri.go:89] found id: "92ee410347c1fecfc99fa6c734d7ea23c7a537dc02964ee119f8cc717fcef3e2"
	I1102 13:16:23.714485  302930 cri.go:89] found id: "14fe9bc0e4e3fca54005005e2faed708854fa4e45837404cf9bd640d6b5e2de6"
	I1102 13:16:23.714488  302930 cri.go:89] found id: "849382b87b03aa7df7b3bd0d7677466f19027eeb542e35e25286f1e8249c940e"
	I1102 13:16:23.714492  302930 cri.go:89] found id: "de7641522a90557a5bf20f6e7fc608045762d4951eef39028dd344fa1ec0e246"
	I1102 13:16:23.714495  302930 cri.go:89] found id: "ce226e80e176fd107a1fd4e99d0423900d376d659984557fa242d51fe29175f6"
	I1102 13:16:23.714500  302930 cri.go:89] found id: "43495555e2c69ab9b146d21dd528f268dcc6b5277bef46a2cdd8aac98ed01981"
	I1102 13:16:23.714503  302930 cri.go:89] found id: "23d26c5efd413a919fa01dc11c652b236e497eb2943a1a1cfaf21109a227fdf8"
	I1102 13:16:23.714515  302930 cri.go:89] found id: "f4000d22ba555b95620554ea649b6b0e65ff2c8de55597628a09a4936558b721"
	I1102 13:16:23.714521  302930 cri.go:89] found id: "571d698a41a0bf933525b4655374feb95afed1edb2640617ab7511cce65f0776"
	I1102 13:16:23.714524  302930 cri.go:89] found id: "b05b32f995002607af838c0a5ffed270958eaf8c7f841b88122803f35d8d2015"
	I1102 13:16:23.714527  302930 cri.go:89] found id: "ece119ee391be38c1a4f223d48708f601e4910a7734c54cbe59f4c38812974b5"
	I1102 13:16:23.714531  302930 cri.go:89] found id: "7d130b18d8ef12edee3e0d7b593a71e0c4b5690b982edfbbf83860e1b5d40c73"
	I1102 13:16:23.714534  302930 cri.go:89] found id: "01cc86f91cc933f1117d93925d4304fd9b0729b04f70bdfda8a3027baef7c8e9"
	I1102 13:16:23.714544  302930 cri.go:89] found id: "7c311915f4fbc67516c8e9c0534f2b294964f9597b308ef3f1372ad8d0e1b2d5"
	I1102 13:16:23.714552  302930 cri.go:89] found id: "2d7e91ed3fc10a735909e92c3d70b5422345ba649e0f465bf27dbb923af7877c"
	I1102 13:16:23.714558  302930 cri.go:89] found id: "b8f72f36b8b681e6188a6ae20fbb9399b5a1bba3a9e3fa05f0101b5f7bd14aac"
	I1102 13:16:23.714561  302930 cri.go:89] found id: "7c3129e8902e2ba546ec94fea95b907a80a88b9f19819ccc547d8e7cd7ddae43"
	I1102 13:16:23.714565  302930 cri.go:89] found id: "ba2b8cd401ace9132335713de0f6619fc89d02ed1a60281902f918001c3a9bc6"
	I1102 13:16:23.714569  302930 cri.go:89] found id: "ae6a81713fca42870850a9a5e0a86e40858cbf49ccdf8f4b701bb7c58d5b250d"
	I1102 13:16:23.714579  302930 cri.go:89] found id: "e520da42d44eee8e7e351ea85bd1e8a1fec19b3c33ded4f2a1188baef7b927e3"
	I1102 13:16:23.714582  302930 cri.go:89] found id: "47bfba99e6f299e3b3448bc8864faaedc77b8f94e548ef086dc4f5981ae0360a"
	I1102 13:16:23.714585  302930 cri.go:89] found id: ""
	I1102 13:16:23.714677  302930 ssh_runner.go:195] Run: sudo runc list -f json
	I1102 13:16:23.730121  302930 out.go:203] 
	W1102 13:16:23.733065  302930 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T13:16:23Z" level=error msg="open /run/runc: no such file or directory"
	
	W1102 13:16:23.733090  302930 out.go:285] * 
	W1102 13:16:23.739848  302930 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1102 13:16:23.742889  302930 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-arm64 -p addons-230560 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (15.43s)
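
The Registry test's own functional checks all passed: both pods became healthy, the in-cluster `wget --spider http://registry.kube-system.svc.cluster.local` succeeded, and the node IP answered on port 5000. Only the shared disable step failed. An out-of-cluster equivalent of that reachability probe, as a hedged Go sketch (IP and port taken from the DEBUG line above; run from the CI host):

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		// Equivalent of the log's "GET http://192.168.49.2:5000": a HEAD
		// request against the registry port published on the node IP.
		client := &http.Client{Timeout: 5 * time.Second}
		resp, err := client.Head("http://192.168.49.2:5000")
		if err != nil {
			fmt.Println("registry unreachable:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("registry answered:", resp.Status)
	}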

TestAddons/parallel/RegistryCreds (0.51s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.817364ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-230560
addons_test.go:332: (dbg) Run:  kubectl --context addons-230560 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-230560 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-230560 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (267.229874ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1102 13:17:06.740577  304112 out.go:360] Setting OutFile to fd 1 ...
	I1102 13:17:06.741588  304112 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:17:06.741627  304112 out.go:374] Setting ErrFile to fd 2...
	I1102 13:17:06.741646  304112 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:17:06.741972  304112 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-293314/.minikube/bin
	I1102 13:17:06.742362  304112 mustload.go:66] Loading cluster: addons-230560
	I1102 13:17:06.742851  304112 config.go:182] Loaded profile config "addons-230560": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:17:06.742895  304112 addons.go:607] checking whether the cluster is paused
	I1102 13:17:06.743048  304112 config.go:182] Loaded profile config "addons-230560": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:17:06.743080  304112 host.go:66] Checking if "addons-230560" exists ...
	I1102 13:17:06.743582  304112 cli_runner.go:164] Run: docker container inspect addons-230560 --format={{.State.Status}}
	I1102 13:17:06.760960  304112 ssh_runner.go:195] Run: systemctl --version
	I1102 13:17:06.761021  304112 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-230560
	I1102 13:17:06.779349  304112 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/addons-230560/id_rsa Username:docker}
	I1102 13:17:06.885401  304112 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1102 13:17:06.885486  304112 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1102 13:17:06.917457  304112 cri.go:89] found id: "59d3d49e880a7e09fb4d9be850df44733d6d5116185f5d62db1b5c126b574e0b"
	I1102 13:17:06.917483  304112 cri.go:89] found id: "495997c964878856129fa01c98380dbe27e0d3e3399d552d965363043c0ed285"
	I1102 13:17:06.917498  304112 cri.go:89] found id: "92ee410347c1fecfc99fa6c734d7ea23c7a537dc02964ee119f8cc717fcef3e2"
	I1102 13:17:06.917503  304112 cri.go:89] found id: "14fe9bc0e4e3fca54005005e2faed708854fa4e45837404cf9bd640d6b5e2de6"
	I1102 13:17:06.917506  304112 cri.go:89] found id: "849382b87b03aa7df7b3bd0d7677466f19027eeb542e35e25286f1e8249c940e"
	I1102 13:17:06.917509  304112 cri.go:89] found id: "de7641522a90557a5bf20f6e7fc608045762d4951eef39028dd344fa1ec0e246"
	I1102 13:17:06.917512  304112 cri.go:89] found id: "ce226e80e176fd107a1fd4e99d0423900d376d659984557fa242d51fe29175f6"
	I1102 13:17:06.917515  304112 cri.go:89] found id: "43495555e2c69ab9b146d21dd528f268dcc6b5277bef46a2cdd8aac98ed01981"
	I1102 13:17:06.917518  304112 cri.go:89] found id: "23d26c5efd413a919fa01dc11c652b236e497eb2943a1a1cfaf21109a227fdf8"
	I1102 13:17:06.917525  304112 cri.go:89] found id: "f4000d22ba555b95620554ea649b6b0e65ff2c8de55597628a09a4936558b721"
	I1102 13:17:06.917529  304112 cri.go:89] found id: "571d698a41a0bf933525b4655374feb95afed1edb2640617ab7511cce65f0776"
	I1102 13:17:06.917532  304112 cri.go:89] found id: "b05b32f995002607af838c0a5ffed270958eaf8c7f841b88122803f35d8d2015"
	I1102 13:17:06.917536  304112 cri.go:89] found id: "ece119ee391be38c1a4f223d48708f601e4910a7734c54cbe59f4c38812974b5"
	I1102 13:17:06.917539  304112 cri.go:89] found id: "7d130b18d8ef12edee3e0d7b593a71e0c4b5690b982edfbbf83860e1b5d40c73"
	I1102 13:17:06.917542  304112 cri.go:89] found id: "01cc86f91cc933f1117d93925d4304fd9b0729b04f70bdfda8a3027baef7c8e9"
	I1102 13:17:06.917548  304112 cri.go:89] found id: "7c311915f4fbc67516c8e9c0534f2b294964f9597b308ef3f1372ad8d0e1b2d5"
	I1102 13:17:06.917557  304112 cri.go:89] found id: "2d7e91ed3fc10a735909e92c3d70b5422345ba649e0f465bf27dbb923af7877c"
	I1102 13:17:06.917562  304112 cri.go:89] found id: "b8f72f36b8b681e6188a6ae20fbb9399b5a1bba3a9e3fa05f0101b5f7bd14aac"
	I1102 13:17:06.917572  304112 cri.go:89] found id: "7c3129e8902e2ba546ec94fea95b907a80a88b9f19819ccc547d8e7cd7ddae43"
	I1102 13:17:06.917577  304112 cri.go:89] found id: "ba2b8cd401ace9132335713de0f6619fc89d02ed1a60281902f918001c3a9bc6"
	I1102 13:17:06.917583  304112 cri.go:89] found id: "ae6a81713fca42870850a9a5e0a86e40858cbf49ccdf8f4b701bb7c58d5b250d"
	I1102 13:17:06.917589  304112 cri.go:89] found id: "e520da42d44eee8e7e351ea85bd1e8a1fec19b3c33ded4f2a1188baef7b927e3"
	I1102 13:17:06.917593  304112 cri.go:89] found id: "47bfba99e6f299e3b3448bc8864faaedc77b8f94e548ef086dc4f5981ae0360a"
	I1102 13:17:06.917595  304112 cri.go:89] found id: ""
	I1102 13:17:06.917663  304112 ssh_runner.go:195] Run: sudo runc list -f json
	I1102 13:17:06.933717  304112 out.go:203] 
	W1102 13:17:06.936754  304112 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T13:17:06Z" level=error msg="open /run/runc: no such file or directory"
	
	W1102 13:17:06.936784  304112 out.go:285] * 
	W1102 13:17:06.943338  304112 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1102 13:17:06.946320  304112 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-arm64 -p addons-230560 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.51s)

TestAddons/parallel/Ingress (145.82s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-230560 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-230560 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-230560 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [261a2638-4b42-44ed-a4f9-e7c482806dd6] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [261a2638-4b42-44ed-a4f9-e7c482806dd6] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.007377143s
I1102 13:16:45.127768  295174 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-230560 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-230560 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.765741782s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-230560 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-230560 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
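
The curl above failed with ssh exit status 28, which is curl's "operation timed out" code propagated through `minikube ssh`: the request to 127.0.0.1 with `Host: nginx.example.com` got no response for over two minutes, so the ingress controller never served the nginx backend. An equivalent probe from outside the node, as a hedged Go sketch (node IP and hostname are taken from the log; the 30-second timeout is arbitrary):

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		// Replicates: curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'
		// but run from the CI host against the node IP from "minikube ip".
		req, err := http.NewRequest("GET", "http://192.168.49.2/", nil)
		if err != nil {
			panic(err)
		}
		req.Host = "nginx.example.com" // selects the nginx Ingress rule
		client := &http.Client{Timeout: 30 * time.Second}
		resp, err := client.Do(req)
		if err != nil {
			fmt.Println("ingress probe failed:", err) // analogous to curl exit 28
			return
		}
		defer resp.Body.Close()
		fmt.Println("ingress answered:", resp.Status)
	}
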
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-230560
helpers_test.go:243: (dbg) docker inspect addons-230560:

-- stdout --
	[
	    {
	        "Id": "6c103036ac5bf712bbe6e5c6b8c4fb3f5a69f6a2461bc077906d5c7d591f5293",
	        "Created": "2025-11-02T13:13:28.928338812Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 296345,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-02T13:13:28.996874591Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/6c103036ac5bf712bbe6e5c6b8c4fb3f5a69f6a2461bc077906d5c7d591f5293/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6c103036ac5bf712bbe6e5c6b8c4fb3f5a69f6a2461bc077906d5c7d591f5293/hostname",
	        "HostsPath": "/var/lib/docker/containers/6c103036ac5bf712bbe6e5c6b8c4fb3f5a69f6a2461bc077906d5c7d591f5293/hosts",
	        "LogPath": "/var/lib/docker/containers/6c103036ac5bf712bbe6e5c6b8c4fb3f5a69f6a2461bc077906d5c7d591f5293/6c103036ac5bf712bbe6e5c6b8c4fb3f5a69f6a2461bc077906d5c7d591f5293-json.log",
	        "Name": "/addons-230560",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "addons-230560:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-230560",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6c103036ac5bf712bbe6e5c6b8c4fb3f5a69f6a2461bc077906d5c7d591f5293",
	                "LowerDir": "/var/lib/docker/overlay2/3f0d7197467fa981a71ed5b5652af8516181f2a9adc7a743f5cf92585166f8e4-init/diff:/var/lib/docker/overlay2/53058afe6acd7391f639f551e07f446f6a39a991b699aea18507cb21a1652e97/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3f0d7197467fa981a71ed5b5652af8516181f2a9adc7a743f5cf92585166f8e4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3f0d7197467fa981a71ed5b5652af8516181f2a9adc7a743f5cf92585166f8e4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3f0d7197467fa981a71ed5b5652af8516181f2a9adc7a743f5cf92585166f8e4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-230560",
	                "Source": "/var/lib/docker/volumes/addons-230560/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-230560",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-230560",
	                "name.minikube.sigs.k8s.io": "addons-230560",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2a3120f6f61a03308bdece2c803b2ed8cddf73c7699d02b8b285cae1810ef36c",
	            "SandboxKey": "/var/run/docker/netns/2a3120f6f61a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33138"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-230560": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "0e:27:00:b7:cb:c9",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b844984c8592730a180c752ce42b56b370efb9795cb13a0939b690ade86b755c",
	                    "EndpointID": "982e9f0f7d1378569e6087f70aa7f56b7168809b3a8086307577d4d5af627830",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-230560",
	                        "6c103036ac5b"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
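
The inspect output confirms the node container is running, unpaused, and has its service ports published on 127.0.0.1 (ssh on 33138, matching the sshutil lines in the disable logs above). The host-port lookup that the logs perform with a docker CLI template can be reproduced from Go, sketched here under the assumption that the docker CLI is on PATH:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Mirrors the command seen throughout the logs:
		//   docker container inspect -f
		//     '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'
		//     addons-230560
		tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, "addons-230560").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Println("ssh host port:", strings.TrimSpace(string(out))) // expect 33138
	}
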
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-230560 -n addons-230560
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-230560 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-230560 logs -n 25: (1.577444641s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-docker-513487                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-513487 │ jenkins │ v1.37.0 │ 02 Nov 25 13:13 UTC │ 02 Nov 25 13:13 UTC │
	│ start   │ --download-only -p binary-mirror-605864 --alsologtostderr --binary-mirror http://127.0.0.1:39709 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-605864   │ jenkins │ v1.37.0 │ 02 Nov 25 13:13 UTC │                     │
	│ delete  │ -p binary-mirror-605864                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-605864   │ jenkins │ v1.37.0 │ 02 Nov 25 13:13 UTC │ 02 Nov 25 13:13 UTC │
	│ addons  │ disable dashboard -p addons-230560                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-230560          │ jenkins │ v1.37.0 │ 02 Nov 25 13:13 UTC │                     │
	│ addons  │ enable dashboard -p addons-230560                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-230560          │ jenkins │ v1.37.0 │ 02 Nov 25 13:13 UTC │                     │
	│ start   │ -p addons-230560 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-230560          │ jenkins │ v1.37.0 │ 02 Nov 25 13:13 UTC │ 02 Nov 25 13:15 UTC │
	│ addons  │ addons-230560 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-230560          │ jenkins │ v1.37.0 │ 02 Nov 25 13:15 UTC │                     │
	│ addons  │ addons-230560 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-230560          │ jenkins │ v1.37.0 │ 02 Nov 25 13:16 UTC │                     │
	│ addons  │ enable headlamp -p addons-230560 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-230560          │ jenkins │ v1.37.0 │ 02 Nov 25 13:16 UTC │                     │
	│ addons  │ addons-230560 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-230560          │ jenkins │ v1.37.0 │ 02 Nov 25 13:16 UTC │                     │
	│ ip      │ addons-230560 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-230560          │ jenkins │ v1.37.0 │ 02 Nov 25 13:16 UTC │ 02 Nov 25 13:16 UTC │
	│ addons  │ addons-230560 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-230560          │ jenkins │ v1.37.0 │ 02 Nov 25 13:16 UTC │                     │
	│ addons  │ addons-230560 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-230560          │ jenkins │ v1.37.0 │ 02 Nov 25 13:16 UTC │                     │
	│ addons  │ addons-230560 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-230560          │ jenkins │ v1.37.0 │ 02 Nov 25 13:16 UTC │                     │
	│ ssh     │ addons-230560 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-230560          │ jenkins │ v1.37.0 │ 02 Nov 25 13:16 UTC │                     │
	│ addons  │ addons-230560 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-230560          │ jenkins │ v1.37.0 │ 02 Nov 25 13:17 UTC │                     │
	│ addons  │ addons-230560 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-230560          │ jenkins │ v1.37.0 │ 02 Nov 25 13:17 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-230560                                                                                                                                                                                                                                                                                                                                                                                           │ addons-230560          │ jenkins │ v1.37.0 │ 02 Nov 25 13:17 UTC │ 02 Nov 25 13:17 UTC │
	│ addons  │ addons-230560 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-230560          │ jenkins │ v1.37.0 │ 02 Nov 25 13:17 UTC │                     │
	│ addons  │ addons-230560 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-230560          │ jenkins │ v1.37.0 │ 02 Nov 25 13:17 UTC │                     │
	│ addons  │ addons-230560 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-230560          │ jenkins │ v1.37.0 │ 02 Nov 25 13:17 UTC │                     │
	│ ssh     │ addons-230560 ssh cat /opt/local-path-provisioner/pvc-1b5bd828-581e-49b1-bd61-f61335a71fd0_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-230560          │ jenkins │ v1.37.0 │ 02 Nov 25 13:17 UTC │ 02 Nov 25 13:17 UTC │
	│ addons  │ addons-230560 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-230560          │ jenkins │ v1.37.0 │ 02 Nov 25 13:17 UTC │                     │
	│ addons  │ addons-230560 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-230560          │ jenkins │ v1.37.0 │ 02 Nov 25 13:17 UTC │                     │
	│ ip      │ addons-230560 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-230560          │ jenkins │ v1.37.0 │ 02 Nov 25 13:18 UTC │ 02 Nov 25 13:18 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/02 13:13:04
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
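
The header above fixes the layout of every entry that follows: a severity letter (I/W/E/F), the date as mmdd, a wall-clock timestamp with microseconds, the emitting process id, the source file and line, and the message. A minimal sketch of a parser for that layout, assuming Go's standard regexp package (the names and capture groups are illustrative, not from minikube):

package main

import (
	"fmt"
	"regexp"
)

// glogLine matches "[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg".
var glogLine = regexp.MustCompile(
	`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([\w.]+:\d+)\] (.*)$`)

func main() {
	line := "I1102 13:13:04.994668  295944 out.go:360] Setting OutFile to fd 1 ..."
	if m := glogLine.FindStringSubmatch(line); m != nil {
		fmt.Printf("severity=%s date=%s time=%s pid=%s src=%s msg=%q\n",
			m[1], m[2], m[3], m[4], m[5], m[6])
	}
}
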
	I1102 13:13:04.994668  295944 out.go:360] Setting OutFile to fd 1 ...
	I1102 13:13:04.994800  295944 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:13:04.994812  295944 out.go:374] Setting ErrFile to fd 2...
	I1102 13:13:04.994817  295944 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:13:04.995051  295944 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-293314/.minikube/bin
	I1102 13:13:04.995504  295944 out.go:368] Setting JSON to false
	I1102 13:13:04.996329  295944 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":6937,"bootTime":1762082248,"procs":151,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1102 13:13:04.996395  295944 start.go:143] virtualization:  
	I1102 13:13:04.999928  295944 out.go:179] * [addons-230560] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1102 13:13:05.004092  295944 out.go:179]   - MINIKUBE_LOCATION=21808
	I1102 13:13:05.004239  295944 notify.go:221] Checking for updates...
	I1102 13:13:05.010504  295944 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1102 13:13:05.013701  295944 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-293314/kubeconfig
	I1102 13:13:05.016585  295944 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-293314/.minikube
	I1102 13:13:05.019439  295944 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1102 13:13:05.022411  295944 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1102 13:13:05.025577  295944 driver.go:422] Setting default libvirt URI to qemu:///system
	I1102 13:13:05.050223  295944 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1102 13:13:05.050349  295944 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 13:13:05.121797  295944 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-11-02 13:13:05.111475408 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
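
Each `cli_runner.go:164] Run: docker system info --format "{{json .}}"` step shells out to the Docker CLI and decodes the JSON blob logged above. A minimal sketch of that pattern, assuming os/exec; the struct here is a deliberately trimmed, illustrative subset of the fields visible in the dump:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// dockerInfo is an illustrative subset of `docker system info` output.
type dockerInfo struct {
	NCPU            int    `json:"NCPU"`
	MemTotal        int64  `json:"MemTotal"`
	OperatingSystem string `json:"OperatingSystem"`
	CgroupDriver    string `json:"CgroupDriver"`
}

func main() {
	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
	if err != nil {
		panic(err)
	}
	var info dockerInfo
	if err := json.Unmarshal(out, &info); err != nil {
		panic(err)
	}
	fmt.Printf("%d CPUs, %d bytes RAM, %s, cgroup driver %s\n",
		info.NCPU, info.MemTotal, info.OperatingSystem, info.CgroupDriver)
}
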
	I1102 13:13:05.121925  295944 docker.go:319] overlay module found
	I1102 13:13:05.125028  295944 out.go:179] * Using the docker driver based on user configuration
	I1102 13:13:05.128090  295944 start.go:309] selected driver: docker
	I1102 13:13:05.128110  295944 start.go:930] validating driver "docker" against <nil>
	I1102 13:13:05.128124  295944 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1102 13:13:05.128853  295944 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 13:13:05.183169  295944 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-11-02 13:13:05.174148004 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1102 13:13:05.183319  295944 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1102 13:13:05.183553  295944 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1102 13:13:05.186377  295944 out.go:179] * Using Docker driver with root privileges
	I1102 13:13:05.189165  295944 cni.go:84] Creating CNI manager for ""
	I1102 13:13:05.189242  295944 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1102 13:13:05.189262  295944 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1102 13:13:05.189355  295944 start.go:353] cluster config:
	{Name:addons-230560 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-230560 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1102 13:13:05.192432  295944 out.go:179] * Starting "addons-230560" primary control-plane node in "addons-230560" cluster
	I1102 13:13:05.195271  295944 cache.go:124] Beginning downloading kic base image for docker with crio
	I1102 13:13:05.198264  295944 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1102 13:13:05.201177  295944 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1102 13:13:05.201238  295944 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21808-293314/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1102 13:13:05.201252  295944 cache.go:59] Caching tarball of preloaded images
	I1102 13:13:05.201257  295944 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1102 13:13:05.201348  295944 preload.go:233] Found /home/jenkins/minikube-integration/21808-293314/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1102 13:13:05.201359  295944 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1102 13:13:05.201721  295944 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/addons-230560/config.json ...
	I1102 13:13:05.201752  295944 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/addons-230560/config.json: {Name:mk912b3941452a6f2be80f1ba9594fe174cc5a86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
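
The `lock.go:35] WriteFile acquiring` entries reflect a guarded write: the profile config is only written once an exclusive lock is held, retried every 500ms up to a 1m timeout (the Delay and Timeout fields in the dump). A minimal sketch of that shape, using simple O_EXCL lockfile semantics rather than minikube's actual lock implementation; the helper name is invented for illustration:

package main

import (
	"errors"
	"os"
	"time"
)

// writeFileLocked writes data to path after taking <path>.lock,
// polling every delay until timeout, mirroring the Delay:500ms
// Timeout:1m0s spec seen in the log.
func writeFileLocked(path string, data []byte, delay, timeout time.Duration) error {
	lock := path + ".lock"
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(lock, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			defer os.Remove(lock)
			return os.WriteFile(path, data, 0o644)
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for " + lock)
		}
		time.Sleep(delay)
	}
}

func main() {
	if err := writeFileLocked("config.json", []byte("{}"), 500*time.Millisecond, time.Minute); err != nil {
		panic(err)
	}
}
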
	I1102 13:13:05.216560  295944 cache.go:153] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1102 13:13:05.216686  295944 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory
	I1102 13:13:05.216704  295944 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory, skipping pull
	I1102 13:13:05.216709  295944 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in cache, skipping pull
	I1102 13:13:05.216717  295944 cache.go:156] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 as a tarball
	I1102 13:13:05.216722  295944 cache.go:166] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from local cache
	I1102 13:13:22.977326  295944 cache.go:168] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from cached tarball
	I1102 13:13:22.977389  295944 cache.go:233] Successfully downloaded all kic artifacts
	I1102 13:13:22.977420  295944 start.go:360] acquireMachinesLock for addons-230560: {Name:mkc4332b46cf87e7f10ba6c63852797379fabd0b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1102 13:13:22.977553  295944 start.go:364] duration metric: took 114.627µs to acquireMachinesLock for "addons-230560"
	I1102 13:13:22.977580  295944 start.go:93] Provisioning new machine with config: &{Name:addons-230560 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-230560 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1102 13:13:22.977648  295944 start.go:125] createHost starting for "" (driver="docker")
	I1102 13:13:22.981134  295944 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1102 13:13:22.981388  295944 start.go:159] libmachine.API.Create for "addons-230560" (driver="docker")
	I1102 13:13:22.981421  295944 client.go:173] LocalClient.Create starting
	I1102 13:13:22.981542  295944 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem
	I1102 13:13:23.174058  295944 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/cert.pem
	I1102 13:13:23.328791  295944 cli_runner.go:164] Run: docker network inspect addons-230560 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1102 13:13:23.343906  295944 cli_runner.go:211] docker network inspect addons-230560 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1102 13:13:23.343998  295944 network_create.go:284] running [docker network inspect addons-230560] to gather additional debugging logs...
	I1102 13:13:23.344019  295944 cli_runner.go:164] Run: docker network inspect addons-230560
	W1102 13:13:23.358727  295944 cli_runner.go:211] docker network inspect addons-230560 returned with exit code 1
	I1102 13:13:23.358759  295944 network_create.go:287] error running [docker network inspect addons-230560]: docker network inspect addons-230560: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-230560 not found
	I1102 13:13:23.358773  295944 network_create.go:289] output of [docker network inspect addons-230560]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-230560 not found
	
	** /stderr **
	I1102 13:13:23.358862  295944 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1102 13:13:23.374177  295944 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a8e830}
	I1102 13:13:23.374216  295944 network_create.go:124] attempt to create docker network addons-230560 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1102 13:13:23.374277  295944 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-230560 addons-230560
	I1102 13:13:23.434220  295944 network_create.go:108] docker network addons-230560 192.168.49.0/24 created
	I1102 13:13:23.434255  295944 kic.go:121] calculated static IP "192.168.49.2" for the "addons-230560" container
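
network.go picks the first free private /24 (here 192.168.49.0/24), and kic.go then derives the gateway as the first host address (.1) and the node's static IP as the second (.2), matching the ClientMin/Gateway fields in the subnet dump above. A minimal sketch of that derivation, assuming Go's net/netip:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	prefix := netip.MustParsePrefix("192.168.49.0/24")
	gateway := prefix.Addr().Next() // 192.168.49.1, first host address
	nodeIP := gateway.Next()        // 192.168.49.2, first client address
	fmt.Println("gateway:", gateway, "node:", nodeIP)
}
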
	I1102 13:13:23.434340  295944 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1102 13:13:23.449334  295944 cli_runner.go:164] Run: docker volume create addons-230560 --label name.minikube.sigs.k8s.io=addons-230560 --label created_by.minikube.sigs.k8s.io=true
	I1102 13:13:23.466746  295944 oci.go:103] Successfully created a docker volume addons-230560
	I1102 13:13:23.466839  295944 cli_runner.go:164] Run: docker run --rm --name addons-230560-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-230560 --entrypoint /usr/bin/test -v addons-230560:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1102 13:13:24.468094  295944 cli_runner.go:217] Completed: docker run --rm --name addons-230560-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-230560 --entrypoint /usr/bin/test -v addons-230560:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib: (1.001202318s)
	I1102 13:13:24.468125  295944 oci.go:107] Successfully prepared a docker volume addons-230560
	I1102 13:13:24.468154  295944 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1102 13:13:24.468172  295944 kic.go:194] Starting extracting preloaded images to volume ...
	I1102 13:13:24.468248  295944 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21808-293314/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-230560:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1102 13:13:28.862281  295944 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21808-293314/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-230560:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.393993212s)
	I1102 13:13:28.862314  295944 kic.go:203] duration metric: took 4.394137869s to extract preloaded images to volume ...
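
The `Completed: ... (4.393993212s)` lines come from timing each shelled-out command and appending the elapsed time when a run is slow. A minimal sketch of that duration-metric pattern; the helper name and the one-second threshold are illustrative, not minikube's actual values:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// run executes a command and logs a duration metric when it is slow,
// loosely mirroring cli_runner's "Completed: ... (4.39s)" output.
func run(name string, args ...string) error {
	start := time.Now()
	err := exec.Command(name, args...).Run()
	if d := time.Since(start); d > time.Second {
		fmt.Printf("Completed: %s %v: (%s)\n", name, args, d)
	}
	return err
}

func main() {
	_ = run("sleep", "2")
}
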
	W1102 13:13:28.862453  295944 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1102 13:13:28.862575  295944 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1102 13:13:28.913968  295944 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-230560 --name addons-230560 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-230560 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-230560 --network addons-230560 --ip 192.168.49.2 --volume addons-230560:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1102 13:13:29.220826  295944 cli_runner.go:164] Run: docker container inspect addons-230560 --format={{.State.Running}}
	I1102 13:13:29.245865  295944 cli_runner.go:164] Run: docker container inspect addons-230560 --format={{.State.Status}}
	I1102 13:13:29.268976  295944 cli_runner.go:164] Run: docker exec addons-230560 stat /var/lib/dpkg/alternatives/iptables
	I1102 13:13:29.315687  295944 oci.go:144] the created container "addons-230560" has a running status.
	I1102 13:13:29.315724  295944 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21808-293314/.minikube/machines/addons-230560/id_rsa...
	I1102 13:13:29.446843  295944 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21808-293314/.minikube/machines/addons-230560/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1102 13:13:29.471784  295944 cli_runner.go:164] Run: docker container inspect addons-230560 --format={{.State.Status}}
	I1102 13:13:29.489150  295944 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1102 13:13:29.489172  295944 kic_runner.go:114] Args: [docker exec --privileged addons-230560 chown docker:docker /home/docker/.ssh/authorized_keys]
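
kic.go:225 generates a fresh RSA keypair for the container, installs the public half as /home/docker/.ssh/authorized_keys (the 381-byte copy above), and chowns it to the docker user. A minimal sketch of the key-generation half, assuming golang.org/x/crypto/ssh for the authorized_keys encoding (the output file names mirror the log, nothing more):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	// Private half, PEM-encoded like id_rsa.
	priv := pem.EncodeToMemory(&pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(key),
	})
	// Public half in authorized_keys format, like id_rsa.pub.
	pub, err := ssh.NewPublicKey(&key.PublicKey)
	if err != nil {
		panic(err)
	}
	if err := os.WriteFile("id_rsa", priv, 0o600); err != nil {
		panic(err)
	}
	if err := os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0o644); err != nil {
		panic(err)
	}
}
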
	I1102 13:13:29.540422  295944 cli_runner.go:164] Run: docker container inspect addons-230560 --format={{.State.Status}}
	I1102 13:13:29.560267  295944 machine.go:94] provisionDockerMachine start ...
	I1102 13:13:29.560356  295944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-230560
	I1102 13:13:29.590961  295944 main.go:143] libmachine: Using SSH client type: native
	I1102 13:13:29.591288  295944 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1102 13:13:29.591305  295944 main.go:143] libmachine: About to run SSH command:
	hostname
	I1102 13:13:29.591873  295944 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33282->127.0.0.1:33138: read: connection reset by peer
	I1102 13:13:32.742115  295944 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-230560
	
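The `Error dialing TCP ... connection reset by peer` line followed three seconds later by a successful `hostname` run is the expected pattern: sshd inside the freshly started container is not accepting connections yet, so the client retries. A minimal sketch of such a wait loop, using plain TCP reachability as the readiness check (the helper name and intervals are illustrative):

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSSH polls addr until a TCP connection succeeds or the
// deadline passes, mirroring the retry after the initial
// "connection reset by peer" in the log.
func waitForSSH(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("ssh not reachable at %s: %w", addr, err)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForSSH("127.0.0.1:33138", time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("sshd is accepting connections")
}
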
	I1102 13:13:32.742141  295944 ubuntu.go:182] provisioning hostname "addons-230560"
	I1102 13:13:32.742214  295944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-230560
	I1102 13:13:32.758833  295944 main.go:143] libmachine: Using SSH client type: native
	I1102 13:13:32.759148  295944 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1102 13:13:32.759166  295944 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-230560 && echo "addons-230560" | sudo tee /etc/hostname
	I1102 13:13:32.917210  295944 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-230560
	
	I1102 13:13:32.917384  295944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-230560
	I1102 13:13:32.936123  295944 main.go:143] libmachine: Using SSH client type: native
	I1102 13:13:32.936441  295944 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1102 13:13:32.936463  295944 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-230560' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-230560/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-230560' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1102 13:13:33.090803  295944 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1102 13:13:33.090832  295944 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21808-293314/.minikube CaCertPath:/home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21808-293314/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21808-293314/.minikube}
	I1102 13:13:33.090853  295944 ubuntu.go:190] setting up certificates
	I1102 13:13:33.090875  295944 provision.go:84] configureAuth start
	I1102 13:13:33.090949  295944 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-230560
	I1102 13:13:33.107593  295944 provision.go:143] copyHostCerts
	I1102 13:13:33.107676  295944 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21808-293314/.minikube/key.pem (1675 bytes)
	I1102 13:13:33.107804  295944 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21808-293314/.minikube/ca.pem (1082 bytes)
	I1102 13:13:33.107865  295944 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21808-293314/.minikube/cert.pem (1123 bytes)
	I1102 13:13:33.107929  295944 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21808-293314/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca-key.pem org=jenkins.addons-230560 san=[127.0.0.1 192.168.49.2 addons-230560 localhost minikube]
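
provision.go:117 issues a server certificate carrying the SANs listed above: loopback, the node IP, and the host names. A minimal, self-contained sketch of SAN handling with crypto/x509, self-signing for brevity where minikube signs with its CA key; values are copied from the log line, everything else is illustrative:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-230560"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs as seen in the "san=[...]" log line above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
		DNSNames:    []string{"addons-230560", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pemBytes := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	if err := os.WriteFile("server.pem", pemBytes, 0o644); err != nil {
		panic(err)
	}
}
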
	I1102 13:13:33.391526  295944 provision.go:177] copyRemoteCerts
	I1102 13:13:33.391593  295944 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1102 13:13:33.391632  295944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-230560
	I1102 13:13:33.410512  295944 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/addons-230560/id_rsa Username:docker}
	I1102 13:13:33.514221  295944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1102 13:13:33.531672  295944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1102 13:13:33.548781  295944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1102 13:13:33.565499  295944 provision.go:87] duration metric: took 474.586198ms to configureAuth
	I1102 13:13:33.565524  295944 ubuntu.go:206] setting minikube options for container-runtime
	I1102 13:13:33.565711  295944 config.go:182] Loaded profile config "addons-230560": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:13:33.565811  295944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-230560
	I1102 13:13:33.583385  295944 main.go:143] libmachine: Using SSH client type: native
	I1102 13:13:33.583694  295944 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1102 13:13:33.583713  295944 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1102 13:13:33.845120  295944 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1102 13:13:33.845190  295944 machine.go:97] duration metric: took 4.284900425s to provisionDockerMachine
	I1102 13:13:33.845218  295944 client.go:176] duration metric: took 10.863790237s to LocalClient.Create
	I1102 13:13:33.845251  295944 start.go:167] duration metric: took 10.863863321s to libmachine.API.Create "addons-230560"
	I1102 13:13:33.845278  295944 start.go:293] postStartSetup for "addons-230560" (driver="docker")
	I1102 13:13:33.845304  295944 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1102 13:13:33.845391  295944 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1102 13:13:33.845497  295944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-230560
	I1102 13:13:33.862342  295944 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/addons-230560/id_rsa Username:docker}
	I1102 13:13:33.966637  295944 ssh_runner.go:195] Run: cat /etc/os-release
	I1102 13:13:33.969828  295944 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1102 13:13:33.969858  295944 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1102 13:13:33.969869  295944 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-293314/.minikube/addons for local assets ...
	I1102 13:13:33.969969  295944 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-293314/.minikube/files for local assets ...
	I1102 13:13:33.969996  295944 start.go:296] duration metric: took 124.696523ms for postStartSetup
	I1102 13:13:33.970307  295944 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-230560
	I1102 13:13:33.986123  295944 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/addons-230560/config.json ...
	I1102 13:13:33.986424  295944 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1102 13:13:33.986466  295944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-230560
	I1102 13:13:34.004630  295944 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/addons-230560/id_rsa Username:docker}
	I1102 13:13:34.107685  295944 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1102 13:13:34.112465  295944 start.go:128] duration metric: took 11.134801307s to createHost
	I1102 13:13:34.112494  295944 start.go:83] releasing machines lock for "addons-230560", held for 11.13492683s
	I1102 13:13:34.112589  295944 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-230560
	I1102 13:13:34.128830  295944 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca-key.pem (1679 bytes)
	I1102 13:13:34.128884  295944 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem (1082 bytes)
	I1102 13:13:34.128910  295944 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/cert.pem (1123 bytes)
	I1102 13:13:34.128949  295944 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/key.pem (1675 bytes)
	W1102 13:13:34.129037  295944 start.go:789] pre-probe CA setup failed: create ca cert file asset for /home/jenkins/minikube-integration/21808-293314/.minikube/ca.crt: stat: stat /home/jenkins/minikube-integration/21808-293314/.minikube/ca.crt: no such file or directory
	I1102 13:13:34.129115  295944 ssh_runner.go:195] Run: cat /version.json
	I1102 13:13:34.129160  295944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-230560
	I1102 13:13:34.129421  295944 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1102 13:13:34.129478  295944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-230560
	I1102 13:13:34.145098  295944 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/addons-230560/id_rsa Username:docker}
	I1102 13:13:34.167541  295944 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/addons-230560/id_rsa Username:docker}
	I1102 13:13:34.254090  295944 ssh_runner.go:195] Run: systemctl --version
	I1102 13:13:34.346548  295944 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1102 13:13:34.380753  295944 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1102 13:13:34.384815  295944 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1102 13:13:34.384938  295944 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1102 13:13:34.412754  295944 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1102 13:13:34.412778  295944 start.go:496] detecting cgroup driver to use...
	I1102 13:13:34.412810  295944 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1102 13:13:34.412867  295944 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1102 13:13:34.429005  295944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1102 13:13:34.440843  295944 docker.go:218] disabling cri-docker service (if available) ...
	I1102 13:13:34.440910  295944 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1102 13:13:34.458126  295944 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1102 13:13:34.476225  295944 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1102 13:13:34.596806  295944 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1102 13:13:34.720114  295944 docker.go:234] disabling docker service ...
	I1102 13:13:34.720204  295944 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1102 13:13:34.744584  295944 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1102 13:13:34.757268  295944 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1102 13:13:34.874787  295944 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1102 13:13:34.998813  295944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1102 13:13:35.019975  295944 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1102 13:13:35.033559  295944 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1102 13:13:35.033625  295944 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:13:35.042339  295944 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1102 13:13:35.042415  295944 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:13:35.050879  295944 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:13:35.059576  295944 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:13:35.068360  295944 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1102 13:13:35.076556  295944 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:13:35.085665  295944 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:13:35.099681  295944 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
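
Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with approximately the following settings; this fragment is reconstructed from the commands, not captured from the node, and other keys in the drop-in are left untouched:

pause_image = "registry.k8s.io/pause:3.10.1"
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
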
	I1102 13:13:35.108756  295944 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1102 13:13:35.116650  295944 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1102 13:13:35.124174  295944 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 13:13:35.230675  295944 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1102 13:13:35.360254  295944 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1102 13:13:35.360340  295944 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1102 13:13:35.364031  295944 start.go:564] Will wait 60s for crictl version
	I1102 13:13:35.364097  295944 ssh_runner.go:195] Run: which crictl
	I1102 13:13:35.367385  295944 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1102 13:13:35.392140  295944 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1102 13:13:35.392244  295944 ssh_runner.go:195] Run: crio --version
	I1102 13:13:35.419872  295944 ssh_runner.go:195] Run: crio --version
	I1102 13:13:35.449945  295944 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1102 13:13:35.452627  295944 cli_runner.go:164] Run: docker network inspect addons-230560 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1102 13:13:35.469847  295944 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1102 13:13:35.473587  295944 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1102 13:13:35.483169  295944 kubeadm.go:884] updating cluster {Name:addons-230560 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-230560 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1102 13:13:35.483290  295944 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1102 13:13:35.483351  295944 ssh_runner.go:195] Run: sudo crictl images --output json
	I1102 13:13:35.515968  295944 crio.go:514] all images are preloaded for cri-o runtime.
	I1102 13:13:35.515992  295944 crio.go:433] Images already preloaded, skipping extraction
	I1102 13:13:35.516049  295944 ssh_runner.go:195] Run: sudo crictl images --output json
	I1102 13:13:35.543741  295944 crio.go:514] all images are preloaded for cri-o runtime.
	I1102 13:13:35.543767  295944 cache_images.go:86] Images are preloaded, skipping loading
	I1102 13:13:35.543777  295944 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1102 13:13:35.543870  295944 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-230560 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-230560 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1102 13:13:35.543955  295944 ssh_runner.go:195] Run: crio config
	I1102 13:13:35.616095  295944 cni.go:84] Creating CNI manager for ""
	I1102 13:13:35.616119  295944 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1102 13:13:35.616140  295944 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1102 13:13:35.616163  295944 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-230560 NodeName:addons-230560 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1102 13:13:35.616298  295944 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-230560"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1102 13:13:35.616376  295944 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1102 13:13:35.624113  295944 binaries.go:44] Found k8s binaries, skipping transfer
	I1102 13:13:35.624181  295944 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1102 13:13:35.631707  295944 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1102 13:13:35.644511  295944 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1102 13:13:35.656980  295944 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1102 13:13:35.669711  295944 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1102 13:13:35.673203  295944 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1102 13:13:35.682448  295944 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 13:13:35.794739  295944 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1102 13:13:35.810927  295944 certs.go:69] Setting up /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/addons-230560 for IP: 192.168.49.2
	I1102 13:13:35.810998  295944 certs.go:195] generating shared ca certs ...
	I1102 13:13:35.811030  295944 certs.go:227] acquiring lock for ca certs: {Name:mkead50075949a3cdc798f9c0149a2bc2638cbbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:13:35.811831  295944 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21808-293314/.minikube/ca.key
	I1102 13:13:36.302297  295944 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-293314/.minikube/ca.crt ...
	I1102 13:13:36.302329  295944 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-293314/.minikube/ca.crt: {Name:mk0d00e414dd47c53e0a467755fbe9f3980454d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:13:36.303138  295944 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-293314/.minikube/ca.key ...
	I1102 13:13:36.303154  295944 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-293314/.minikube/ca.key: {Name:mkfd2b8395ee35f9350e3eb5214162e5e8ec773f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:13:36.303827  295944 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21808-293314/.minikube/proxy-client-ca.key
	I1102 13:13:36.467951  295944 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-293314/.minikube/proxy-client-ca.crt ...
	I1102 13:13:36.467981  295944 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-293314/.minikube/proxy-client-ca.crt: {Name:mkb1e8e00419a95387f597a1df78db401414322e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:13:36.468719  295944 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-293314/.minikube/proxy-client-ca.key ...
	I1102 13:13:36.468734  295944 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-293314/.minikube/proxy-client-ca.key: {Name:mk6483d473af4e26656971a0d05bcdeb600fd13c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:13:36.468824  295944 certs.go:257] generating profile certs ...
	I1102 13:13:36.468891  295944 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/addons-230560/client.key
	I1102 13:13:36.468909  295944 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/addons-230560/client.crt with IP's: []
	I1102 13:13:36.676648  295944 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/addons-230560/client.crt ...
	I1102 13:13:36.676679  295944 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/addons-230560/client.crt: {Name:mk3d43e5de14e342a7ed167171d2e94a335649bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:13:36.676856  295944 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/addons-230560/client.key ...
	I1102 13:13:36.676871  295944 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/addons-230560/client.key: {Name:mkdecd6c3a73fc89f7738ff7ba550cc6319ca8c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:13:36.676961  295944 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/addons-230560/apiserver.key.1945bd50
	I1102 13:13:36.676985  295944 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/addons-230560/apiserver.crt.1945bd50 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1102 13:13:37.320546  295944 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/addons-230560/apiserver.crt.1945bd50 ...
	I1102 13:13:37.320578  295944 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/addons-230560/apiserver.crt.1945bd50: {Name:mk46fb68b77ed9febc9dee296e4cdde2a2d9e1ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:13:37.321360  295944 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/addons-230560/apiserver.key.1945bd50 ...
	I1102 13:13:37.321378  295944 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/addons-230560/apiserver.key.1945bd50: {Name:mk472b46b24a1410d8cf5c6f3b23bd7f6805963f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:13:37.322003  295944 certs.go:382] copying /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/addons-230560/apiserver.crt.1945bd50 -> /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/addons-230560/apiserver.crt
	I1102 13:13:37.322087  295944 certs.go:386] copying /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/addons-230560/apiserver.key.1945bd50 -> /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/addons-230560/apiserver.key
	I1102 13:13:37.322140  295944 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/addons-230560/proxy-client.key
	I1102 13:13:37.322160  295944 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/addons-230560/proxy-client.crt with IP's: []
	I1102 13:13:38.129585  295944 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/addons-230560/proxy-client.crt ...
	I1102 13:13:38.129619  295944 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/addons-230560/proxy-client.crt: {Name:mkd4964bdc679505cad906bc56605d7643702dd3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:13:38.130357  295944 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/addons-230560/proxy-client.key ...
	I1102 13:13:38.130374  295944 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/addons-230560/proxy-client.key: {Name:mka8422cea7376f25883ba8d04e90808483d4653 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:13:38.131119  295944 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca-key.pem (1679 bytes)
	I1102 13:13:38.131162  295944 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem (1082 bytes)
	I1102 13:13:38.131187  295944 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/cert.pem (1123 bytes)
	I1102 13:13:38.131219  295944 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/key.pem (1675 bytes)
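
Note: the apiserver cert generated at 13:13:37 is signed for the service IP, localhost, and the node IP listed above. A sketch (OpenSSL 1.1.1+ for the -ext flag; paths as logged) to read the SANs back:

	PROFILE=/home/jenkins/minikube-integration/21808-293314/.minikube/profiles/addons-230560
	openssl x509 -in "$PROFILE/apiserver.crt" -noout -ext subjectAltName
	# expect: IP Address:10.96.0.1, IP Address:127.0.0.1, IP Address:10.0.0.1, IP Address:192.168.49.2
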
	I1102 13:13:38.131746  295944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1102 13:13:38.154430  295944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1102 13:13:38.175747  295944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1102 13:13:38.200072  295944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1102 13:13:38.222361  295944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/addons-230560/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1102 13:13:38.240117  295944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/addons-230560/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1102 13:13:38.257547  295944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/addons-230560/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1102 13:13:38.275039  295944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/addons-230560/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1102 13:13:38.292828  295944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1102 13:13:38.311234  295944 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1102 13:13:38.324546  295944 ssh_runner.go:195] Run: openssl version
	I1102 13:13:38.330750  295944 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1102 13:13:38.339292  295944 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:13:38.343329  295944 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  2 13:13 /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:13:38.343454  295944 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:13:38.384438  295944 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
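
Note: the b5213941.0 name is not arbitrary; it is the OpenSSL subject hash of minikubeCA.pem plus a ".0" collision-counter suffix, which is why the step just before it runs openssl x509 -hash. The symlink target can be derived rather than hard-coded:

	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
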
	I1102 13:13:38.392990  295944 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1102 13:13:38.396273  295944 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1102 13:13:38.396337  295944 kubeadm.go:401] StartCluster: {Name:addons-230560 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-230560 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1102 13:13:38.396411  295944 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1102 13:13:38.396471  295944 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1102 13:13:38.422511  295944 cri.go:89] found id: ""
	I1102 13:13:38.422586  295944 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1102 13:13:38.430052  295944 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1102 13:13:38.437500  295944 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1102 13:13:38.437594  295944 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1102 13:13:38.444993  295944 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1102 13:13:38.445015  295944 kubeadm.go:158] found existing configuration files:
	
	I1102 13:13:38.445093  295944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1102 13:13:38.452605  295944 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1102 13:13:38.452668  295944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1102 13:13:38.459835  295944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1102 13:13:38.467014  295944 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1102 13:13:38.467081  295944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1102 13:13:38.474123  295944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1102 13:13:38.481654  295944 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1102 13:13:38.481736  295944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1102 13:13:38.489174  295944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1102 13:13:38.496984  295944 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1102 13:13:38.497080  295944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1102 13:13:38.504683  295944 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1102 13:13:38.547902  295944 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1102 13:13:38.547967  295944 kubeadm.go:319] [preflight] Running pre-flight checks
	I1102 13:13:38.571747  295944 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1102 13:13:38.571824  295944 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1102 13:13:38.571867  295944 kubeadm.go:319] OS: Linux
	I1102 13:13:38.571919  295944 kubeadm.go:319] CGROUPS_CPU: enabled
	I1102 13:13:38.571973  295944 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1102 13:13:38.572026  295944 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1102 13:13:38.572080  295944 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1102 13:13:38.572133  295944 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1102 13:13:38.572187  295944 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1102 13:13:38.572240  295944 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1102 13:13:38.572293  295944 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1102 13:13:38.572345  295944 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1102 13:13:38.638348  295944 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1102 13:13:38.638469  295944 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1102 13:13:38.638570  295944 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1102 13:13:38.647092  295944 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1102 13:13:38.653730  295944 out.go:252]   - Generating certificates and keys ...
	I1102 13:13:38.653874  295944 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1102 13:13:38.653989  295944 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1102 13:13:39.087844  295944 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1102 13:13:39.281256  295944 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1102 13:13:39.790026  295944 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1102 13:13:40.065100  295944 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1102 13:13:40.702493  295944 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1102 13:13:40.702932  295944 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-230560 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1102 13:13:41.791862  295944 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1102 13:13:41.792254  295944 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-230560 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1102 13:13:42.983132  295944 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1102 13:13:44.577956  295944 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1102 13:13:44.723439  295944 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1102 13:13:44.723752  295944 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1102 13:13:45.171398  295944 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1102 13:13:46.043822  295944 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1102 13:13:46.975763  295944 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1102 13:13:47.036840  295944 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1102 13:13:47.316624  295944 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1102 13:13:47.317395  295944 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1102 13:13:47.320283  295944 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1102 13:13:47.323623  295944 out.go:252]   - Booting up control plane ...
	I1102 13:13:47.323742  295944 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1102 13:13:47.323831  295944 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1102 13:13:47.323906  295944 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1102 13:13:47.340940  295944 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1102 13:13:47.341083  295944 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1102 13:13:47.349642  295944 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1102 13:13:47.349971  295944 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1102 13:13:47.350184  295944 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1102 13:13:47.497720  295944 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1102 13:13:47.497881  295944 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1102 13:13:49.498980  295944 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 2.001792587s
	I1102 13:13:49.504680  295944 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1102 13:13:49.504778  295944 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1102 13:13:49.505021  295944 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1102 13:13:49.505110  295944 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1102 13:13:54.348077  295944 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.842489481s
	I1102 13:13:55.506886  295944 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.001582873s
	I1102 13:13:56.636873  295944 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 7.131873756s
	I1102 13:13:56.661934  295944 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1102 13:13:56.676830  295944 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1102 13:13:56.696276  295944 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1102 13:13:56.696489  295944 kubeadm.go:319] [mark-control-plane] Marking the node addons-230560 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1102 13:13:56.708887  295944 kubeadm.go:319] [bootstrap-token] Using token: m4zpig.5x6anpocnxlq70ej
	I1102 13:13:56.711804  295944 out.go:252]   - Configuring RBAC rules ...
	I1102 13:13:56.711932  295944 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1102 13:13:56.722897  295944 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1102 13:13:56.733094  295944 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1102 13:13:56.737436  295944 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1102 13:13:56.741577  295944 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1102 13:13:56.745522  295944 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1102 13:13:57.044108  295944 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1102 13:13:57.484387  295944 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1102 13:13:58.044653  295944 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1102 13:13:58.045919  295944 kubeadm.go:319] 
	I1102 13:13:58.046013  295944 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1102 13:13:58.046026  295944 kubeadm.go:319] 
	I1102 13:13:58.046108  295944 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1102 13:13:58.046120  295944 kubeadm.go:319] 
	I1102 13:13:58.046147  295944 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1102 13:13:58.046209  295944 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1102 13:13:58.046262  295944 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1102 13:13:58.046266  295944 kubeadm.go:319] 
	I1102 13:13:58.046324  295944 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1102 13:13:58.046328  295944 kubeadm.go:319] 
	I1102 13:13:58.046378  295944 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1102 13:13:58.046384  295944 kubeadm.go:319] 
	I1102 13:13:58.046439  295944 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1102 13:13:58.046517  295944 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1102 13:13:58.046589  295944 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1102 13:13:58.046594  295944 kubeadm.go:319] 
	I1102 13:13:58.046711  295944 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1102 13:13:58.046794  295944 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1102 13:13:58.046798  295944 kubeadm.go:319] 
	I1102 13:13:58.046886  295944 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token m4zpig.5x6anpocnxlq70ej \
	I1102 13:13:58.046994  295944 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:bd4a1f3bddc85f3fc83315ad33165a30aa1cba7ce55898ef9dad8dcc7e8d0eec \
	I1102 13:13:58.047015  295944 kubeadm.go:319] 	--control-plane 
	I1102 13:13:58.047020  295944 kubeadm.go:319] 
	I1102 13:13:58.047108  295944 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1102 13:13:58.047113  295944 kubeadm.go:319] 
	I1102 13:13:58.047198  295944 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token m4zpig.5x6anpocnxlq70ej \
	I1102 13:13:58.047304  295944 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:bd4a1f3bddc85f3fc83315ad33165a30aa1cba7ce55898ef9dad8dcc7e8d0eec 
	I1102 13:13:58.049834  295944 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1102 13:13:58.050072  295944 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1102 13:13:58.050185  295944 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
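
Note: the --discovery-token-ca-cert-hash printed in the join commands above can be recomputed out of band from the cluster CA. This is the standard kubeadm procedure, using the certificateDir logged at 13:13:38 (/var/lib/minikube/certs):

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'
	# should print bd4a1f3bddc85f3fc83315ad33165a30aa1cba7ce55898ef9dad8dcc7e8d0eec
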
	I1102 13:13:58.050204  295944 cni.go:84] Creating CNI manager for ""
	I1102 13:13:58.050212  295944 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1102 13:13:58.053304  295944 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1102 13:13:58.056329  295944 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1102 13:13:58.060887  295944 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1102 13:13:58.060946  295944 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1102 13:13:58.076297  295944 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
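
Note: kindnet is chosen above because of the docker driver + crio runtime combination. A minimal sketch to confirm the CNI rolled out, assuming the applied manifest names its DaemonSet kindnet (adjust if the manifest differs):

	kubectl -n kube-system rollout status daemonset/kindnet --timeout=60s
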
	I1102 13:13:58.381908  295944 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1102 13:13:58.382054  295944 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:13:58.382133  295944 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-230560 minikube.k8s.io/updated_at=2025_11_02T13_13_58_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=170a9221ec214abbddb4c7cdac340516a92b239a minikube.k8s.io/name=addons-230560 minikube.k8s.io/primary=true
	I1102 13:13:58.559349  295944 ops.go:34] apiserver oom_adj: -16
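
Note: the oom_adj of -16 shields the apiserver from the OOM killer. On current kernels the legacy oom_adj file is a compat view of oom_score_adj (scaled by 1000/17, so -16 maps to roughly -941); a sketch to read both back:

	PID=$(pgrep kube-apiserver)
	cat "/proc/$PID/oom_adj" "/proc/$PID/oom_score_adj"
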
	I1102 13:13:58.597875  295944 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:13:59.098694  295944 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:13:59.598940  295944 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:14:00.098268  295944 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:14:00.598819  295944 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:14:01.098012  295944 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:14:01.598596  295944 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:14:02.098875  295944 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:14:02.246667  295944 kubeadm.go:1114] duration metric: took 3.864657971s to wait for elevateKubeSystemPrivileges
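
Note: the repeated "kubectl get sa default" runs above are a readiness poll: the default ServiceAccount only appears once the controller-manager's serviceaccount controller is running, so the loop doubles as a control-plane health check. The same poll as a plain shell loop (sketch, ~500ms cadence as in the log):

	until kubectl -n default get serviceaccount default >/dev/null 2>&1; do
	  sleep 0.5
	done
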
	I1102 13:14:02.246698  295944 kubeadm.go:403] duration metric: took 23.850364818s to StartCluster
	I1102 13:14:02.246716  295944 settings.go:142] acquiring lock: {Name:mk95f66b3b15e63f58f8c9085c1ffe67cc396dc1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:14:02.247376  295944 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21808-293314/kubeconfig
	I1102 13:14:02.247778  295944 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-293314/kubeconfig: {Name:mke5a65554da8fc0fd6a2ea60bed899d5b38ce09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:14:02.248604  295944 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1102 13:14:02.248746  295944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1102 13:14:02.249014  295944 config.go:182] Loaded profile config "addons-230560": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:14:02.249053  295944 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
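
Note: the toEnable map above is what the per-profile addon toggles resolve to; the same switches are driven from the minikube CLI, e.g.:

	minikube -p addons-230560 addons enable metrics-server
	minikube -p addons-230560 addons list    # prints the resulting enabled/disabled table
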
	I1102 13:14:02.249134  295944 addons.go:70] Setting yakd=true in profile "addons-230560"
	I1102 13:14:02.249152  295944 addons.go:239] Setting addon yakd=true in "addons-230560"
	I1102 13:14:02.249174  295944 host.go:66] Checking if "addons-230560" exists ...
	I1102 13:14:02.249616  295944 cli_runner.go:164] Run: docker container inspect addons-230560 --format={{.State.Status}}
	I1102 13:14:02.249788  295944 addons.go:70] Setting inspektor-gadget=true in profile "addons-230560"
	I1102 13:14:02.249840  295944 addons.go:239] Setting addon inspektor-gadget=true in "addons-230560"
	I1102 13:14:02.249877  295944 host.go:66] Checking if "addons-230560" exists ...
	I1102 13:14:02.250137  295944 addons.go:70] Setting metrics-server=true in profile "addons-230560"
	I1102 13:14:02.250157  295944 addons.go:239] Setting addon metrics-server=true in "addons-230560"
	I1102 13:14:02.250174  295944 host.go:66] Checking if "addons-230560" exists ...
	I1102 13:14:02.250543  295944 cli_runner.go:164] Run: docker container inspect addons-230560 --format={{.State.Status}}
	I1102 13:14:02.250985  295944 cli_runner.go:164] Run: docker container inspect addons-230560 --format={{.State.Status}}
	I1102 13:14:02.254187  295944 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-230560"
	I1102 13:14:02.254277  295944 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-230560"
	I1102 13:14:02.254453  295944 host.go:66] Checking if "addons-230560" exists ...
	I1102 13:14:02.255623  295944 cli_runner.go:164] Run: docker container inspect addons-230560 --format={{.State.Status}}
	I1102 13:14:02.258255  295944 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-230560"
	I1102 13:14:02.258281  295944 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-230560"
	I1102 13:14:02.258610  295944 cli_runner.go:164] Run: docker container inspect addons-230560 --format={{.State.Status}}
	I1102 13:14:02.267433  295944 addons.go:70] Setting registry=true in profile "addons-230560"
	I1102 13:14:02.267470  295944 addons.go:239] Setting addon registry=true in "addons-230560"
	I1102 13:14:02.267511  295944 host.go:66] Checking if "addons-230560" exists ...
	I1102 13:14:02.278700  295944 cli_runner.go:164] Run: docker container inspect addons-230560 --format={{.State.Status}}
	I1102 13:14:02.254303  295944 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-230560"
	I1102 13:14:02.289216  295944 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-230560"
	I1102 13:14:02.289268  295944 host.go:66] Checking if "addons-230560" exists ...
	I1102 13:14:02.289743  295944 cli_runner.go:164] Run: docker container inspect addons-230560 --format={{.State.Status}}
	I1102 13:14:02.294690  295944 addons.go:70] Setting registry-creds=true in profile "addons-230560"
	I1102 13:14:02.294736  295944 addons.go:239] Setting addon registry-creds=true in "addons-230560"
	I1102 13:14:02.294779  295944 host.go:66] Checking if "addons-230560" exists ...
	I1102 13:14:02.295248  295944 cli_runner.go:164] Run: docker container inspect addons-230560 --format={{.State.Status}}
	I1102 13:14:02.254308  295944 addons.go:70] Setting cloud-spanner=true in profile "addons-230560"
	I1102 13:14:02.296712  295944 addons.go:239] Setting addon cloud-spanner=true in "addons-230560"
	I1102 13:14:02.254314  295944 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-230560"
	I1102 13:14:02.296785  295944 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-230560"
	I1102 13:14:02.296819  295944 host.go:66] Checking if "addons-230560" exists ...
	I1102 13:14:02.297337  295944 addons.go:70] Setting storage-provisioner=true in profile "addons-230560"
	I1102 13:14:02.297361  295944 addons.go:239] Setting addon storage-provisioner=true in "addons-230560"
	I1102 13:14:02.297383  295944 host.go:66] Checking if "addons-230560" exists ...
	I1102 13:14:02.297912  295944 cli_runner.go:164] Run: docker container inspect addons-230560 --format={{.State.Status}}
	I1102 13:14:02.254335  295944 addons.go:70] Setting default-storageclass=true in profile "addons-230560"
	I1102 13:14:02.302902  295944 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-230560"
	I1102 13:14:02.303361  295944 cli_runner.go:164] Run: docker container inspect addons-230560 --format={{.State.Status}}
	I1102 13:14:02.254339  295944 addons.go:70] Setting gcp-auth=true in profile "addons-230560"
	I1102 13:14:02.322839  295944 mustload.go:66] Loading cluster: addons-230560
	I1102 13:14:02.323111  295944 config.go:182] Loaded profile config "addons-230560": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:14:02.323424  295944 cli_runner.go:164] Run: docker container inspect addons-230560 --format={{.State.Status}}
	I1102 13:14:02.254342  295944 addons.go:70] Setting ingress=true in profile "addons-230560"
	I1102 13:14:02.254346  295944 addons.go:70] Setting ingress-dns=true in profile "addons-230560"
	I1102 13:14:02.329063  295944 addons.go:239] Setting addon ingress-dns=true in "addons-230560"
	I1102 13:14:02.254386  295944 out.go:179] * Verifying Kubernetes components...
	I1102 13:14:02.329210  295944 addons.go:70] Setting volcano=true in profile "addons-230560"
	I1102 13:14:02.329228  295944 addons.go:239] Setting addon volcano=true in "addons-230560"
	I1102 13:14:02.329246  295944 host.go:66] Checking if "addons-230560" exists ...
	I1102 13:14:02.329707  295944 cli_runner.go:164] Run: docker container inspect addons-230560 --format={{.State.Status}}
	I1102 13:14:02.360533  295944 addons.go:70] Setting volumesnapshots=true in profile "addons-230560"
	I1102 13:14:02.360568  295944 addons.go:239] Setting addon volumesnapshots=true in "addons-230560"
	I1102 13:14:02.360602  295944 host.go:66] Checking if "addons-230560" exists ...
	I1102 13:14:02.361106  295944 cli_runner.go:164] Run: docker container inspect addons-230560 --format={{.State.Status}}
	I1102 13:14:02.367692  295944 host.go:66] Checking if "addons-230560" exists ...
	I1102 13:14:02.368304  295944 cli_runner.go:164] Run: docker container inspect addons-230560 --format={{.State.Status}}
	I1102 13:14:02.397838  295944 addons.go:239] Setting addon ingress=true in "addons-230560"
	I1102 13:14:02.397956  295944 host.go:66] Checking if "addons-230560" exists ...
	I1102 13:14:02.398450  295944 cli_runner.go:164] Run: docker container inspect addons-230560 --format={{.State.Status}}
	I1102 13:14:02.446216  295944 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1102 13:14:02.456640  295944 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-230560"
	I1102 13:14:02.456737  295944 host.go:66] Checking if "addons-230560" exists ...
	I1102 13:14:02.457235  295944 cli_runner.go:164] Run: docker container inspect addons-230560 --format={{.State.Status}}
	I1102 13:14:02.470806  295944 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1102 13:14:02.470875  295944 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1102 13:14:02.470964  295944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-230560
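
Note: the inspect template above resolves the host port Docker mapped to the container's SSH port (22/tcp), which is how the ssh clients later in this log end up at 127.0.0.1:33138. The template on its own:

	docker container inspect \
	  -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-230560
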
	I1102 13:14:02.410238  295944 cli_runner.go:164] Run: docker container inspect addons-230560 --format={{.State.Status}}
	I1102 13:14:02.487252  295944 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1102 13:14:02.497911  295944 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1102 13:14:02.418221  295944 host.go:66] Checking if "addons-230560" exists ...
	I1102 13:14:02.499294  295944 cli_runner.go:164] Run: docker container inspect addons-230560 --format={{.State.Status}}
	I1102 13:14:02.499608  295944 out.go:179]   - Using image docker.io/registry:3.0.0
	I1102 13:14:02.418302  295944 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 13:14:02.498295  295944 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1102 13:14:02.498299  295944 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1102 13:14:02.498321  295944 addons.go:436] installing /etc/kubernetes/addons/ig-crd.yaml
	I1102 13:14:02.498554  295944 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1102 13:14:02.522926  295944 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1102 13:14:02.522995  295944 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1102 13:14:02.523083  295944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-230560
	I1102 13:14:02.542122  295944 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1102 13:14:02.542208  295944 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1102 13:14:02.542312  295944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-230560
	I1102 13:14:02.553882  295944 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1102 13:14:02.554072  295944 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1102 13:14:02.554086  295944 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1102 13:14:02.554167  295944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-230560
	I1102 13:14:02.569487  295944 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1102 13:14:02.569584  295944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-230560
	W1102 13:14:02.570889  295944 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1102 13:14:02.571113  295944 host.go:66] Checking if "addons-230560" exists ...
	I1102 13:14:02.571129  295944 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1102 13:14:02.571171  295944 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1102 13:14:02.575421  295944 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1102 13:14:02.575516  295944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-230560
	I1102 13:14:02.586097  295944 addons.go:239] Setting addon default-storageclass=true in "addons-230560"
	I1102 13:14:02.586138  295944 host.go:66] Checking if "addons-230560" exists ...
	I1102 13:14:02.586579  295944 cli_runner.go:164] Run: docker container inspect addons-230560 --format={{.State.Status}}
	I1102 13:14:02.608189  295944 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1102 13:14:02.608211  295944 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1102 13:14:02.608340  295944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-230560
	I1102 13:14:02.627014  295944 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1102 13:14:02.627035  295944 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1102 13:14:02.627097  295944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-230560
	I1102 13:14:02.646969  295944 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1102 13:14:02.649925  295944 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1102 13:14:02.649952  295944 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1102 13:14:02.650022  295944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-230560
	I1102 13:14:02.660084  295944 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1102 13:14:02.660517  295944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1102 13:14:02.685131  295944 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/addons-230560/id_rsa Username:docker}
	I1102 13:14:02.691319  295944 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1102 13:14:02.694849  295944 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1102 13:14:02.694872  295944 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1102 13:14:02.694942  295944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-230560
	I1102 13:14:02.722885  295944 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1102 13:14:02.728399  295944 out.go:179]   - Using image docker.io/busybox:stable
	I1102 13:14:02.734701  295944 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1102 13:14:02.734833  295944 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1102 13:14:02.734843  295944 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1102 13:14:02.734925  295944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-230560
	I1102 13:14:02.743200  295944 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/addons-230560/id_rsa Username:docker}
	I1102 13:14:02.755978  295944 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/addons-230560/id_rsa Username:docker}
	I1102 13:14:02.766395  295944 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1102 13:14:02.772806  295944 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1102 13:14:02.772881  295944 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1102 13:14:02.772994  295944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-230560
	I1102 13:14:02.795120  295944 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1102 13:14:02.795236  295944 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1102 13:14:02.798890  295944 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1102 13:14:02.799041  295944 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1102 13:14:02.799054  295944 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1102 13:14:02.799115  295944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-230560
	I1102 13:14:02.804629  295944 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1102 13:14:02.809397  295944 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1102 13:14:02.812272  295944 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1102 13:14:02.815154  295944 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1102 13:14:02.820189  295944 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1102 13:14:02.823052  295944 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1102 13:14:02.825729  295944 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/addons-230560/id_rsa Username:docker}
	I1102 13:14:02.830975  295944 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1102 13:14:02.830996  295944 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1102 13:14:02.831073  295944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-230560
	I1102 13:14:02.845830  295944 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/addons-230560/id_rsa Username:docker}
	I1102 13:14:02.849568  295944 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/addons-230560/id_rsa Username:docker}
	I1102 13:14:02.871291  295944 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/addons-230560/id_rsa Username:docker}
	I1102 13:14:02.882790  295944 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/addons-230560/id_rsa Username:docker}
	I1102 13:14:02.888999  295944 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1102 13:14:02.889019  295944 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1102 13:14:02.889080  295944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-230560
	I1102 13:14:02.890812  295944 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/addons-230560/id_rsa Username:docker}
	I1102 13:14:02.922754  295944 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/addons-230560/id_rsa Username:docker}
	I1102 13:14:02.949927  295944 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/addons-230560/id_rsa Username:docker}
	I1102 13:14:02.957796  295944 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/addons-230560/id_rsa Username:docker}
	I1102 13:14:02.960006  295944 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/addons-230560/id_rsa Username:docker}
	I1102 13:14:02.964055  295944 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1102 13:14:02.971377  295944 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/addons-230560/id_rsa Username:docker}
	W1102 13:14:02.976244  295944 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1102 13:14:02.976349  295944 retry.go:31] will retry after 327.15464ms: ssh: handshake failed: EOF
	I1102 13:14:02.992796  295944 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/addons-230560/id_rsa Username:docker}
	W1102 13:14:02.993889  295944 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1102 13:14:02.993910  295944 retry.go:31] will retry after 193.34196ms: ssh: handshake failed: EOF
	I1102 13:14:03.141203  295944 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1102 13:14:03.141234  295944 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1102 13:14:03.187025  295944 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1102 13:14:03.187052  295944 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1102 13:14:03.303784  295944 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1102 13:14:03.303851  295944 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1102 13:14:03.338480  295944 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1102 13:14:03.338548  295944 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1102 13:14:03.380749  295944 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1102 13:14:03.380822  295944 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1102 13:14:03.417652  295944 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1102 13:14:03.437409  295944 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1102 13:14:03.439615  295944 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1102 13:14:03.464003  295944 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1102 13:14:03.464069  295944 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1102 13:14:03.517543  295944 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1102 13:14:03.517579  295944 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1102 13:14:03.517891  295944 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1102 13:14:03.524793  295944 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1102 13:14:03.566533  295944 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1102 13:14:03.578708  295944 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1102 13:14:03.578729  295944 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1102 13:14:03.584320  295944 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1102 13:14:03.615307  295944 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1102 13:14:03.615341  295944 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1102 13:14:03.650562  295944 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1102 13:14:03.653242  295944 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1102 13:14:03.657036  295944 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1102 13:14:03.713353  295944 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1102 13:14:03.713419  295944 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1102 13:14:03.725093  295944 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1102 13:14:03.736229  295944 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1102 13:14:03.736256  295944 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1102 13:14:03.784291  295944 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1102 13:14:03.784323  295944 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1102 13:14:03.819338  295944 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1102 13:14:03.898933  295944 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1102 13:14:03.898970  295944 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1102 13:14:03.926828  295944 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1102 13:14:03.926862  295944 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1102 13:14:03.974188  295944 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1102 13:14:03.974233  295944 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1102 13:14:04.142952  295944 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1102 13:14:04.142996  295944 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1102 13:14:04.221317  295944 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1102 13:14:04.221344  295944 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1102 13:14:04.232455  295944 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1102 13:14:04.541334  295944 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1102 13:14:04.541377  295944 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1102 13:14:04.545223  295944 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1102 13:14:04.545249  295944 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1102 13:14:04.816681  295944 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.156128835s)
	I1102 13:14:04.816723  295944 start.go:1013] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
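The two-second replace above rewrites the coredns ConfigMap in place: sed splices a hosts block ahead of the forward plugin so that host.minikube.internal resolves to the gateway address 192.168.49.1 from inside the cluster. A quick manual check of the result (a sketch; assumes kubectl is pointed at this profile):

	kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
	# expect a hosts { 192.168.49.1 host.minikube.internal ... } stanza
	# above the forward . /etc/resolv.conf line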
	I1102 13:14:04.817593  295944 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.85351379s)
	I1102 13:14:04.818794  295944 node_ready.go:35] waiting up to 6m0s for node "addons-230560" to be "Ready" ...
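node_ready.go now polls the node object until its Ready condition turns True; the "Ready":"False" warnings scattered below are that poll backing off. By hand the same check would look roughly like (hypothetical invocation, not run by the test itself):

	kubectl get node addons-230560 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	# prints False until kubelet and the network plugin report healthy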
	I1102 13:14:04.871352  295944 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1102 13:14:04.875852  295944 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1102 13:14:04.875874  295944 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1102 13:14:05.133225  295944 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1102 13:14:05.133250  295944 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1102 13:14:05.332692  295944 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-230560" context rescaled to 1 replicas
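The rescale trims coredns to one replica, enough for a single-node profile. kapi.go drives this through the client-go scale subresource rather than the CLI; the hand-run equivalent would be something like:

	kubectl -n kube-system scale deployment coredns --replicas=1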
	I1102 13:14:05.385938  295944 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1102 13:14:05.385960  295944 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1102 13:14:05.601591  295944 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1102 13:14:05.601615  295944 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1102 13:14:05.740385  295944 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1102 13:14:05.740411  295944 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1102 13:14:05.992078  295944 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1102 13:14:05.992103  295944 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1102 13:14:06.219155  295944 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1102 13:14:06.874065  295944 node_ready.go:57] node "addons-230560" has "Ready":"False" status (will retry)
	I1102 13:14:07.574906  295944 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.137422577s)
	I1102 13:14:07.575122  295944 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (4.135446385s)
	I1102 13:14:07.575153  295944 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.05724552s)
	I1102 13:14:07.575184  295944 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (4.050362766s)
	I1102 13:14:07.575211  295944 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.008648612s)
	I1102 13:14:07.575219  295944 addons.go:480] Verifying addon registry=true in "addons-230560"
	I1102 13:14:07.575418  295944 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.157699571s)
	I1102 13:14:07.578245  295944 out.go:179] * Verifying registry addon...
	I1102 13:14:07.581989  295944 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1102 13:14:07.618951  295944 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1102 13:14:07.618977  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
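kapi.go:96 polls the label selector until the matched pod reports Ready; "Pending: [<nil>]" means the Ready condition simply has not been populated yet. The same wait, hand-rolled with kubectl (selector and namespace taken from the lines above, timeout chosen arbitrarily):

	kubectl -n kube-system wait pod \
	  -l kubernetes.io/minikube-addons=registry \
	  --for=condition=Ready --timeout=6m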
	I1102 13:14:07.896356  295944 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.311995409s)
	W1102 13:14:07.896390  295944 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1102 13:14:07.896408  295944 retry.go:31] will retry after 350.451249ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
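Two separate things are tangled in this failure. The AppArmor line is only a deprecation warning (since v1.30 the annotation is superseded by the securityContext appArmorProfile field) and does not fail the apply. The fatal part is the validation error: at least one document in ig-crd.yaml reaches kubectl without apiVersion and kind set, so client-side validation rejects it before anything is sent to the API server, which is why every retry below, --force included, dies the same way. The check can be reproduced without persisting anything (a sketch, using client-side dry run):

	kubectl apply --dry-run=client -f /etc/kubernetes/addons/ig-crd.yaml
	# fails with the same "apiVersion not set, kind not set" if a
	# manifest header is missing or mis-indented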
	I1102 13:14:07.896442  295944 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.245862484s)
	I1102 13:14:07.896492  295944 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.243230776s)
	I1102 13:14:07.896533  295944 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.239476128s)
	I1102 13:14:07.899736  295944 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-230560 service yakd-dashboard -n yakd-dashboard
	
	I1102 13:14:08.103549  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:08.247180  295944 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1102 13:14:08.618977  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:08.631510  295944 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.906339679s)
	I1102 13:14:08.631687  295944 addons.go:480] Verifying addon ingress=true in "addons-230560"
	I1102 13:14:08.631801  295944 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.39930698s)
	I1102 13:14:08.632134  295944 addons.go:480] Verifying addon metrics-server=true in "addons-230560"
	I1102 13:14:08.632039  295944 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.760646867s)
	W1102 13:14:08.632270  295944 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1102 13:14:08.632295  295944 retry.go:31] will retry after 303.155567ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
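Unlike the ig-crd failure, this one is a plain ordering race: the VolumeSnapshotClass object sits in the same batch as the CRD that defines it, and the API server has not finished establishing the new type, hence "ensure CRDs are installed first". The --force rerun a few lines below goes through once the CRDs settle (the Completed line at 13:14:11.834689). One way to sidestep the race entirely is to gate the second apply on CRD establishment (a sketch, reusing the file and CRD names from the stdout above):

	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for=condition=Established \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml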
	I1102 13:14:08.631626  295944 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.812261703s)
	I1102 13:14:08.636919  295944 out.go:179] * Verifying ingress addon...
	I1102 13:14:08.640804  295944 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1102 13:14:08.697766  295944 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1102 13:14:08.697841  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:08.936004  295944 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1102 13:14:09.106874  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:09.135161  295944 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.915947042s)
	I1102 13:14:09.135245  295944 addons.go:480] Verifying addon csi-hostpath-driver=true in "addons-230560"
	I1102 13:14:09.138268  295944 out.go:179] * Verifying csi-hostpath-driver addon...
	I1102 13:14:09.142247  295944 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1102 13:14:09.199030  295944 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1102 13:14:09.199100  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:09.199494  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1102 13:14:09.322596  295944 node_ready.go:57] node "addons-230560" has "Ready":"False" status (will retry)
	I1102 13:14:09.508971  295944 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.261754261s)
	W1102 13:14:09.509007  295944 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1102 13:14:09.509033  295944 retry.go:31] will retry after 354.750401ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1102 13:14:09.585431  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:09.644351  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:09.645327  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:09.864489  295944 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1102 13:14:10.086652  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:10.147738  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:10.148211  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:10.280211  295944 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1102 13:14:10.280360  295944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-230560
	I1102 13:14:10.303445  295944 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/addons-230560/id_rsa Username:docker}
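The inspect call answers one question: which host port Docker mapped onto the container's SSH port 22. The Go template indexes NetworkSettings.Ports["22/tcp"][0].HostPort, and its answer is the 127.0.0.1:33138 endpoint in the ssh client line above. Run by hand it looks like:

	docker container inspect addons-230560 \
	  -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'
	# 33138 on this run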
	I1102 13:14:10.446007  295944 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1102 13:14:10.464388  295944 addons.go:239] Setting addon gcp-auth=true in "addons-230560"
	I1102 13:14:10.464481  295944 host.go:66] Checking if "addons-230560" exists ...
	I1102 13:14:10.464974  295944 cli_runner.go:164] Run: docker container inspect addons-230560 --format={{.State.Status}}
	I1102 13:14:10.494255  295944 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1102 13:14:10.494313  295944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-230560
	I1102 13:14:10.520602  295944 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/addons-230560/id_rsa Username:docker}
	I1102 13:14:10.585368  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:10.644692  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:10.646129  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:11.085619  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:11.144870  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:11.146008  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:11.586205  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:11.644965  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:11.646140  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1102 13:14:11.822645  295944 node_ready.go:57] node "addons-230560" has "Ready":"False" status (will retry)
	I1102 13:14:11.834689  295944 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.898590789s)
	I1102 13:14:11.834802  295944 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.970277124s)
	W1102 13:14:11.834830  295944 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1102 13:14:11.834828  295944 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.340548443s)
	I1102 13:14:11.834847  295944 retry.go:31] will retry after 416.455245ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1102 13:14:11.838008  295944 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1102 13:14:11.840933  295944 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1102 13:14:11.843870  295944 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1102 13:14:11.843901  295944 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1102 13:14:11.856988  295944 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1102 13:14:11.857010  295944 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1102 13:14:11.870418  295944 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1102 13:14:11.870442  295944 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1102 13:14:11.884119  295944 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1102 13:14:12.085523  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:12.146032  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:12.146845  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:12.251919  295944 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1102 13:14:12.433179  295944 addons.go:480] Verifying addon gcp-auth=true in "addons-230560"
	I1102 13:14:12.436419  295944 out.go:179] * Verifying gcp-auth addon...
	I1102 13:14:12.440094  295944 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1102 13:14:12.459137  295944 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1102 13:14:12.459163  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:12.585301  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:12.645925  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:12.647586  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:12.943836  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:13.085605  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:13.146937  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:13.147826  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1102 13:14:13.174911  295944 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1102 13:14:13.174948  295944 retry.go:31] will retry after 696.1039ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1102 13:14:13.443962  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:13.586258  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:13.644841  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:13.645399  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:13.871768  295944 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1102 13:14:13.945535  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:14.085257  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:14.146793  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:14.147417  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1102 13:14:14.322563  295944 node_ready.go:57] node "addons-230560" has "Ready":"False" status (will retry)
	I1102 13:14:14.444173  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:14.584921  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:14.644980  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:14.645684  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1102 13:14:14.662138  295944 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1102 13:14:14.662173  295944 retry.go:31] will retry after 1.32068964s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1102 13:14:14.943255  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:15.085577  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:15.146313  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:15.150531  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:15.444249  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:15.585566  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:15.644737  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:15.644473  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:15.944002  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:15.983122  295944 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1102 13:14:16.085502  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:16.145449  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:16.147308  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:16.443784  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:16.585516  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:16.646735  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:16.647028  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1102 13:14:16.806020  295944 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1102 13:14:16.806053  295944 retry.go:31] will retry after 1.56368671s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	W1102 13:14:16.821806  295944 node_ready.go:57] node "addons-230560" has "Ready":"False" status (will retry)
	I1102 13:14:16.943636  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:17.085635  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:17.144931  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:17.147457  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:17.444316  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:17.585697  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:17.644976  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:17.647350  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:17.943045  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:18.085450  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:18.144951  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:18.146116  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:18.370161  295944 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1102 13:14:18.448279  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:18.585199  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:18.646229  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:18.646673  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1102 13:14:18.822138  295944 node_ready.go:57] node "addons-230560" has "Ready":"False" status (will retry)
	I1102 13:14:18.943560  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:19.085842  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:19.145834  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:19.146164  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1102 13:14:19.183059  295944 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1102 13:14:19.183139  295944 retry.go:31] will retry after 4.205499652s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1102 13:14:19.444203  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:19.585066  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:19.643849  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:19.645512  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:19.943228  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:20.085993  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:20.144207  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:20.146341  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:20.442943  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:20.584778  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:20.645783  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:20.645832  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:20.943698  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:21.085776  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:21.144658  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:21.145508  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1102 13:14:21.322510  295944 node_ready.go:57] node "addons-230560" has "Ready":"False" status (will retry)
	I1102 13:14:21.443593  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:21.585342  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:21.644440  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:21.645294  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:21.943280  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:22.085155  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:22.143922  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:22.146149  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:22.443982  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:22.584950  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:22.644614  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:22.645693  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:22.943376  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:23.085242  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:23.144721  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:23.146396  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:23.389694  295944 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1102 13:14:23.443714  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:23.585711  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:23.646249  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:23.648385  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1102 13:14:23.822747  295944 node_ready.go:57] node "addons-230560" has "Ready":"False" status (will retry)
	I1102 13:14:23.944601  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:24.085393  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:24.144822  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:24.146172  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1102 13:14:24.207129  295944 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1102 13:14:24.207212  295944 retry.go:31] will retry after 4.940916324s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1102 13:14:24.443041  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:24.584678  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:24.644831  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:24.645528  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:24.943739  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:25.085082  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:25.147121  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:25.147502  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:25.443941  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:25.586106  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:25.644794  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:25.645480  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:25.943648  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:26.085550  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:26.144712  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:26.145316  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1102 13:14:26.322312  295944 node_ready.go:57] node "addons-230560" has "Ready":"False" status (will retry)
	I1102 13:14:26.443107  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:26.585014  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:26.644803  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:26.645844  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:26.943888  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:27.085976  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:27.144687  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:27.144915  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:27.444096  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:27.585228  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:27.644118  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:27.645249  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:27.943733  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:28.085628  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:28.147579  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:28.147699  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1102 13:14:28.330975  295944 node_ready.go:57] node "addons-230560" has "Ready":"False" status (will retry)
	I1102 13:14:28.442840  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:28.585764  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:28.644689  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:28.646406  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:28.943348  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:29.085143  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:29.143851  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:29.146207  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:29.148481  295944 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1102 13:14:29.443579  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:29.586123  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:29.645656  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:29.645958  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:29.944060  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1102 13:14:29.992871  295944 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1102 13:14:29.992966  295944 retry.go:31] will retry after 9.501925941s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1102 13:14:30.088974  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:30.144591  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:30.147188  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:30.443011  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:30.585858  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:30.644256  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:30.646231  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1102 13:14:30.822450  295944 node_ready.go:57] node "addons-230560" has "Ready":"False" status (will retry)
	I1102 13:14:30.943507  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:31.085849  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:31.144859  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:31.145882  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:31.451577  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:31.591748  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:31.643712  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:31.646200  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:31.943286  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:32.085358  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:32.145611  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:32.145739  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:32.444046  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:32.585194  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:32.644466  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:32.646055  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1102 13:14:32.822695  295944 node_ready.go:57] node "addons-230560" has "Ready":"False" status (will retry)
	I1102 13:14:32.943880  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:33.085818  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:33.145579  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:33.145702  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:33.443229  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:33.585109  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:33.644314  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:33.646744  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:33.943450  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:34.085573  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:34.144791  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:34.145857  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:34.444064  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:34.585117  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:34.644659  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:34.645793  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:34.943962  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:35.085493  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:35.144911  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:35.146362  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1102 13:14:35.322954  295944 node_ready.go:57] node "addons-230560" has "Ready":"False" status (will retry)
	I1102 13:14:35.443759  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:35.585972  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:35.643794  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:35.645662  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:35.946880  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:36.085794  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:36.143817  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:36.146013  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:36.442972  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:36.584672  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:36.644803  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:36.644897  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:36.943293  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:37.085313  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:37.145443  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:37.145636  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:37.443711  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:37.589993  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:37.644335  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:37.645980  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1102 13:14:37.821705  295944 node_ready.go:57] node "addons-230560" has "Ready":"False" status (will retry)
	I1102 13:14:37.943507  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:38.085370  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:38.145680  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:38.146086  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:38.443640  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:38.585476  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:38.645870  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:38.646014  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:38.943438  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:39.085602  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:39.145871  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:39.146181  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:39.443238  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:39.495508  295944 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1102 13:14:39.585561  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:39.646602  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:39.647089  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1102 13:14:39.822560  295944 node_ready.go:57] node "addons-230560" has "Ready":"False" status (will retry)
	I1102 13:14:39.943427  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:40.085780  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:40.147184  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:40.148019  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1102 13:14:40.311378  295944 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1102 13:14:40.311408  295944 retry.go:31] will retry after 8.384258682s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
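By this point the same apply has failed three times, with retry.go backing off roughly 4.9s, 9.5s, and 8.4s between attempts (jittered backoff). The loop the log records reduces to something like the following shell sketch; minikube's actual implementation lives in Go, and the delays here are illustrative:

	for delay in 5 10 8; do
	  sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	    /var/lib/minikube/binaries/v1.34.1/kubectl apply --force \
	    -f /etc/kubernetes/addons/ig-crd.yaml \
	    -f /etc/kubernetes/addons/ig-deployment.yaml && break
	  sleep "$delay"
	done

Since the error is deterministic (a malformed file on disk), no amount of retrying can succeed; the retries only delay the eventual addon failure.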
	I1102 13:14:40.443295  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:40.585233  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:40.644873  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:40.645533  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:40.943650  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:41.087361  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:41.144177  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:41.145696  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:41.443713  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:41.585751  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:41.645647  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:41.645873  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:41.943711  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:42.086084  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:42.145635  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:42.149458  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1102 13:14:42.322531  295944 node_ready.go:57] node "addons-230560" has "Ready":"False" status (will retry)
	I1102 13:14:42.443584  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:42.585844  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:42.644635  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:42.645692  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:42.943537  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:43.085473  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:43.145626  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:43.145834  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:43.443550  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:43.585540  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:43.644678  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:43.646336  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:43.943156  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:44.085310  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:44.145383  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:44.145514  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1102 13:14:44.322660  295944 node_ready.go:57] node "addons-230560" has "Ready":"False" status (will retry)
	I1102 13:14:44.523473  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:44.595783  295944 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1102 13:14:44.595810  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:44.654532  295944 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1102 13:14:44.654558  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:44.655735  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:44.884875  295944 node_ready.go:49] node "addons-230560" is "Ready"
	I1102 13:14:44.884908  295944 node_ready.go:38] duration metric: took 40.066089209s for node "addons-230560" to be "Ready" ...
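The node reached Ready about 40s after the waiter started, which unblocks the scheduling of all the Pending pods polled above. The same readiness check can be made by hand (a hypothetical equivalent; the test polls through the Go client rather than shelling out):

	kubectl wait --for=condition=Ready node/addons-230560 --timeout=120s
	kubectl get node addons-230560 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'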
	I1102 13:14:44.884934  295944 api_server.go:52] waiting for apiserver process to appear ...
	I1102 13:14:44.884998  295944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1102 13:14:44.910534  295944 api_server.go:72] duration metric: took 42.661884889s to wait for apiserver process to appear ...
	I1102 13:14:44.910562  295944 api_server.go:88] waiting for apiserver healthz status ...
	I1102 13:14:44.910583  295944 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1102 13:14:44.921263  295944 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1102 13:14:44.922555  295944 api_server.go:141] control plane version: v1.34.1
	I1102 13:14:44.922584  295944 api_server.go:131] duration metric: took 12.011459ms to wait for apiserver health ...
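The healthz probe hits the apiserver's health endpoint directly over HTTPS; by default the system:public-info-viewer binding lets even unauthenticated clients read it. A manual equivalent, assuming the same endpoint the log shows:

	kubectl get --raw /healthz          # prints "ok" on a healthy apiserver
	curl -ks https://192.168.49.2:8443/healthz

A 200 with body "ok" is exactly what the log records before proceeding to the version and pod checks.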
	I1102 13:14:44.922593  295944 system_pods.go:43] waiting for kube-system pods to appear ...
	I1102 13:14:44.933363  295944 system_pods.go:59] 19 kube-system pods found
	I1102 13:14:44.933400  295944 system_pods.go:61] "coredns-66bc5c9577-6rft9" [5b0e5e4b-ac40-44ba-8e2b-3f54328cc03c] Pending
	I1102 13:14:44.933412  295944 system_pods.go:61] "csi-hostpath-attacher-0" [86982496-2936-427c-8bd2-143ec9d85d4b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1102 13:14:44.933418  295944 system_pods.go:61] "csi-hostpath-resizer-0" [5254ca1f-81fa-460f-b1f4-b29debc9a19c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1102 13:14:44.933425  295944 system_pods.go:61] "csi-hostpathplugin-gnxtb" [8ffd4cf3-a402-41bb-882b-c2bbcfa42b1d] Pending
	I1102 13:14:44.933431  295944 system_pods.go:61] "etcd-addons-230560" [a35f5cb3-d29f-45ec-b91d-18d67639c4c8] Running
	I1102 13:14:44.933435  295944 system_pods.go:61] "kindnet-5dpxs" [577f8605-d5ec-4adc-aae2-56c098398734] Running
	I1102 13:14:44.933442  295944 system_pods.go:61] "kube-apiserver-addons-230560" [ec8d7fe3-227e-43c7-b35f-12f3e6c724a5] Running
	I1102 13:14:44.933446  295944 system_pods.go:61] "kube-controller-manager-addons-230560" [910be431-8bbf-4929-bce6-6d85b250063c] Running
	I1102 13:14:44.933459  295944 system_pods.go:61] "kube-ingress-dns-minikube" [afb0b7f8-4856-42f1-871f-197c757927fa] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1102 13:14:44.933470  295944 system_pods.go:61] "kube-proxy-dzts7" [b63fb7f2-7a3e-4ee4-92d8-6ee0a88acebb] Running
	I1102 13:14:44.933475  295944 system_pods.go:61] "kube-scheduler-addons-230560" [c869faf3-5fdf-4dd6-b3ae-a5c03b7b9ca3] Running
	I1102 13:14:44.933482  295944 system_pods.go:61] "metrics-server-85b7d694d7-npk5l" [828b803a-a751-44ee-9dfe-b0ffbba104f3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1102 13:14:44.933493  295944 system_pods.go:61] "nvidia-device-plugin-daemonset-qkqx4" [8db4aae3-2657-4773-b5b6-62fb681edaa0] Pending
	I1102 13:14:44.933500  295944 system_pods.go:61] "registry-6b586f9694-qlm8d" [d973110c-93dd-4878-bcf2-c23a761ada84] Pending
	I1102 13:14:44.933505  295944 system_pods.go:61] "registry-creds-764b6fb674-5ssmw" [ad1063de-4f14-47bb-a909-fea786b4406a] Pending
	I1102 13:14:44.933514  295944 system_pods.go:61] "registry-proxy-gk6xb" [bd4d6b0b-09b4-4d00-8a1f-01347f478af8] Pending
	I1102 13:14:44.933521  295944 system_pods.go:61] "snapshot-controller-7d9fbc56b8-rrmp6" [958df77e-6b38-4b22-b999-53d4f5e9d784] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1102 13:14:44.933536  295944 system_pods.go:61] "snapshot-controller-7d9fbc56b8-v88xw" [95d508cf-77db-4d04-aed4-ee3059235c7f] Pending
	I1102 13:14:44.933542  295944 system_pods.go:61] "storage-provisioner" [d2ef88eb-9da4-47ad-b13b-231eb6b4242b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1102 13:14:44.933548  295944 system_pods.go:74] duration metric: took 10.949827ms to wait for pod list to return data ...
	I1102 13:14:44.933562  295944 default_sa.go:34] waiting for default service account to be created ...
	I1102 13:14:44.945192  295944 default_sa.go:45] found service account: "default"
	I1102 13:14:44.945229  295944 default_sa.go:55] duration metric: took 11.652529ms for default service account to be created ...
	I1102 13:14:44.945240  295944 system_pods.go:116] waiting for k8s-apps to be running ...
	I1102 13:14:44.951922  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:44.953764  295944 system_pods.go:86] 19 kube-system pods found
	I1102 13:14:44.953813  295944 system_pods.go:89] "coredns-66bc5c9577-6rft9" [5b0e5e4b-ac40-44ba-8e2b-3f54328cc03c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1102 13:14:44.953825  295944 system_pods.go:89] "csi-hostpath-attacher-0" [86982496-2936-427c-8bd2-143ec9d85d4b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1102 13:14:44.953833  295944 system_pods.go:89] "csi-hostpath-resizer-0" [5254ca1f-81fa-460f-b1f4-b29debc9a19c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1102 13:14:44.953837  295944 system_pods.go:89] "csi-hostpathplugin-gnxtb" [8ffd4cf3-a402-41bb-882b-c2bbcfa42b1d] Pending
	I1102 13:14:44.953842  295944 system_pods.go:89] "etcd-addons-230560" [a35f5cb3-d29f-45ec-b91d-18d67639c4c8] Running
	I1102 13:14:44.953847  295944 system_pods.go:89] "kindnet-5dpxs" [577f8605-d5ec-4adc-aae2-56c098398734] Running
	I1102 13:14:44.953866  295944 system_pods.go:89] "kube-apiserver-addons-230560" [ec8d7fe3-227e-43c7-b35f-12f3e6c724a5] Running
	I1102 13:14:44.953877  295944 system_pods.go:89] "kube-controller-manager-addons-230560" [910be431-8bbf-4929-bce6-6d85b250063c] Running
	I1102 13:14:44.953885  295944 system_pods.go:89] "kube-ingress-dns-minikube" [afb0b7f8-4856-42f1-871f-197c757927fa] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1102 13:14:44.953897  295944 system_pods.go:89] "kube-proxy-dzts7" [b63fb7f2-7a3e-4ee4-92d8-6ee0a88acebb] Running
	I1102 13:14:44.953902  295944 system_pods.go:89] "kube-scheduler-addons-230560" [c869faf3-5fdf-4dd6-b3ae-a5c03b7b9ca3] Running
	I1102 13:14:44.953909  295944 system_pods.go:89] "metrics-server-85b7d694d7-npk5l" [828b803a-a751-44ee-9dfe-b0ffbba104f3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1102 13:14:44.953917  295944 system_pods.go:89] "nvidia-device-plugin-daemonset-qkqx4" [8db4aae3-2657-4773-b5b6-62fb681edaa0] Pending
	I1102 13:14:44.953921  295944 system_pods.go:89] "registry-6b586f9694-qlm8d" [d973110c-93dd-4878-bcf2-c23a761ada84] Pending
	I1102 13:14:44.953925  295944 system_pods.go:89] "registry-creds-764b6fb674-5ssmw" [ad1063de-4f14-47bb-a909-fea786b4406a] Pending
	I1102 13:14:44.953936  295944 system_pods.go:89] "registry-proxy-gk6xb" [bd4d6b0b-09b4-4d00-8a1f-01347f478af8] Pending
	I1102 13:14:44.953945  295944 system_pods.go:89] "snapshot-controller-7d9fbc56b8-rrmp6" [958df77e-6b38-4b22-b999-53d4f5e9d784] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1102 13:14:44.953954  295944 system_pods.go:89] "snapshot-controller-7d9fbc56b8-v88xw" [95d508cf-77db-4d04-aed4-ee3059235c7f] Pending
	I1102 13:14:44.953966  295944 system_pods.go:89] "storage-provisioner" [d2ef88eb-9da4-47ad-b13b-231eb6b4242b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1102 13:14:44.953981  295944 retry.go:31] will retry after 254.75903ms: missing components: kube-dns
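The k8s-apps waiter gates on kube-dns, which in a kubeadm-provisioned cluster like this one is served by the CoreDNS deployment; the listing above shows coredns-66bc5c9577-6rft9 still Pending, so the waiter retries. Assuming the standard kubeadm naming, the missing component can be inspected with:

	kubectl -n kube-system get pods -l k8s-app=kube-dns
	kubectl -n kube-system rollout status deployment/coredns --timeout=120s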
	I1102 13:14:45.087066  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:45.179074  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:45.179600  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:45.241422  295944 system_pods.go:86] 19 kube-system pods found
	I1102 13:14:45.241490  295944 system_pods.go:89] "coredns-66bc5c9577-6rft9" [5b0e5e4b-ac40-44ba-8e2b-3f54328cc03c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1102 13:14:45.241501  295944 system_pods.go:89] "csi-hostpath-attacher-0" [86982496-2936-427c-8bd2-143ec9d85d4b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1102 13:14:45.241510  295944 system_pods.go:89] "csi-hostpath-resizer-0" [5254ca1f-81fa-460f-b1f4-b29debc9a19c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1102 13:14:45.241514  295944 system_pods.go:89] "csi-hostpathplugin-gnxtb" [8ffd4cf3-a402-41bb-882b-c2bbcfa42b1d] Pending
	I1102 13:14:45.241519  295944 system_pods.go:89] "etcd-addons-230560" [a35f5cb3-d29f-45ec-b91d-18d67639c4c8] Running
	I1102 13:14:45.241524  295944 system_pods.go:89] "kindnet-5dpxs" [577f8605-d5ec-4adc-aae2-56c098398734] Running
	I1102 13:14:45.241530  295944 system_pods.go:89] "kube-apiserver-addons-230560" [ec8d7fe3-227e-43c7-b35f-12f3e6c724a5] Running
	I1102 13:14:45.241534  295944 system_pods.go:89] "kube-controller-manager-addons-230560" [910be431-8bbf-4929-bce6-6d85b250063c] Running
	I1102 13:14:45.241543  295944 system_pods.go:89] "kube-ingress-dns-minikube" [afb0b7f8-4856-42f1-871f-197c757927fa] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1102 13:14:45.241558  295944 system_pods.go:89] "kube-proxy-dzts7" [b63fb7f2-7a3e-4ee4-92d8-6ee0a88acebb] Running
	I1102 13:14:45.241572  295944 system_pods.go:89] "kube-scheduler-addons-230560" [c869faf3-5fdf-4dd6-b3ae-a5c03b7b9ca3] Running
	I1102 13:14:45.241579  295944 system_pods.go:89] "metrics-server-85b7d694d7-npk5l" [828b803a-a751-44ee-9dfe-b0ffbba104f3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1102 13:14:45.241584  295944 system_pods.go:89] "nvidia-device-plugin-daemonset-qkqx4" [8db4aae3-2657-4773-b5b6-62fb681edaa0] Pending
	I1102 13:14:45.241598  295944 system_pods.go:89] "registry-6b586f9694-qlm8d" [d973110c-93dd-4878-bcf2-c23a761ada84] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1102 13:14:45.241605  295944 system_pods.go:89] "registry-creds-764b6fb674-5ssmw" [ad1063de-4f14-47bb-a909-fea786b4406a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1102 13:14:45.241619  295944 system_pods.go:89] "registry-proxy-gk6xb" [bd4d6b0b-09b4-4d00-8a1f-01347f478af8] Pending
	I1102 13:14:45.241633  295944 system_pods.go:89] "snapshot-controller-7d9fbc56b8-rrmp6" [958df77e-6b38-4b22-b999-53d4f5e9d784] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1102 13:14:45.241641  295944 system_pods.go:89] "snapshot-controller-7d9fbc56b8-v88xw" [95d508cf-77db-4d04-aed4-ee3059235c7f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1102 13:14:45.241649  295944 system_pods.go:89] "storage-provisioner" [d2ef88eb-9da4-47ad-b13b-231eb6b4242b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1102 13:14:45.241673  295944 retry.go:31] will retry after 314.788714ms: missing components: kube-dns
	I1102 13:14:45.444827  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:45.563675  295944 system_pods.go:86] 19 kube-system pods found
	I1102 13:14:45.563721  295944 system_pods.go:89] "coredns-66bc5c9577-6rft9" [5b0e5e4b-ac40-44ba-8e2b-3f54328cc03c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1102 13:14:45.563731  295944 system_pods.go:89] "csi-hostpath-attacher-0" [86982496-2936-427c-8bd2-143ec9d85d4b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1102 13:14:45.563739  295944 system_pods.go:89] "csi-hostpath-resizer-0" [5254ca1f-81fa-460f-b1f4-b29debc9a19c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1102 13:14:45.563746  295944 system_pods.go:89] "csi-hostpathplugin-gnxtb" [8ffd4cf3-a402-41bb-882b-c2bbcfa42b1d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1102 13:14:45.563756  295944 system_pods.go:89] "etcd-addons-230560" [a35f5cb3-d29f-45ec-b91d-18d67639c4c8] Running
	I1102 13:14:45.563761  295944 system_pods.go:89] "kindnet-5dpxs" [577f8605-d5ec-4adc-aae2-56c098398734] Running
	I1102 13:14:45.563772  295944 system_pods.go:89] "kube-apiserver-addons-230560" [ec8d7fe3-227e-43c7-b35f-12f3e6c724a5] Running
	I1102 13:14:45.563792  295944 system_pods.go:89] "kube-controller-manager-addons-230560" [910be431-8bbf-4929-bce6-6d85b250063c] Running
	I1102 13:14:45.563799  295944 system_pods.go:89] "kube-ingress-dns-minikube" [afb0b7f8-4856-42f1-871f-197c757927fa] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1102 13:14:45.563805  295944 system_pods.go:89] "kube-proxy-dzts7" [b63fb7f2-7a3e-4ee4-92d8-6ee0a88acebb] Running
	I1102 13:14:45.563814  295944 system_pods.go:89] "kube-scheduler-addons-230560" [c869faf3-5fdf-4dd6-b3ae-a5c03b7b9ca3] Running
	I1102 13:14:45.563821  295944 system_pods.go:89] "metrics-server-85b7d694d7-npk5l" [828b803a-a751-44ee-9dfe-b0ffbba104f3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1102 13:14:45.563837  295944 system_pods.go:89] "nvidia-device-plugin-daemonset-qkqx4" [8db4aae3-2657-4773-b5b6-62fb681edaa0] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1102 13:14:45.563845  295944 system_pods.go:89] "registry-6b586f9694-qlm8d" [d973110c-93dd-4878-bcf2-c23a761ada84] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1102 13:14:45.563860  295944 system_pods.go:89] "registry-creds-764b6fb674-5ssmw" [ad1063de-4f14-47bb-a909-fea786b4406a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1102 13:14:45.563871  295944 system_pods.go:89] "registry-proxy-gk6xb" [bd4d6b0b-09b4-4d00-8a1f-01347f478af8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1102 13:14:45.563878  295944 system_pods.go:89] "snapshot-controller-7d9fbc56b8-rrmp6" [958df77e-6b38-4b22-b999-53d4f5e9d784] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1102 13:14:45.563891  295944 system_pods.go:89] "snapshot-controller-7d9fbc56b8-v88xw" [95d508cf-77db-4d04-aed4-ee3059235c7f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1102 13:14:45.563900  295944 system_pods.go:89] "storage-provisioner" [d2ef88eb-9da4-47ad-b13b-231eb6b4242b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1102 13:14:45.563920  295944 retry.go:31] will retry after 340.133045ms: missing components: kube-dns
	I1102 13:14:45.586028  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:45.646590  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:45.647236  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:45.919002  295944 system_pods.go:86] 19 kube-system pods found
	I1102 13:14:45.919038  295944 system_pods.go:89] "coredns-66bc5c9577-6rft9" [5b0e5e4b-ac40-44ba-8e2b-3f54328cc03c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1102 13:14:45.919049  295944 system_pods.go:89] "csi-hostpath-attacher-0" [86982496-2936-427c-8bd2-143ec9d85d4b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1102 13:14:45.919064  295944 system_pods.go:89] "csi-hostpath-resizer-0" [5254ca1f-81fa-460f-b1f4-b29debc9a19c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1102 13:14:45.919072  295944 system_pods.go:89] "csi-hostpathplugin-gnxtb" [8ffd4cf3-a402-41bb-882b-c2bbcfa42b1d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1102 13:14:45.919077  295944 system_pods.go:89] "etcd-addons-230560" [a35f5cb3-d29f-45ec-b91d-18d67639c4c8] Running
	I1102 13:14:45.919082  295944 system_pods.go:89] "kindnet-5dpxs" [577f8605-d5ec-4adc-aae2-56c098398734] Running
	I1102 13:14:45.919087  295944 system_pods.go:89] "kube-apiserver-addons-230560" [ec8d7fe3-227e-43c7-b35f-12f3e6c724a5] Running
	I1102 13:14:45.919095  295944 system_pods.go:89] "kube-controller-manager-addons-230560" [910be431-8bbf-4929-bce6-6d85b250063c] Running
	I1102 13:14:45.919102  295944 system_pods.go:89] "kube-ingress-dns-minikube" [afb0b7f8-4856-42f1-871f-197c757927fa] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1102 13:14:45.919111  295944 system_pods.go:89] "kube-proxy-dzts7" [b63fb7f2-7a3e-4ee4-92d8-6ee0a88acebb] Running
	I1102 13:14:45.919117  295944 system_pods.go:89] "kube-scheduler-addons-230560" [c869faf3-5fdf-4dd6-b3ae-a5c03b7b9ca3] Running
	I1102 13:14:45.919124  295944 system_pods.go:89] "metrics-server-85b7d694d7-npk5l" [828b803a-a751-44ee-9dfe-b0ffbba104f3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1102 13:14:45.919140  295944 system_pods.go:89] "nvidia-device-plugin-daemonset-qkqx4" [8db4aae3-2657-4773-b5b6-62fb681edaa0] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1102 13:14:45.919152  295944 system_pods.go:89] "registry-6b586f9694-qlm8d" [d973110c-93dd-4878-bcf2-c23a761ada84] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1102 13:14:45.919161  295944 system_pods.go:89] "registry-creds-764b6fb674-5ssmw" [ad1063de-4f14-47bb-a909-fea786b4406a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1102 13:14:45.919171  295944 system_pods.go:89] "registry-proxy-gk6xb" [bd4d6b0b-09b4-4d00-8a1f-01347f478af8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1102 13:14:45.919178  295944 system_pods.go:89] "snapshot-controller-7d9fbc56b8-rrmp6" [958df77e-6b38-4b22-b999-53d4f5e9d784] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1102 13:14:45.919184  295944 system_pods.go:89] "snapshot-controller-7d9fbc56b8-v88xw" [95d508cf-77db-4d04-aed4-ee3059235c7f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1102 13:14:45.919192  295944 system_pods.go:89] "storage-provisioner" [d2ef88eb-9da4-47ad-b13b-231eb6b4242b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1102 13:14:45.919216  295944 retry.go:31] will retry after 413.151336ms: missing components: kube-dns
	I1102 13:14:45.944882  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:46.087140  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:46.144664  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:46.146454  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:46.341305  295944 system_pods.go:86] 19 kube-system pods found
	I1102 13:14:46.341353  295944 system_pods.go:89] "coredns-66bc5c9577-6rft9" [5b0e5e4b-ac40-44ba-8e2b-3f54328cc03c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1102 13:14:46.341365  295944 system_pods.go:89] "csi-hostpath-attacher-0" [86982496-2936-427c-8bd2-143ec9d85d4b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1102 13:14:46.341380  295944 system_pods.go:89] "csi-hostpath-resizer-0" [5254ca1f-81fa-460f-b1f4-b29debc9a19c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1102 13:14:46.341392  295944 system_pods.go:89] "csi-hostpathplugin-gnxtb" [8ffd4cf3-a402-41bb-882b-c2bbcfa42b1d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1102 13:14:46.341405  295944 system_pods.go:89] "etcd-addons-230560" [a35f5cb3-d29f-45ec-b91d-18d67639c4c8] Running
	I1102 13:14:46.341429  295944 system_pods.go:89] "kindnet-5dpxs" [577f8605-d5ec-4adc-aae2-56c098398734] Running
	I1102 13:14:46.341434  295944 system_pods.go:89] "kube-apiserver-addons-230560" [ec8d7fe3-227e-43c7-b35f-12f3e6c724a5] Running
	I1102 13:14:46.341444  295944 system_pods.go:89] "kube-controller-manager-addons-230560" [910be431-8bbf-4929-bce6-6d85b250063c] Running
	I1102 13:14:46.341455  295944 system_pods.go:89] "kube-ingress-dns-minikube" [afb0b7f8-4856-42f1-871f-197c757927fa] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1102 13:14:46.341461  295944 system_pods.go:89] "kube-proxy-dzts7" [b63fb7f2-7a3e-4ee4-92d8-6ee0a88acebb] Running
	I1102 13:14:46.341466  295944 system_pods.go:89] "kube-scheduler-addons-230560" [c869faf3-5fdf-4dd6-b3ae-a5c03b7b9ca3] Running
	I1102 13:14:46.341473  295944 system_pods.go:89] "metrics-server-85b7d694d7-npk5l" [828b803a-a751-44ee-9dfe-b0ffbba104f3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1102 13:14:46.341493  295944 system_pods.go:89] "nvidia-device-plugin-daemonset-qkqx4" [8db4aae3-2657-4773-b5b6-62fb681edaa0] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1102 13:14:46.341507  295944 system_pods.go:89] "registry-6b586f9694-qlm8d" [d973110c-93dd-4878-bcf2-c23a761ada84] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1102 13:14:46.341518  295944 system_pods.go:89] "registry-creds-764b6fb674-5ssmw" [ad1063de-4f14-47bb-a909-fea786b4406a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1102 13:14:46.341529  295944 system_pods.go:89] "registry-proxy-gk6xb" [bd4d6b0b-09b4-4d00-8a1f-01347f478af8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1102 13:14:46.341536  295944 system_pods.go:89] "snapshot-controller-7d9fbc56b8-rrmp6" [958df77e-6b38-4b22-b999-53d4f5e9d784] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1102 13:14:46.341551  295944 system_pods.go:89] "snapshot-controller-7d9fbc56b8-v88xw" [95d508cf-77db-4d04-aed4-ee3059235c7f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1102 13:14:46.341564  295944 system_pods.go:89] "storage-provisioner" [d2ef88eb-9da4-47ad-b13b-231eb6b4242b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1102 13:14:46.341584  295944 retry.go:31] will retry after 690.623926ms: missing components: kube-dns
	I1102 13:14:46.466071  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:46.589269  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:46.689556  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:46.689733  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:46.943706  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:47.038475  295944 system_pods.go:86] 19 kube-system pods found
	I1102 13:14:47.038514  295944 system_pods.go:89] "coredns-66bc5c9577-6rft9" [5b0e5e4b-ac40-44ba-8e2b-3f54328cc03c] Running
	I1102 13:14:47.038526  295944 system_pods.go:89] "csi-hostpath-attacher-0" [86982496-2936-427c-8bd2-143ec9d85d4b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1102 13:14:47.038536  295944 system_pods.go:89] "csi-hostpath-resizer-0" [5254ca1f-81fa-460f-b1f4-b29debc9a19c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1102 13:14:47.038544  295944 system_pods.go:89] "csi-hostpathplugin-gnxtb" [8ffd4cf3-a402-41bb-882b-c2bbcfa42b1d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1102 13:14:47.038554  295944 system_pods.go:89] "etcd-addons-230560" [a35f5cb3-d29f-45ec-b91d-18d67639c4c8] Running
	I1102 13:14:47.038560  295944 system_pods.go:89] "kindnet-5dpxs" [577f8605-d5ec-4adc-aae2-56c098398734] Running
	I1102 13:14:47.038570  295944 system_pods.go:89] "kube-apiserver-addons-230560" [ec8d7fe3-227e-43c7-b35f-12f3e6c724a5] Running
	I1102 13:14:47.038581  295944 system_pods.go:89] "kube-controller-manager-addons-230560" [910be431-8bbf-4929-bce6-6d85b250063c] Running
	I1102 13:14:47.038592  295944 system_pods.go:89] "kube-ingress-dns-minikube" [afb0b7f8-4856-42f1-871f-197c757927fa] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1102 13:14:47.038596  295944 system_pods.go:89] "kube-proxy-dzts7" [b63fb7f2-7a3e-4ee4-92d8-6ee0a88acebb] Running
	I1102 13:14:47.038601  295944 system_pods.go:89] "kube-scheduler-addons-230560" [c869faf3-5fdf-4dd6-b3ae-a5c03b7b9ca3] Running
	I1102 13:14:47.038607  295944 system_pods.go:89] "metrics-server-85b7d694d7-npk5l" [828b803a-a751-44ee-9dfe-b0ffbba104f3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1102 13:14:47.038647  295944 system_pods.go:89] "nvidia-device-plugin-daemonset-qkqx4" [8db4aae3-2657-4773-b5b6-62fb681edaa0] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1102 13:14:47.038654  295944 system_pods.go:89] "registry-6b586f9694-qlm8d" [d973110c-93dd-4878-bcf2-c23a761ada84] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1102 13:14:47.038660  295944 system_pods.go:89] "registry-creds-764b6fb674-5ssmw" [ad1063de-4f14-47bb-a909-fea786b4406a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1102 13:14:47.038667  295944 system_pods.go:89] "registry-proxy-gk6xb" [bd4d6b0b-09b4-4d00-8a1f-01347f478af8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1102 13:14:47.038680  295944 system_pods.go:89] "snapshot-controller-7d9fbc56b8-rrmp6" [958df77e-6b38-4b22-b999-53d4f5e9d784] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1102 13:14:47.038689  295944 system_pods.go:89] "snapshot-controller-7d9fbc56b8-v88xw" [95d508cf-77db-4d04-aed4-ee3059235c7f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1102 13:14:47.038702  295944 system_pods.go:89] "storage-provisioner" [d2ef88eb-9da4-47ad-b13b-231eb6b4242b] Running
	I1102 13:14:47.038718  295944 system_pods.go:126] duration metric: took 2.093464281s to wait for k8s-apps to be running ...
	I1102 13:14:47.038731  295944 system_svc.go:44] waiting for kubelet service to be running ....
	I1102 13:14:47.038793  295944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1102 13:14:47.054704  295944 system_svc.go:56] duration metric: took 15.953416ms WaitForService to wait for kubelet
	I1102 13:14:47.054743  295944 kubeadm.go:587] duration metric: took 44.806099673s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1102 13:14:47.054764  295944 node_conditions.go:102] verifying NodePressure condition ...
	I1102 13:14:47.058030  295944 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1102 13:14:47.058063  295944 node_conditions.go:123] node cpu capacity is 2
	I1102 13:14:47.058077  295944 node_conditions.go:105] duration metric: took 3.305914ms to run NodePressure ...
	I1102 13:14:47.058089  295944 start.go:242] waiting for startup goroutines ...
	I1102 13:14:47.085492  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:47.147449  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:47.147887  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:47.443134  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:47.585453  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:47.646681  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:47.648812  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:47.944035  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:48.085146  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:48.145912  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:48.146552  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:48.443887  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:48.585215  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:48.645656  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:48.646378  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:48.696636  295944 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1102 13:14:48.943240  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:49.085946  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:49.144542  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:49.147374  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:49.443537  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:49.585917  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:49.647915  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:49.648898  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:49.706141  295944 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.009417989s)
	W1102 13:14:49.706181  295944 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1102 13:14:49.706200  295944 retry.go:31] will retry after 18.837928504s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
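	Note: the validation error above ("apiVersion not set, kind not set") means kubectl's client-side validator found a document in ig-crd.yaml missing those two required fields; every document in a manifest passed to kubectl apply must declare both. A minimal sketch of a CRD manifest that would pass this validation (all names below are illustrative, not the actual contents of ig-crd.yaml):

	apiVersion: apiextensions.k8s.io/v1
	kind: CustomResourceDefinition
	metadata:
	  name: examples.demo.example.com   # hypothetical CRD name
	spec:
	  group: demo.example.com
	  names:
	    kind: Example
	    plural: examples
	    singular: example
	  scope: Namespaced
	  versions:
	    - name: v1
	      served: true
	      storage: true
	      schema:
	        openAPIV3Schema:
	          type: object

	As the error text itself notes, validation can be bypassed with --validate=false, but that only masks the malformed document rather than fixing it.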
	I1102 13:14:49.943950  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:50.085755  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:50.145433  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:50.147535  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:50.443150  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:50.585076  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:50.645395  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:50.646053  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:50.943106  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:51.085602  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:51.147713  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:51.156074  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:51.442960  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:51.585087  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:51.646298  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:51.646573  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:51.946191  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:52.085854  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:52.145360  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:52.147404  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:52.444234  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:52.590493  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:52.647783  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:52.648225  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:52.946776  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:53.089565  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:53.151634  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:53.152962  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:53.445677  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:53.586910  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:53.648627  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:53.649377  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:53.944783  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:54.087376  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:54.147795  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:54.148312  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:54.444788  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:54.585990  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:54.687213  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:54.687217  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:54.958851  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:55.085907  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:55.146779  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:55.146999  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:55.445218  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:55.586269  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:55.646661  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:55.647091  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:55.943260  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:56.085668  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:56.146631  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:56.146885  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:56.443466  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:56.585687  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:56.646167  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:56.648457  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:56.943687  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:57.085933  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:57.144308  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:57.145165  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:57.443864  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:57.585197  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:57.645993  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:57.648367  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:57.944089  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:58.086043  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:58.145739  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:58.146213  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:58.444030  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:58.585257  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:58.645775  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:58.646839  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:58.943467  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:59.085381  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:59.146262  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:59.147787  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:59.444074  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:59.585422  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:59.647302  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:59.647867  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:59.943440  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:00.090318  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:15:00.148435  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:00.150753  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:00.451266  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:00.586190  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:15:00.657998  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:00.658389  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:00.948357  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:01.085964  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:15:01.147663  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:01.147828  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:01.444245  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:01.586152  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:15:01.648216  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:01.648955  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:01.944803  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:02.085667  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:15:02.146210  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:02.147781  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:02.443562  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:02.585524  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:15:02.645823  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:02.647243  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:02.943423  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:03.085986  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:15:03.144310  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:03.147418  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:03.446774  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:03.586760  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:15:03.644817  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:03.649650  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:03.944540  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:04.090192  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:15:04.147657  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:04.148059  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:04.443445  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:04.608632  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:15:04.649329  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:04.649794  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:04.944824  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:05.089511  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:15:05.154374  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:05.155231  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:05.443565  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:05.586536  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:15:05.646339  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:05.647845  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:05.944293  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:06.087049  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:15:06.201866  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:06.201995  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:06.443304  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:06.585855  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:15:06.644137  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:06.645107  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:06.948444  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:07.085778  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:15:07.145047  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:07.145695  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:07.443620  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:07.585467  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:15:07.644316  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:07.646568  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:07.944034  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:08.084929  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:15:08.145771  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:08.145952  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:08.446137  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:08.544273  295944 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1102 13:15:08.586168  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:15:08.687424  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:08.687904  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:08.943112  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:09.086017  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:15:09.147272  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:09.147598  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:09.444973  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:09.574367  295944 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.030048061s)
	W1102 13:15:09.574407  295944 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1102 13:15:09.574427  295944 retry.go:31] will retry after 28.801030851s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1102 13:15:09.586085  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:15:09.644729  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:09.645712  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:09.946120  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:10.086398  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:15:10.187545  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:10.187723  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:10.443882  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:10.586109  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:15:10.644771  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:10.646406  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:10.944153  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:11.086756  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:15:11.187403  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:11.187850  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:11.443907  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:11.585777  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:15:11.646250  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:11.646995  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:11.943281  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:12.089144  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:15:12.145950  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:12.146795  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:12.443643  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:12.585717  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:15:12.646298  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:12.647254  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:12.943557  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:13.085999  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:15:13.144776  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:13.146220  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:13.443460  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:13.585697  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:15:13.646883  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:13.647521  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:13.943841  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:14.086087  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:15:14.147272  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:14.148434  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:14.443776  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:14.585578  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:15:14.645174  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:14.646673  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:14.943901  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:15.085977  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:15:15.144615  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:15.148985  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:15.443750  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:15.586896  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:15:15.644949  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:15.646761  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:15.943951  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:16.085365  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:15:16.152092  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:16.152549  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:16.444204  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:16.585482  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:15:16.645874  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:16.647504  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:16.944198  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:17.089215  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:15:17.146547  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:17.146970  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:17.444164  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:17.586589  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:15:17.646736  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:17.646896  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:17.944061  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:18.085261  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:15:18.147911  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:18.150001  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:18.444171  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:18.585349  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:15:18.645430  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:18.646871  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:18.943011  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:19.085181  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:15:19.144314  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:19.147301  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:19.444413  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:19.586296  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:15:19.689896  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:19.690334  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:19.950208  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:20.086345  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:15:20.145262  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:20.147018  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:20.443610  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:20.585919  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:15:20.646369  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:20.651752  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:20.944615  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:21.086238  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:15:21.143987  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:21.145950  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:21.443252  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:21.585506  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:15:21.644747  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:21.645407  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:21.952370  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:22.087139  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:15:22.144078  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:22.144734  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:22.444257  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:22.585296  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:15:22.648145  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:22.648560  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:22.955917  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:23.085219  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:15:23.146419  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:23.148137  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:23.446177  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:23.585184  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:15:23.644523  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:23.646924  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:23.944137  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:24.085492  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:15:24.144695  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:24.147340  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:24.444368  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:24.585781  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:15:24.646460  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:24.647157  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:24.950972  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:25.085906  295944 kapi.go:107] duration metric: took 1m17.503911279s to wait for kubernetes.io/minikube-addons=registry ...
	I1102 13:15:25.144662  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:25.146953  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:25.444094  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:25.647205  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:25.647602  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:25.943927  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:26.144575  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:26.147205  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:26.443875  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:26.645229  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:26.646889  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:26.944172  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:27.144436  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:27.147842  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:27.443866  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:27.645346  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:27.647125  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:27.943820  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:28.144833  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:28.146876  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:28.444344  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:28.646752  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:28.646972  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:28.943320  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:29.146437  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:29.146601  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:29.444175  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:29.646516  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:29.647355  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:29.943247  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:30.145057  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:30.147220  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:30.445389  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:30.644599  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:30.646176  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:30.943227  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:31.145733  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:31.146787  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:31.444715  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:31.647363  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:31.648356  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:31.943463  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:32.147110  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:32.147301  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:32.450280  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:32.646071  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:32.646536  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:32.944012  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:33.145553  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:33.146336  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:33.443823  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:33.645571  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:33.645806  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:33.943416  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:34.146025  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:34.146260  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:34.443807  295944 kapi.go:107] duration metric: took 1m22.003713397s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1102 13:15:34.448204  295944 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-230560 cluster.
	I1102 13:15:34.452926  295944 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1102 13:15:34.457152  295944 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1102 13:15:34.645506  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:34.647159  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:35.145947  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:35.146128  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:35.651074  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:35.651272  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:36.145952  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:36.146342  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:36.646161  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:36.646373  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:37.146191  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:37.146832  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:37.645747  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:37.646519  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:38.145517  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:38.147568  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:38.375823  295944 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1102 13:15:38.646054  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:38.646538  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:39.146380  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:39.146691  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:39.397584  295944 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.021725083s)
	W1102 13:15:39.397617  295944 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1102 13:15:39.397702  295944 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
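
The validation failure above is self-describing: at least one document in ig-crd.yaml reached kubectl without apiVersion and kind, the two fields every Kubernetes manifest must declare, while the rest of the bundle applied cleanly ("unchanged"/"configured"). A minimal sketch of the same client-side check over a multi-document YAML stream, assuming gopkg.in/yaml.v3 and stdin as input (illustrative only, not minikube code):

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

// typeMeta mirrors the two fields kubectl's validator reported as missing.
type typeMeta struct {
	APIVersion string `yaml:"apiVersion"`
	Kind       string `yaml:"kind"`
}

func main() {
	dec := yaml.NewDecoder(os.Stdin)
	for i := 1; ; i++ {
		var tm typeMeta
		err := dec.Decode(&tm)
		if err == io.EOF {
			return
		}
		if err != nil {
			fmt.Printf("doc %d: parse error: %v\n", i, err)
			return
		}
		if tm.APIVersion == "" || tm.Kind == "" {
			fmt.Printf("doc %d: apiVersion/kind not set (kubectl would reject this)\n", i)
		}
	}
}

Passing --validate=false, as the error message suggests, skips this check entirely; minikube instead retries the apply, per the "will retry" warning above.
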
	I1102 13:15:39.644493  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:39.645761  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:40.146738  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:40.148233  295944 kapi.go:107] duration metric: took 1m31.005988888s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1102 13:15:40.645440  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:41.144167  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:41.644677  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:42.144491  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:42.644592  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:43.144231  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:43.644604  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:44.144188  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:44.645346  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:45.149758  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:45.644318  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:46.144003  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:46.654016  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:47.144798  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:47.643985  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:48.144815  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:48.645012  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:49.145120  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:49.645425  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:50.144747  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:50.645396  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:51.145003  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:51.645038  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:52.143987  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:52.645461  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:53.143713  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:53.643783  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:54.144138  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:54.644588  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:55.144402  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:55.644416  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:56.144313  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:56.644358  295944 kapi.go:107] duration metric: took 1m48.003555043s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1102 13:15:56.647880  295944 out.go:179] * Enabled addons: amd-gpu-device-plugin, nvidia-device-plugin, registry-creds, storage-provisioner, storage-provisioner-rancher, cloud-spanner, ingress-dns, yakd, metrics-server, default-storageclass, volumesnapshots, registry, gcp-auth, csi-hostpath-driver, ingress
	I1102 13:15:56.650818  295944 addons.go:515] duration metric: took 1m54.401733059s for enable addons: enabled=[amd-gpu-device-plugin nvidia-device-plugin registry-creds storage-provisioner storage-provisioner-rancher cloud-spanner ingress-dns yakd metrics-server default-storageclass volumesnapshots registry gcp-auth csi-hostpath-driver ingress]
	I1102 13:15:56.650879  295944 start.go:247] waiting for cluster config update ...
	I1102 13:15:56.650903  295944 start.go:256] writing updated cluster config ...
	I1102 13:15:56.651203  295944 ssh_runner.go:195] Run: rm -f paused
	I1102 13:15:56.656614  295944 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1102 13:15:56.660004  295944 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-6rft9" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:15:56.664943  295944 pod_ready.go:94] pod "coredns-66bc5c9577-6rft9" is "Ready"
	I1102 13:15:56.664974  295944 pod_ready.go:86] duration metric: took 4.942922ms for pod "coredns-66bc5c9577-6rft9" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:15:56.667138  295944 pod_ready.go:83] waiting for pod "etcd-addons-230560" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:15:56.671608  295944 pod_ready.go:94] pod "etcd-addons-230560" is "Ready"
	I1102 13:15:56.671640  295944 pod_ready.go:86] duration metric: took 4.478268ms for pod "etcd-addons-230560" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:15:56.674041  295944 pod_ready.go:83] waiting for pod "kube-apiserver-addons-230560" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:15:56.678301  295944 pod_ready.go:94] pod "kube-apiserver-addons-230560" is "Ready"
	I1102 13:15:56.678326  295944 pod_ready.go:86] duration metric: took 4.258221ms for pod "kube-apiserver-addons-230560" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:15:56.680789  295944 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-230560" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:15:57.060979  295944 pod_ready.go:94] pod "kube-controller-manager-addons-230560" is "Ready"
	I1102 13:15:57.061012  295944 pod_ready.go:86] duration metric: took 380.165348ms for pod "kube-controller-manager-addons-230560" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:15:57.261541  295944 pod_ready.go:83] waiting for pod "kube-proxy-dzts7" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:15:57.660885  295944 pod_ready.go:94] pod "kube-proxy-dzts7" is "Ready"
	I1102 13:15:57.660917  295944 pod_ready.go:86] duration metric: took 399.349291ms for pod "kube-proxy-dzts7" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:15:57.861074  295944 pod_ready.go:83] waiting for pod "kube-scheduler-addons-230560" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:15:58.260419  295944 pod_ready.go:94] pod "kube-scheduler-addons-230560" is "Ready"
	I1102 13:15:58.260489  295944 pod_ready.go:86] duration metric: took 399.388209ms for pod "kube-scheduler-addons-230560" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:15:58.260509  295944 pod_ready.go:40] duration metric: took 1.603865354s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1102 13:15:58.314774  295944 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1102 13:15:58.319879  295944 out.go:179] * Done! kubectl is now configured to use "addons-230560" cluster and "default" namespace by default
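
The kapi.go:96 lines that dominate this log come from a poll-until-Running loop over a label selector, ticking roughly every 500ms until the matching kapi.go:107 line reports the total wait duration (1m22s for gcp-auth, 1m31s for csi-hostpath-driver, 1m48s for ingress-nginx above). A library-style sketch of that pattern with client-go, where the package, function name, and readiness test are assumptions (this is not the actual minikube implementation):

package poll

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// WaitForLabel polls pods matching selector in ns until every match is
// Running, printing one line per poll in the same shape as the kapi.go
// entries above.
func WaitForLabel(ctx context.Context, c kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err == nil && len(pods.Items) > 0 {
			running := true
			for _, p := range pods.Items {
				if p.Status.Phase != v1.PodRunning {
					running = false
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
				}
			}
			if running {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond) // the log ticks at roughly this cadence
	}
	return fmt.Errorf("timed out waiting for %s", selector)
}
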
	
	
	==> CRI-O <==
	Nov 02 13:18:56 addons-230560 crio[833]: time="2025-11-02T13:18:56.888397745Z" level=info msg="Checking image status: docker.io/upmcenterprises/registry-creds:1.10@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605" id=1b82935b-3305-490e-affb-d202ac41bc91 name=/runtime.v1.ImageService/ImageStatus
	Nov 02 13:18:56 addons-230560 crio[833]: time="2025-11-02T13:18:56.895478198Z" level=info msg="Checking image status: docker.io/upmcenterprises/registry-creds:1.10@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605" id=7d104e5d-74af-4564-b27b-07d505c73edb name=/runtime.v1.ImageService/ImageStatus
	Nov 02 13:18:57 addons-230560 crio[833]: time="2025-11-02T13:18:57.024792299Z" level=info msg="Creating container: kube-system/registry-creds-764b6fb674-5ssmw/registry-creds" id=51199a6e-c5ba-4992-8b23-606413cd9c9a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 13:18:57 addons-230560 crio[833]: time="2025-11-02T13:18:57.02505791Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 13:18:57 addons-230560 crio[833]: time="2025-11-02T13:18:57.042470236Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 13:18:57 addons-230560 crio[833]: time="2025-11-02T13:18:57.043859454Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 13:18:57 addons-230560 crio[833]: time="2025-11-02T13:18:57.06695869Z" level=info msg="Created container 64591067ebbed4d30c76ae8b932294714d13f483da4cd6b6f1d2d24b25201a19: kube-system/registry-creds-764b6fb674-5ssmw/registry-creds" id=51199a6e-c5ba-4992-8b23-606413cd9c9a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 13:18:57 addons-230560 crio[833]: time="2025-11-02T13:18:57.06891768Z" level=info msg="Starting container: 64591067ebbed4d30c76ae8b932294714d13f483da4cd6b6f1d2d24b25201a19" id=93df1c81-703d-4f56-b93d-c19ea888b613 name=/runtime.v1.RuntimeService/StartContainer
	Nov 02 13:18:57 addons-230560 conmon[7245]: conmon 64591067ebbed4d30c76 <ninfo>: container 7247 exited with status 1
	Nov 02 13:18:57 addons-230560 crio[833]: time="2025-11-02T13:18:57.077678612Z" level=info msg="Started container" PID=7247 containerID=64591067ebbed4d30c76ae8b932294714d13f483da4cd6b6f1d2d24b25201a19 description=kube-system/registry-creds-764b6fb674-5ssmw/registry-creds id=93df1c81-703d-4f56-b93d-c19ea888b613 name=/runtime.v1.RuntimeService/StartContainer sandboxID=806d46e4a762bcd5cec77909df9a41eced425b89ce1fe238fef6ec02117851d0
	Nov 02 13:18:57 addons-230560 crio[833]: time="2025-11-02T13:18:57.122769839Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b" id=655837e1-4751-42ae-a477-8c7ab14a429a name=/runtime.v1.ImageService/PullImage
	Nov 02 13:18:57 addons-230560 crio[833]: time="2025-11-02T13:18:57.123555358Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=aa2d5ddf-28e0-4b6f-a8de-44f8eef4aa82 name=/runtime.v1.ImageService/ImageStatus
	Nov 02 13:18:57 addons-230560 crio[833]: time="2025-11-02T13:18:57.126149276Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=9001fda4-fe0f-4c7c-bdc1-ff0eb10bb9e1 name=/runtime.v1.ImageService/ImageStatus
	Nov 02 13:18:57 addons-230560 crio[833]: time="2025-11-02T13:18:57.133868941Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-7925d/hello-world-app" id=2b37331f-d59d-456e-a31b-a259890cf3f8 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 13:18:57 addons-230560 crio[833]: time="2025-11-02T13:18:57.134162639Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 13:18:57 addons-230560 crio[833]: time="2025-11-02T13:18:57.14117809Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 13:18:57 addons-230560 crio[833]: time="2025-11-02T13:18:57.141371758Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/3ab90929e8ac7052bb6a07f0d1b134bb701080f8f39d1f39b20526b941fb41e4/merged/etc/passwd: no such file or directory"
	Nov 02 13:18:57 addons-230560 crio[833]: time="2025-11-02T13:18:57.141395709Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/3ab90929e8ac7052bb6a07f0d1b134bb701080f8f39d1f39b20526b941fb41e4/merged/etc/group: no such file or directory"
	Nov 02 13:18:57 addons-230560 crio[833]: time="2025-11-02T13:18:57.141676073Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 13:18:57 addons-230560 crio[833]: time="2025-11-02T13:18:57.184875043Z" level=info msg="Created container 663d6f0f1a51155bf2ae7c0fb7f2fb169f6fd8eceb2f94e36a7b559730eba22d: default/hello-world-app-5d498dc89-7925d/hello-world-app" id=2b37331f-d59d-456e-a31b-a259890cf3f8 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 13:18:57 addons-230560 crio[833]: time="2025-11-02T13:18:57.185869566Z" level=info msg="Starting container: 663d6f0f1a51155bf2ae7c0fb7f2fb169f6fd8eceb2f94e36a7b559730eba22d" id=dc088e67-beb6-4711-bd87-f98f494cd269 name=/runtime.v1.RuntimeService/StartContainer
	Nov 02 13:18:57 addons-230560 crio[833]: time="2025-11-02T13:18:57.188759873Z" level=info msg="Started container" PID=7261 containerID=663d6f0f1a51155bf2ae7c0fb7f2fb169f6fd8eceb2f94e36a7b559730eba22d description=default/hello-world-app-5d498dc89-7925d/hello-world-app id=dc088e67-beb6-4711-bd87-f98f494cd269 name=/runtime.v1.RuntimeService/StartContainer sandboxID=555da22febdf427665b87f5e04868a06dd41169351536dadc04aefda3f7ad472
	Nov 02 13:18:57 addons-230560 crio[833]: time="2025-11-02T13:18:57.598163632Z" level=info msg="Removing container: 93372fd9f613e7b09b245975b1f389cdbb932f873b8ccaf9002728515d6ecd86" id=e245c6aa-cf3c-4175-8e02-c42aa61622ff name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 02 13:18:57 addons-230560 crio[833]: time="2025-11-02T13:18:57.610965047Z" level=info msg="Error loading conmon cgroup of container 93372fd9f613e7b09b245975b1f389cdbb932f873b8ccaf9002728515d6ecd86: cgroup deleted" id=e245c6aa-cf3c-4175-8e02-c42aa61622ff name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 02 13:18:57 addons-230560 crio[833]: time="2025-11-02T13:18:57.615724111Z" level=info msg="Removed container 93372fd9f613e7b09b245975b1f389cdbb932f873b8ccaf9002728515d6ecd86: kube-system/registry-creds-764b6fb674-5ssmw/registry-creds" id=e245c6aa-cf3c-4175-8e02-c42aa61622ff name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	663d6f0f1a511       docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b                                        1 second ago        Running             hello-world-app                          0                   555da22febdf4       hello-world-app-5d498dc89-7925d             default
	64591067ebbed       a2fd0654e5baeec8de2209bfade13a0034e942e708fd2bbfce69bb26a3c02e14                                                                             1 second ago        Exited              registry-creds                           1                   806d46e4a762b       registry-creds-764b6fb674-5ssmw             kube-system
	6245330cc4e3e       docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90                                              2 minutes ago       Running             nginx                                    0                   2f79f0d4b617b       nginx                                       default
	4b53de0687394       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          2 minutes ago       Running             busybox                                  0                   90cb1aa0f09d9       busybox                                     default
	a7ffc634ec21a       registry.k8s.io/ingress-nginx/controller@sha256:4ae52268a9493fc308d5f2fb67fe657d2499293aa644122d385ddb60c2330dbc                             3 minutes ago       Running             controller                               0                   d0cd0f6b41b44       ingress-nginx-controller-675c5ddd98-vlthl   ingress-nginx
	59d3d49e880a7       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          3 minutes ago       Running             csi-snapshotter                          0                   f9437ce3a1988       csi-hostpathplugin-gnxtb                    kube-system
	495997c964878       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          3 minutes ago       Running             csi-provisioner                          0                   f9437ce3a1988       csi-hostpathplugin-gnxtb                    kube-system
	92ee410347c1f       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            3 minutes ago       Running             liveness-probe                           0                   f9437ce3a1988       csi-hostpathplugin-gnxtb                    kube-system
	0fc09c9a2e59c       9a80c0c8eb61cb88536fa58caaf18357fffd3e9fd0481b2781dfc6359f7654c9                                                                             3 minutes ago       Exited              patch                                    2                   fcc604782c6f5       ingress-nginx-admission-patch-qdswx         ingress-nginx
	14fe9bc0e4e3f       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           3 minutes ago       Running             hostpath                                 0                   f9437ce3a1988       csi-hostpathplugin-gnxtb                    kube-system
	a232385e88d3d       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 3 minutes ago       Running             gcp-auth                                 0                   3d1cd92e75890       gcp-auth-78565c9fb4-t4725                   gcp-auth
	990b6d45c69f1       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:f279436ecca5b88c20fd93c0d2a668ace136058ecad987e96e26014585e335b4                            3 minutes ago       Running             gadget                                   0                   10b781175e4f6       gadget-dv9jw                                gadget
	849382b87b03a       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                3 minutes ago       Running             node-driver-registrar                    0                   f9437ce3a1988       csi-hostpathplugin-gnxtb                    kube-system
	2c92ca6fd79f7       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:2d5727fcf5b9ee2bd367835234500c1ec7f54a0b94ea92a76169a9308a197e93                   3 minutes ago       Exited              create                                   0                   17f2fc0ac69a4       ingress-nginx-admission-create-nh5wk        ingress-nginx
	de7641522a905       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              3 minutes ago       Running             registry-proxy                           0                   6a896cce9b5c5       registry-proxy-gk6xb                        kube-system
	ce226e80e176f       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              3 minutes ago       Running             csi-resizer                              0                   f90af80a65b61       csi-hostpath-resizer-0                      kube-system
	a0486cd1530aa       gcr.io/cloud-spanner-emulator/emulator@sha256:c2688dc4b7ecb4546084321d63c2b3b616a54263488137e18fcb7c7005aef086                               3 minutes ago       Running             cloud-spanner-emulator                   0                   e4fe152b29c45       cloud-spanner-emulator-86bd5cbb97-5rtv5     default
	43495555e2c69       nvcr.io/nvidia/k8s-device-plugin@sha256:80924fc52384565a7c59f1e2f12319fb8f2b02a1c974bb3d73a9853fe01af874                                     3 minutes ago       Running             nvidia-device-plugin-ctr                 0                   29b8acc6b965d       nvidia-device-plugin-daemonset-qkqx4        kube-system
	23d26c5efd413       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      3 minutes ago       Running             volume-snapshot-controller               0                   a62801280fa19       snapshot-controller-7d9fbc56b8-rrmp6        kube-system
	f4000d22ba555       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   3 minutes ago       Running             csi-external-health-monitor-controller   0                   f9437ce3a1988       csi-hostpathplugin-gnxtb                    kube-system
	571d698a41a0b       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             3 minutes ago       Running             csi-attacher                             0                   005cb3bb5e38f       csi-hostpath-attacher-0                     kube-system
	1d9d1f4432586       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             3 minutes ago       Running             local-path-provisioner                   0                   3d73d04b7ba5d       local-path-provisioner-648f6765c9-cbq27     local-path-storage
	b05b32f995002       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      3 minutes ago       Running             volume-snapshot-controller               0                   d09cf2b6c9539       snapshot-controller-7d9fbc56b8-v88xw        kube-system
	ece119ee391be       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               3 minutes ago       Running             minikube-ingress-dns                     0                   5e3feebe82373       kube-ingress-dns-minikube                   kube-system
	7d130b18d8ef1       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        4 minutes ago       Running             metrics-server                           0                   bf6481a68caa8       metrics-server-85b7d694d7-npk5l             kube-system
	01cc86f91cc93       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           4 minutes ago       Running             registry                                 0                   3f6d1dc045b68       registry-6b586f9694-qlm8d                   kube-system
	2e9e9c9def04e       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              4 minutes ago       Running             yakd                                     0                   03ab2151ad8a1       yakd-dashboard-5ff678cb9-j4lzk              yakd-dashboard
	7c311915f4fbc       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             4 minutes ago       Running             storage-provisioner                      0                   e9c1284e69898       storage-provisioner                         kube-system
	2d7e91ed3fc10       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             4 minutes ago       Running             coredns                                  0                   3a1d5b80f2ad2       coredns-66bc5c9577-6rft9                    kube-system
	b8f72f36b8b68       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                                             4 minutes ago       Running             kube-proxy                               0                   5dcbfbf0bd68f       kube-proxy-dzts7                            kube-system
	7c3129e8902e2       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             4 minutes ago       Running             kindnet-cni                              0                   53336ad664753       kindnet-5dpxs                               kube-system
	ba2b8cd401ace       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             5 minutes ago       Running             etcd                                     0                   d41f150d013d2       etcd-addons-230560                          kube-system
	ae6a81713fca4       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                                             5 minutes ago       Running             kube-apiserver                           0                   ac4d436091f61       kube-apiserver-addons-230560                kube-system
	e520da42d44ee       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                                             5 minutes ago       Running             kube-scheduler                           0                   0de86bf9d5e03       kube-scheduler-addons-230560                kube-system
	47bfba99e6f29       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                                             5 minutes ago       Running             kube-controller-manager                  0                   7d0e91007728a       kube-controller-manager-addons-230560       kube-system
	
	
	==> coredns [2d7e91ed3fc10a735909e92c3d70b5422345ba649e0f465bf27dbb923af7877c] <==
	[INFO] 10.244.0.12:53840 - 32280 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.004736002s
	[INFO] 10.244.0.12:53840 - 1617 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.00018109s
	[INFO] 10.244.0.12:53840 - 37667 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000416629s
	[INFO] 10.244.0.12:47877 - 38611 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00017551s
	[INFO] 10.244.0.12:47877 - 37216 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000118163s
	[INFO] 10.244.0.12:53193 - 56203 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000121519s
	[INFO] 10.244.0.12:53193 - 56400 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000156556s
	[INFO] 10.244.0.12:46803 - 44052 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000082659s
	[INFO] 10.244.0.12:46803 - 43855 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000179555s
	[INFO] 10.244.0.12:52409 - 5191 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001257573s
	[INFO] 10.244.0.12:52409 - 5011 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001502342s
	[INFO] 10.244.0.12:48444 - 58470 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000156662s
	[INFO] 10.244.0.12:48444 - 58306 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00014332s
	[INFO] 10.244.0.20:44299 - 14091 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000174984s
	[INFO] 10.244.0.20:46874 - 43093 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000154234s
	[INFO] 10.244.0.20:44232 - 5634 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000159854s
	[INFO] 10.244.0.20:51630 - 28040 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000112395s
	[INFO] 10.244.0.20:56375 - 8314 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000230796s
	[INFO] 10.244.0.20:57702 - 24696 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000371508s
	[INFO] 10.244.0.20:39104 - 13160 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002011788s
	[INFO] 10.244.0.20:36510 - 41422 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002167424s
	[INFO] 10.244.0.20:53921 - 23783 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.000734482s
	[INFO] 10.244.0.20:44992 - 41293 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001514068s
	[INFO] 10.244.0.23:43055 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000188244s
	[INFO] 10.244.0.23:59952 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000249472s
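
The NXDOMAIN ladder above is ordinary resolver search-path expansion, not a fault: with the ndots:5 setting typical of in-cluster resolv.conf, a name with fewer than five dots is first tried against each search domain (kube-system.svc.cluster.local, svc.cluster.local, cluster.local, us-east-2.compute.internal, per the suffixes visible in the queries) and only then as-is, which is the single NOERROR answer. A small sketch of that try-order, with the search list inferred from the log rather than read from the node:

package main

import "fmt"

// expand reproduces the ordering behind the coredns queries above:
// under-qualified names get each search domain appended before the
// absolute form is attempted.
func expand(name string, search []string, ndots int) []string {
	dots := 0
	for _, r := range name {
		if r == '.' {
			dots++
		}
	}
	var tries []string
	if dots >= ndots {
		tries = append(tries, name) // qualified enough: absolute form first
	}
	for _, d := range search {
		tries = append(tries, name+"."+d)
	}
	if dots < ndots {
		tries = append(tries, name) // absolute form last
	}
	return tries
}

func main() {
	search := []string{"kube-system.svc.cluster.local", "svc.cluster.local", "cluster.local", "us-east-2.compute.internal"}
	for _, q := range expand("registry.kube-system.svc.cluster.local", search, 5) {
		fmt.Println(q) // matches the query sequence in the log: NXDOMAIN until the last
	}
}
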
	
	
	==> describe nodes <==
	Name:               addons-230560
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-230560
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=170a9221ec214abbddb4c7cdac340516a92b239a
	                    minikube.k8s.io/name=addons-230560
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_02T13_13_58_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-230560
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-230560"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 02 Nov 2025 13:13:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-230560
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 02 Nov 2025 13:18:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 02 Nov 2025 13:18:43 +0000   Sun, 02 Nov 2025 13:13:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 02 Nov 2025 13:18:43 +0000   Sun, 02 Nov 2025 13:13:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 02 Nov 2025 13:18:43 +0000   Sun, 02 Nov 2025 13:13:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 02 Nov 2025 13:18:43 +0000   Sun, 02 Nov 2025 13:14:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-230560
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                c6afd72e-c193-43eb-ae12-e791b22211d1
	  Boot ID:                    4c303cf4-c0b6-4c0d-aa1a-d7509feb15e7
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m59s
	  default                     cloud-spanner-emulator-86bd5cbb97-5rtv5      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	  default                     hello-world-app-5d498dc89-7925d              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m23s
	  gadget                      gadget-dv9jw                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m51s
	  gcp-auth                    gcp-auth-78565c9fb4-t4725                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m46s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-vlthl    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         4m50s
	  kube-system                 coredns-66bc5c9577-6rft9                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     4m56s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m50s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m49s
	  kube-system                 csi-hostpathplugin-gnxtb                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m14s
	  kube-system                 etcd-addons-230560                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         5m2s
	  kube-system                 kindnet-5dpxs                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      4m56s
	  kube-system                 kube-apiserver-addons-230560                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m1s
	  kube-system                 kube-controller-manager-addons-230560        200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m1s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m53s
	  kube-system                 kube-proxy-dzts7                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m56s
	  kube-system                 kube-scheduler-addons-230560                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m2s
	  kube-system                 metrics-server-85b7d694d7-npk5l              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         4m51s
	  kube-system                 nvidia-device-plugin-daemonset-qkqx4         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m14s
	  kube-system                 registry-6b586f9694-qlm8d                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	  kube-system                 registry-creds-764b6fb674-5ssmw              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m53s
	  kube-system                 registry-proxy-gk6xb                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m14s
	  kube-system                 snapshot-controller-7d9fbc56b8-rrmp6         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m50s
	  kube-system                 snapshot-controller-7d9fbc56b8-v88xw         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m50s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m51s
	  local-path-storage          local-path-provisioner-648f6765c9-cbq27      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-j4lzk               0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     4m51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 4m53s                kube-proxy       
	  Normal   Starting                 5m9s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m9s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m9s (x8 over 5m9s)  kubelet          Node addons-230560 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m9s (x8 over 5m9s)  kubelet          Node addons-230560 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m9s (x8 over 5m9s)  kubelet          Node addons-230560 status is now: NodeHasSufficientPID
	  Normal   Starting                 5m1s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 5m1s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  5m1s                 kubelet          Node addons-230560 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m1s                 kubelet          Node addons-230560 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m1s                 kubelet          Node addons-230560 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4m57s                node-controller  Node addons-230560 event: Registered Node addons-230560 in Controller
	  Normal   NodeReady                4m14s                kubelet          Node addons-230560 status is now: NodeReady
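
The Allocated resources summary is consistent with the pod table: the eight non-zero CPU requests (250m kube-apiserver, 200m kube-controller-manager, and 100m each for the ingress-nginx controller, coredns, etcd, kindnet, kube-scheduler, and metrics-server) sum to 1050m, and the memory requests (90+70+100+50+200+128 Mi) to 638Mi. A quick check of the percentages against the 2-CPU / 8022296Ki node, with the values copied from the table above:

package main

import "fmt"

func main() {
	// CPU requests in millicores, straight from the pod table.
	cpu := []int{250, 200, 100, 100, 100, 100, 100, 100}
	totalCPU := 0
	for _, m := range cpu {
		totalCPU += m
	}
	fmt.Printf("cpu requests: %dm = %.1f%% of 2000m (shown truncated as 52%%)\n",
		totalCPU, float64(totalCPU)/2000*100)

	// Memory requests in Mi; allocatable 8022296Ki is ~7834Mi, so ~8%.
	mem := 90 + 70 + 100 + 50 + 200 + 128
	fmt.Printf("memory requests: %dMi = %.1f%% of 7834Mi\n", mem, float64(mem)/7834*100)
}
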
	
	
	==> dmesg <==
	[Nov 2 11:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015966] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.510742] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034359] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.787410] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.238409] kauditd_printk_skb: 36 callbacks suppressed
	[Nov 2 13:12] kauditd_printk_skb: 8 callbacks suppressed
	[Nov 2 13:13] overlayfs: idmapped layers are currently not supported
	[  +0.073328] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [ba2b8cd401ace9132335713de0f6619fc89d02ed1a60281902f918001c3a9bc6] <==
	{"level":"warn","ts":"2025-11-02T13:13:51.923237Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:13:51.943057Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:13:51.956695Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:13:51.979424Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:13:51.991279Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:13:52.011130Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:13:52.027886Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:13:52.055807Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:13:52.089339Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:13:52.097367Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:13:52.117127Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:13:52.127048Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:13:52.139191Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:13:52.155616Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:13:52.178667Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33502","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:13:52.207749Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:13:52.227352Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:13:52.238281Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:13:52.339961Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:14:09.442896Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:14:09.467319Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:14:31.395866Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:14:31.411267Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:14:31.445340Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:14:31.467260Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43428","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [a232385e88d3dfaef5c28fdea88dc774a28d1ba0f9e3dbbe8e2c650b2c532943] <==
	2025/11/02 13:15:33 GCP Auth Webhook started!
	2025/11/02 13:15:59 Ready to marshal response ...
	2025/11/02 13:15:59 Ready to write response ...
	2025/11/02 13:15:59 Ready to marshal response ...
	2025/11/02 13:15:59 Ready to write response ...
	2025/11/02 13:15:59 Ready to marshal response ...
	2025/11/02 13:15:59 Ready to write response ...
	2025/11/02 13:16:19 Ready to marshal response ...
	2025/11/02 13:16:19 Ready to write response ...
	2025/11/02 13:16:29 Ready to marshal response ...
	2025/11/02 13:16:29 Ready to write response ...
	2025/11/02 13:16:35 Ready to marshal response ...
	2025/11/02 13:16:35 Ready to write response ...
	2025/11/02 13:16:56 Ready to marshal response ...
	2025/11/02 13:16:56 Ready to write response ...
	2025/11/02 13:17:18 Ready to marshal response ...
	2025/11/02 13:17:18 Ready to write response ...
	2025/11/02 13:17:18 Ready to marshal response ...
	2025/11/02 13:17:18 Ready to write response ...
	2025/11/02 13:17:26 Ready to marshal response ...
	2025/11/02 13:17:26 Ready to write response ...
	2025/11/02 13:18:56 Ready to marshal response ...
	2025/11/02 13:18:56 Ready to write response ...
	
	
	==> kernel <==
	 13:18:58 up  2:01,  0 user,  load average: 0.65, 2.24, 3.08
	Linux addons-230560 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7c3129e8902e2ba546ec94fea95b907a80a88b9f19819ccc547d8e7cd7ddae43] <==
	I1102 13:16:54.320799       1 main.go:301] handling current node
	I1102 13:17:04.312719       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1102 13:17:04.312761       1 main.go:301] handling current node
	I1102 13:17:14.313339       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1102 13:17:14.313450       1 main.go:301] handling current node
	I1102 13:17:24.313484       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1102 13:17:24.313523       1 main.go:301] handling current node
	I1102 13:17:34.314770       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1102 13:17:34.314804       1 main.go:301] handling current node
	I1102 13:17:44.313765       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1102 13:17:44.313798       1 main.go:301] handling current node
	I1102 13:17:54.321354       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1102 13:17:54.321392       1 main.go:301] handling current node
	I1102 13:18:04.322477       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1102 13:18:04.322577       1 main.go:301] handling current node
	I1102 13:18:14.312720       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1102 13:18:14.312824       1 main.go:301] handling current node
	I1102 13:18:24.321762       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1102 13:18:24.321795       1 main.go:301] handling current node
	I1102 13:18:34.313519       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1102 13:18:34.313559       1 main.go:301] handling current node
	I1102 13:18:44.312906       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1102 13:18:44.313025       1 main.go:301] handling current node
	I1102 13:18:54.321209       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1102 13:18:54.321244       1 main.go:301] handling current node
	
	
	==> kube-apiserver [ae6a81713fca42870850a9a5e0a86e40858cbf49ccdf8f4b701bb7c58d5b250d] <==
	W1102 13:14:31.395864       1 logging.go:55] [core] [Channel #270 SubChannel #271]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1102 13:14:31.410827       1 logging.go:55] [core] [Channel #274 SubChannel #275]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1102 13:14:31.445345       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1102 13:14:31.460912       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1102 13:14:44.406933       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.104.194.249:443: connect: connection refused
	E1102 13:14:44.407064       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.104.194.249:443: connect: connection refused" logger="UnhandledError"
	W1102 13:14:44.407531       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.104.194.249:443: connect: connection refused
	E1102 13:14:44.407624       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.104.194.249:443: connect: connection refused" logger="UnhandledError"
	W1102 13:14:44.513139       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.104.194.249:443: connect: connection refused
	E1102 13:14:44.513184       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.104.194.249:443: connect: connection refused" logger="UnhandledError"
	E1102 13:14:54.841019       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.62.117:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.111.62.117:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.111.62.117:443: connect: connection refused" logger="UnhandledError"
	W1102 13:14:54.841387       1 handler_proxy.go:99] no RequestInfo found in the context
	E1102 13:14:54.841449       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1102 13:14:54.841995       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.62.117:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.111.62.117:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.111.62.117:443: connect: connection refused" logger="UnhandledError"
	E1102 13:14:54.847534       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.62.117:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.111.62.117:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.111.62.117:443: connect: connection refused" logger="UnhandledError"
	I1102 13:14:54.970224       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1102 13:16:07.644356       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:56454: use of closed network connection
	E1102 13:16:08.044761       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:56492: use of closed network connection
	I1102 13:16:34.803789       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1102 13:16:35.110329       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.109.35.248"}
	I1102 13:16:42.007756       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1102 13:18:56.287019       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.105.224.60"}
	
	
	==> kube-controller-manager [47bfba99e6f299e3b3448bc8864faaedc77b8f94e548ef086dc4f5981ae0360a] <==
	I1102 13:14:01.384560       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1102 13:14:01.396121       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1102 13:14:01.398518       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1102 13:14:01.398922       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1102 13:14:01.414690       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1102 13:14:01.414920       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1102 13:14:01.420514       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1102 13:14:01.420594       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1102 13:14:01.420603       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1102 13:14:01.426893       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1102 13:14:01.428085       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1102 13:14:01.428281       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1102 13:14:01.429588       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1102 13:14:01.429680       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1102 13:14:01.429692       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1102 13:14:01.429702       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	E1102 13:14:07.053541       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1102 13:14:31.387807       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1102 13:14:31.388133       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1102 13:14:31.388231       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1102 13:14:31.434526       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1102 13:14:31.438841       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1102 13:14:31.488841       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1102 13:14:31.539729       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1102 13:14:46.411335       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [b8f72f36b8b681e6188a6ae20fbb9399b5a1bba3a9e3fa05f0101b5f7bd14aac] <==
	I1102 13:14:04.381713       1 server_linux.go:53] "Using iptables proxy"
	I1102 13:14:04.468205       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1102 13:14:04.569047       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1102 13:14:04.569097       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1102 13:14:04.569181       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1102 13:14:04.621259       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1102 13:14:04.621314       1 server_linux.go:132] "Using iptables Proxier"
	I1102 13:14:04.639713       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1102 13:14:04.640046       1 server.go:527] "Version info" version="v1.34.1"
	I1102 13:14:04.640059       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1102 13:14:04.641607       1 config.go:200] "Starting service config controller"
	I1102 13:14:04.641617       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1102 13:14:04.641633       1 config.go:106] "Starting endpoint slice config controller"
	I1102 13:14:04.641637       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1102 13:14:04.641648       1 config.go:403] "Starting serviceCIDR config controller"
	I1102 13:14:04.641652       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1102 13:14:04.642234       1 config.go:309] "Starting node config controller"
	I1102 13:14:04.642241       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1102 13:14:04.642247       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1102 13:14:04.741767       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1102 13:14:04.741811       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1102 13:14:04.741843       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [e520da42d44eee8e7e351ea85bd1e8a1fec19b3c33ded4f2a1188baef7b927e3] <==
	I1102 13:13:53.232881       1 serving.go:386] Generated self-signed cert in-memory
	I1102 13:13:56.617485       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1102 13:13:56.617586       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1102 13:13:56.622352       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1102 13:13:56.622462       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1102 13:13:56.622543       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1102 13:13:56.622581       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1102 13:13:56.622662       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1102 13:13:56.622694       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1102 13:13:56.622877       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1102 13:13:56.622950       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1102 13:13:56.724277       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1102 13:13:56.724348       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1102 13:13:56.724449       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 02 13:17:28 addons-230560 kubelet[1285]: I1102 13:17:28.803145    1285 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/0f6eaba8-6a4f-468c-bf97-39f607b4f475-gcp-creds\") on node \"addons-230560\" DevicePath \"\""
	Nov 02 13:17:28 addons-230560 kubelet[1285]: I1102 13:17:28.803167    1285 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6dxzw\" (UniqueName: \"kubernetes.io/projected/0f6eaba8-6a4f-468c-bf97-39f607b4f475-kube-api-access-6dxzw\") on node \"addons-230560\" DevicePath \"\""
	Nov 02 13:17:29 addons-230560 kubelet[1285]: I1102 13:17:29.376987    1285 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f6eaba8-6a4f-468c-bf97-39f607b4f475" path="/var/lib/kubelet/pods/0f6eaba8-6a4f-468c-bf97-39f607b4f475/volumes"
	Nov 02 13:17:29 addons-230560 kubelet[1285]: I1102 13:17:29.555522    1285 scope.go:117] "RemoveContainer" containerID="1056f6585031f1a40d0578c9aaac2f1b6d3ec952c02e889ef6ae6989343dcbd7"
	Nov 02 13:17:43 addons-230560 kubelet[1285]: I1102 13:17:43.374107    1285 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-gk6xb" secret="" err="secret \"gcp-auth\" not found"
	Nov 02 13:17:44 addons-230560 kubelet[1285]: I1102 13:17:44.374038    1285 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-qkqx4" secret="" err="secret \"gcp-auth\" not found"
	Nov 02 13:17:57 addons-230560 kubelet[1285]: I1102 13:17:57.540311    1285 scope.go:117] "RemoveContainer" containerID="03fc40d260d2f7d35835f3add56caeddeb5c4900e77b9cb5b59bb9ff2c003b55"
	Nov 02 13:17:57 addons-230560 kubelet[1285]: I1102 13:17:57.550065    1285 scope.go:117] "RemoveContainer" containerID="e1d1df22d6c0835c6f393329f086979f2f2cf281366c01abc44f5faa31106b60"
	Nov 02 13:18:40 addons-230560 kubelet[1285]: I1102 13:18:40.374826    1285 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-qlm8d" secret="" err="secret \"gcp-auth\" not found"
	Nov 02 13:18:54 addons-230560 kubelet[1285]: I1102 13:18:54.575327    1285 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-5ssmw" secret="" err="secret \"gcp-auth\" not found"
	Nov 02 13:18:54 addons-230560 kubelet[1285]: W1102 13:18:54.610186    1285 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/6c103036ac5bf712bbe6e5c6b8c4fb3f5a69f6a2461bc077906d5c7d591f5293/crio-806d46e4a762bcd5cec77909df9a41eced425b89ce1fe238fef6ec02117851d0 WatchSource:0}: Error finding container 806d46e4a762bcd5cec77909df9a41eced425b89ce1fe238fef6ec02117851d0: Status 404 returned error can't find the container with id 806d46e4a762bcd5cec77909df9a41eced425b89ce1fe238fef6ec02117851d0
	Nov 02 13:18:56 addons-230560 kubelet[1285]: I1102 13:18:56.241837    1285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/3cfb9734-4b12-4c8b-9b0f-a16b32316377-gcp-creds\") pod \"hello-world-app-5d498dc89-7925d\" (UID: \"3cfb9734-4b12-4c8b-9b0f-a16b32316377\") " pod="default/hello-world-app-5d498dc89-7925d"
	Nov 02 13:18:56 addons-230560 kubelet[1285]: I1102 13:18:56.242415    1285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z489v\" (UniqueName: \"kubernetes.io/projected/3cfb9734-4b12-4c8b-9b0f-a16b32316377-kube-api-access-z489v\") pod \"hello-world-app-5d498dc89-7925d\" (UID: \"3cfb9734-4b12-4c8b-9b0f-a16b32316377\") " pod="default/hello-world-app-5d498dc89-7925d"
	Nov 02 13:18:56 addons-230560 kubelet[1285]: I1102 13:18:56.878561    1285 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-5ssmw" secret="" err="secret \"gcp-auth\" not found"
	Nov 02 13:18:56 addons-230560 kubelet[1285]: I1102 13:18:56.878992    1285 scope.go:117] "RemoveContainer" containerID="93372fd9f613e7b09b245975b1f389cdbb932f873b8ccaf9002728515d6ecd86"
	Nov 02 13:18:57 addons-230560 kubelet[1285]: E1102 13:18:57.529583    1285 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/63803836982ddc662e4aa427703fcf01f7c1c867a78be3fb8735c5520030f613/diff" to get inode usage: stat /var/lib/containers/storage/overlay/63803836982ddc662e4aa427703fcf01f7c1c867a78be3fb8735c5520030f613/diff: no such file or directory, extraDiskErr: <nil>
	Nov 02 13:18:57 addons-230560 kubelet[1285]: E1102 13:18:57.530068    1285 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/20d1bb7bbb838044eeeed64da68a61ef7920de75f1636058c97aee67d93783e5/diff" to get inode usage: stat /var/lib/containers/storage/overlay/20d1bb7bbb838044eeeed64da68a61ef7920de75f1636058c97aee67d93783e5/diff: no such file or directory, extraDiskErr: <nil>
	Nov 02 13:18:57 addons-230560 kubelet[1285]: E1102 13:18:57.530179    1285 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/469c55c64e815b9788330cfd7e2c11780e1f974fc0abe91d1b8520f7fd9ff9d7/diff" to get inode usage: stat /var/lib/containers/storage/overlay/469c55c64e815b9788330cfd7e2c11780e1f974fc0abe91d1b8520f7fd9ff9d7/diff: no such file or directory, extraDiskErr: <nil>
	Nov 02 13:18:57 addons-230560 kubelet[1285]: E1102 13:18:57.532453    1285 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/63c9431c8110ac6360ffb2265050f4857ac5ed0ecd7eb433d189177379d1812c/diff" to get inode usage: stat /var/lib/containers/storage/overlay/63c9431c8110ac6360ffb2265050f4857ac5ed0ecd7eb433d189177379d1812c/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/local-path-storage_helper-pod-create-pvc-1b5bd828-581e-49b1-bd61-f61335a71fd0_137ab915-5d13-4c2d-b7ed-b4eeda4fbf65/helper-pod/0.log" to get inode usage: stat /var/log/pods/local-path-storage_helper-pod-create-pvc-1b5bd828-581e-49b1-bd61-f61335a71fd0_137ab915-5d13-4c2d-b7ed-b4eeda4fbf65/helper-pod/0.log: no such file or directory
	Nov 02 13:18:57 addons-230560 kubelet[1285]: E1102 13:18:57.534693    1285 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/ca1e5eec7c0bc7e0b0790ef14c285f99f76e94fa6823a40bbc3db528d22cf9f3/diff" to get inode usage: stat /var/lib/containers/storage/overlay/ca1e5eec7c0bc7e0b0790ef14c285f99f76e94fa6823a40bbc3db528d22cf9f3/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/default_test-local-path_314ba22a-0f74-4b5f-b747-bf2732731896/busybox/0.log" to get inode usage: stat /var/log/pods/default_test-local-path_314ba22a-0f74-4b5f-b747-bf2732731896/busybox/0.log: no such file or directory
	Nov 02 13:18:57 addons-230560 kubelet[1285]: E1102 13:18:57.534883    1285 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/9749798d3e1683ed4538bf8abb45912f1bc09d7fa65f6979a6a8f22b54d9775a/diff" to get inode usage: stat /var/lib/containers/storage/overlay/9749798d3e1683ed4538bf8abb45912f1bc09d7fa65f6979a6a8f22b54d9775a/diff: no such file or directory, extraDiskErr: <nil>
	Nov 02 13:18:57 addons-230560 kubelet[1285]: I1102 13:18:57.596931    1285 scope.go:117] "RemoveContainer" containerID="93372fd9f613e7b09b245975b1f389cdbb932f873b8ccaf9002728515d6ecd86"
	Nov 02 13:18:57 addons-230560 kubelet[1285]: I1102 13:18:57.911442    1285 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-creds-764b6fb674-5ssmw" secret="" err="secret \"gcp-auth\" not found"
	Nov 02 13:18:57 addons-230560 kubelet[1285]: I1102 13:18:57.911762    1285 scope.go:117] "RemoveContainer" containerID="64591067ebbed4d30c76ae8b932294714d13f483da4cd6b6f1d2d24b25201a19"
	Nov 02 13:18:57 addons-230560 kubelet[1285]: E1102 13:18:57.912057    1285 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-creds\" with CrashLoopBackOff: \"back-off 10s restarting failed container=registry-creds pod=registry-creds-764b6fb674-5ssmw_kube-system(ad1063de-4f14-47bb-a909-fea786b4406a)\"" pod="kube-system/registry-creds-764b6fb674-5ssmw" podUID="ad1063de-4f14-47bb-a909-fea786b4406a"
	
	
	==> storage-provisioner [7c311915f4fbc67516c8e9c0534f2b294964f9597b308ef3f1372ad8d0e1b2d5] <==
	W1102 13:18:33.117297       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:18:35.120613       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:18:35.127284       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:18:37.130323       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:18:37.136944       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:18:39.140025       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:18:39.144495       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:18:41.147249       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:18:41.151548       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:18:43.154515       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:18:43.159014       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:18:45.163907       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:18:45.169676       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:18:47.173500       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:18:47.177963       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:18:49.180648       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:18:49.187579       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:18:51.191230       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:18:51.195820       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:18:53.199387       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:18:53.203888       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:18:55.207738       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:18:55.213603       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:18:57.217221       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:18:57.230490       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-230560 -n addons-230560
helpers_test.go:269: (dbg) Run:  kubectl --context addons-230560 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-nh5wk ingress-nginx-admission-patch-qdswx
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-230560 describe pod ingress-nginx-admission-create-nh5wk ingress-nginx-admission-patch-qdswx
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-230560 describe pod ingress-nginx-admission-create-nh5wk ingress-nginx-admission-patch-qdswx: exit status 1 (118.341434ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-nh5wk" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-qdswx" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-230560 describe pod ingress-nginx-admission-create-nh5wk ingress-nginx-admission-patch-qdswx: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-230560 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-230560 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (295.38644ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1102 13:18:59.608206  305678 out.go:360] Setting OutFile to fd 1 ...
	I1102 13:18:59.609115  305678 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:18:59.609168  305678 out.go:374] Setting ErrFile to fd 2...
	I1102 13:18:59.609188  305678 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:18:59.609497  305678 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-293314/.minikube/bin
	I1102 13:18:59.609869  305678 mustload.go:66] Loading cluster: addons-230560
	I1102 13:18:59.610308  305678 config.go:182] Loaded profile config "addons-230560": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:18:59.610348  305678 addons.go:607] checking whether the cluster is paused
	I1102 13:18:59.610495  305678 config.go:182] Loaded profile config "addons-230560": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:18:59.610529  305678 host.go:66] Checking if "addons-230560" exists ...
	I1102 13:18:59.611102  305678 cli_runner.go:164] Run: docker container inspect addons-230560 --format={{.State.Status}}
	I1102 13:18:59.633089  305678 ssh_runner.go:195] Run: systemctl --version
	I1102 13:18:59.633145  305678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-230560
	I1102 13:18:59.673307  305678 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/addons-230560/id_rsa Username:docker}
	I1102 13:18:59.781028  305678 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1102 13:18:59.781116  305678 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1102 13:18:59.811178  305678 cri.go:89] found id: "64591067ebbed4d30c76ae8b932294714d13f483da4cd6b6f1d2d24b25201a19"
	I1102 13:18:59.811200  305678 cri.go:89] found id: "59d3d49e880a7e09fb4d9be850df44733d6d5116185f5d62db1b5c126b574e0b"
	I1102 13:18:59.811205  305678 cri.go:89] found id: "495997c964878856129fa01c98380dbe27e0d3e3399d552d965363043c0ed285"
	I1102 13:18:59.811209  305678 cri.go:89] found id: "92ee410347c1fecfc99fa6c734d7ea23c7a537dc02964ee119f8cc717fcef3e2"
	I1102 13:18:59.811213  305678 cri.go:89] found id: "14fe9bc0e4e3fca54005005e2faed708854fa4e45837404cf9bd640d6b5e2de6"
	I1102 13:18:59.811216  305678 cri.go:89] found id: "849382b87b03aa7df7b3bd0d7677466f19027eeb542e35e25286f1e8249c940e"
	I1102 13:18:59.811219  305678 cri.go:89] found id: "de7641522a90557a5bf20f6e7fc608045762d4951eef39028dd344fa1ec0e246"
	I1102 13:18:59.811222  305678 cri.go:89] found id: "ce226e80e176fd107a1fd4e99d0423900d376d659984557fa242d51fe29175f6"
	I1102 13:18:59.811227  305678 cri.go:89] found id: "43495555e2c69ab9b146d21dd528f268dcc6b5277bef46a2cdd8aac98ed01981"
	I1102 13:18:59.811232  305678 cri.go:89] found id: "23d26c5efd413a919fa01dc11c652b236e497eb2943a1a1cfaf21109a227fdf8"
	I1102 13:18:59.811235  305678 cri.go:89] found id: "f4000d22ba555b95620554ea649b6b0e65ff2c8de55597628a09a4936558b721"
	I1102 13:18:59.811238  305678 cri.go:89] found id: "571d698a41a0bf933525b4655374feb95afed1edb2640617ab7511cce65f0776"
	I1102 13:18:59.811241  305678 cri.go:89] found id: "b05b32f995002607af838c0a5ffed270958eaf8c7f841b88122803f35d8d2015"
	I1102 13:18:59.811245  305678 cri.go:89] found id: "ece119ee391be38c1a4f223d48708f601e4910a7734c54cbe59f4c38812974b5"
	I1102 13:18:59.811248  305678 cri.go:89] found id: "7d130b18d8ef12edee3e0d7b593a71e0c4b5690b982edfbbf83860e1b5d40c73"
	I1102 13:18:59.811260  305678 cri.go:89] found id: "01cc86f91cc933f1117d93925d4304fd9b0729b04f70bdfda8a3027baef7c8e9"
	I1102 13:18:59.811271  305678 cri.go:89] found id: "7c311915f4fbc67516c8e9c0534f2b294964f9597b308ef3f1372ad8d0e1b2d5"
	I1102 13:18:59.811277  305678 cri.go:89] found id: "2d7e91ed3fc10a735909e92c3d70b5422345ba649e0f465bf27dbb923af7877c"
	I1102 13:18:59.811280  305678 cri.go:89] found id: "b8f72f36b8b681e6188a6ae20fbb9399b5a1bba3a9e3fa05f0101b5f7bd14aac"
	I1102 13:18:59.811283  305678 cri.go:89] found id: "7c3129e8902e2ba546ec94fea95b907a80a88b9f19819ccc547d8e7cd7ddae43"
	I1102 13:18:59.811288  305678 cri.go:89] found id: "ba2b8cd401ace9132335713de0f6619fc89d02ed1a60281902f918001c3a9bc6"
	I1102 13:18:59.811291  305678 cri.go:89] found id: "ae6a81713fca42870850a9a5e0a86e40858cbf49ccdf8f4b701bb7c58d5b250d"
	I1102 13:18:59.811294  305678 cri.go:89] found id: "e520da42d44eee8e7e351ea85bd1e8a1fec19b3c33ded4f2a1188baef7b927e3"
	I1102 13:18:59.811297  305678 cri.go:89] found id: "47bfba99e6f299e3b3448bc8864faaedc77b8f94e548ef086dc4f5981ae0360a"
	I1102 13:18:59.811300  305678 cri.go:89] found id: ""
	I1102 13:18:59.811352  305678 ssh_runner.go:195] Run: sudo runc list -f json
	I1102 13:18:59.826237  305678 out.go:203] 
	W1102 13:18:59.829220  305678 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T13:18:59Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T13:18:59Z" level=error msg="open /run/runc: no such file or directory"
	
	W1102 13:18:59.829285  305678 out.go:285] * 
	* 
	W1102 13:18:59.835708  305678 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1102 13:18:59.838828  305678 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-arm64 -p addons-230560 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-230560 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-230560 addons disable ingress --alsologtostderr -v=1: exit status 11 (423.123762ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1102 13:18:59.894547  305787 out.go:360] Setting OutFile to fd 1 ...
	I1102 13:18:59.895309  305787 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:18:59.895325  305787 out.go:374] Setting ErrFile to fd 2...
	I1102 13:18:59.895332  305787 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:18:59.895638  305787 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-293314/.minikube/bin
	I1102 13:18:59.895965  305787 mustload.go:66] Loading cluster: addons-230560
	I1102 13:18:59.896438  305787 config.go:182] Loaded profile config "addons-230560": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:18:59.896458  305787 addons.go:607] checking whether the cluster is paused
	I1102 13:18:59.896598  305787 config.go:182] Loaded profile config "addons-230560": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:18:59.896615  305787 host.go:66] Checking if "addons-230560" exists ...
	I1102 13:18:59.897127  305787 cli_runner.go:164] Run: docker container inspect addons-230560 --format={{.State.Status}}
	I1102 13:18:59.916136  305787 ssh_runner.go:195] Run: systemctl --version
	I1102 13:18:59.916194  305787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-230560
	I1102 13:18:59.937149  305787 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/addons-230560/id_rsa Username:docker}
	I1102 13:19:00.064649  305787 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1102 13:19:00.064768  305787 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1102 13:19:00.142237  305787 cri.go:89] found id: "64591067ebbed4d30c76ae8b932294714d13f483da4cd6b6f1d2d24b25201a19"
	I1102 13:19:00.142271  305787 cri.go:89] found id: "59d3d49e880a7e09fb4d9be850df44733d6d5116185f5d62db1b5c126b574e0b"
	I1102 13:19:00.142277  305787 cri.go:89] found id: "495997c964878856129fa01c98380dbe27e0d3e3399d552d965363043c0ed285"
	I1102 13:19:00.142282  305787 cri.go:89] found id: "92ee410347c1fecfc99fa6c734d7ea23c7a537dc02964ee119f8cc717fcef3e2"
	I1102 13:19:00.142286  305787 cri.go:89] found id: "14fe9bc0e4e3fca54005005e2faed708854fa4e45837404cf9bd640d6b5e2de6"
	I1102 13:19:00.142289  305787 cri.go:89] found id: "849382b87b03aa7df7b3bd0d7677466f19027eeb542e35e25286f1e8249c940e"
	I1102 13:19:00.142293  305787 cri.go:89] found id: "de7641522a90557a5bf20f6e7fc608045762d4951eef39028dd344fa1ec0e246"
	I1102 13:19:00.142296  305787 cri.go:89] found id: "ce226e80e176fd107a1fd4e99d0423900d376d659984557fa242d51fe29175f6"
	I1102 13:19:00.142299  305787 cri.go:89] found id: "43495555e2c69ab9b146d21dd528f268dcc6b5277bef46a2cdd8aac98ed01981"
	I1102 13:19:00.142306  305787 cri.go:89] found id: "23d26c5efd413a919fa01dc11c652b236e497eb2943a1a1cfaf21109a227fdf8"
	I1102 13:19:00.142309  305787 cri.go:89] found id: "f4000d22ba555b95620554ea649b6b0e65ff2c8de55597628a09a4936558b721"
	I1102 13:19:00.142313  305787 cri.go:89] found id: "571d698a41a0bf933525b4655374feb95afed1edb2640617ab7511cce65f0776"
	I1102 13:19:00.142316  305787 cri.go:89] found id: "b05b32f995002607af838c0a5ffed270958eaf8c7f841b88122803f35d8d2015"
	I1102 13:19:00.142320  305787 cri.go:89] found id: "ece119ee391be38c1a4f223d48708f601e4910a7734c54cbe59f4c38812974b5"
	I1102 13:19:00.142328  305787 cri.go:89] found id: "7d130b18d8ef12edee3e0d7b593a71e0c4b5690b982edfbbf83860e1b5d40c73"
	I1102 13:19:00.142334  305787 cri.go:89] found id: "01cc86f91cc933f1117d93925d4304fd9b0729b04f70bdfda8a3027baef7c8e9"
	I1102 13:19:00.142338  305787 cri.go:89] found id: "7c311915f4fbc67516c8e9c0534f2b294964f9597b308ef3f1372ad8d0e1b2d5"
	I1102 13:19:00.142342  305787 cri.go:89] found id: "2d7e91ed3fc10a735909e92c3d70b5422345ba649e0f465bf27dbb923af7877c"
	I1102 13:19:00.142345  305787 cri.go:89] found id: "b8f72f36b8b681e6188a6ae20fbb9399b5a1bba3a9e3fa05f0101b5f7bd14aac"
	I1102 13:19:00.142348  305787 cri.go:89] found id: "7c3129e8902e2ba546ec94fea95b907a80a88b9f19819ccc547d8e7cd7ddae43"
	I1102 13:19:00.142354  305787 cri.go:89] found id: "ba2b8cd401ace9132335713de0f6619fc89d02ed1a60281902f918001c3a9bc6"
	I1102 13:19:00.142361  305787 cri.go:89] found id: "ae6a81713fca42870850a9a5e0a86e40858cbf49ccdf8f4b701bb7c58d5b250d"
	I1102 13:19:00.142364  305787 cri.go:89] found id: "e520da42d44eee8e7e351ea85bd1e8a1fec19b3c33ded4f2a1188baef7b927e3"
	I1102 13:19:00.142368  305787 cri.go:89] found id: "47bfba99e6f299e3b3448bc8864faaedc77b8f94e548ef086dc4f5981ae0360a"
	I1102 13:19:00.142371  305787 cri.go:89] found id: ""
	I1102 13:19:00.142432  305787 ssh_runner.go:195] Run: sudo runc list -f json
	I1102 13:19:00.196001  305787 out.go:203] 
	W1102 13:19:00.211638  305787 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T13:19:00Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T13:19:00Z" level=error msg="open /run/runc: no such file or directory"
	
	W1102 13:19:00.211669  305787 out.go:285] * 
	* 
	W1102 13:19:00.257882  305787 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1102 13:19:00.261017  305787 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-arm64 -p addons-230560 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (145.82s)
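Every `addons disable` invocation above exits with status 11 for the same reason visible in its stderr: after successfully listing the kube-system containers with crictl, minikube's paused-state check runs `sudo runc list -f json` on the node, and on this crio node /run/runc does not exist, so the probe itself fails before the addon is ever touched. A minimal Go sketch of that two-step probe, assuming local exec in place of minikube's SSH runner (the helper names are illustrative, not minikube's actual API):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// listKubeSystemContainers mirrors the crictl call that succeeds
	// in the stderr above and returns the container IDs.
	func listKubeSystemContainers() ([]byte, error) {
		return exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	}

	// checkPausedWithRunc mirrors the follow-up call that fails with
	// "open /run/runc: no such file or directory" and is surfaced as
	// MK_ADDON_DISABLE_PAUSED (exit status 11).
	func checkPausedWithRunc() error {
		_, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
		return err
	}

	func main() {
		if out, err := listKubeSystemContainers(); err == nil {
			fmt.Printf("kube-system containers:\n%s", out)
		}
		if err := checkPausedWithRunc(); err != nil {
			// The failure mode logged above: the node's runtime keeps its
			// state somewhere other than runc's default /run/runc root, so
			// the probe errors even though nothing is actually paused.
			fmt.Println("paused-state probe failed:", err)
		}
	}

The same probe failure repeats in every addon-disable test below, which is why the disable step, not the addon under test, is what fails.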

TestAddons/parallel/InspektorGadget (5.33s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-dv9jw" [11a5f34a-0394-4115-a814-96653784d9d2] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004173833s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-230560 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-230560 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (319.86869ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1102 13:16:34.206187  303181 out.go:360] Setting OutFile to fd 1 ...
	I1102 13:16:34.207486  303181 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:16:34.207553  303181 out.go:374] Setting ErrFile to fd 2...
	I1102 13:16:34.207575  303181 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:16:34.208009  303181 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-293314/.minikube/bin
	I1102 13:16:34.208395  303181 mustload.go:66] Loading cluster: addons-230560
	I1102 13:16:34.209677  303181 config.go:182] Loaded profile config "addons-230560": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:16:34.209707  303181 addons.go:607] checking whether the cluster is paused
	I1102 13:16:34.209866  303181 config.go:182] Loaded profile config "addons-230560": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:16:34.209884  303181 host.go:66] Checking if "addons-230560" exists ...
	I1102 13:16:34.210397  303181 cli_runner.go:164] Run: docker container inspect addons-230560 --format={{.State.Status}}
	I1102 13:16:34.233397  303181 ssh_runner.go:195] Run: systemctl --version
	I1102 13:16:34.233452  303181 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-230560
	I1102 13:16:34.253382  303181 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/addons-230560/id_rsa Username:docker}
	I1102 13:16:34.365902  303181 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1102 13:16:34.365998  303181 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1102 13:16:34.411964  303181 cri.go:89] found id: "59d3d49e880a7e09fb4d9be850df44733d6d5116185f5d62db1b5c126b574e0b"
	I1102 13:16:34.412040  303181 cri.go:89] found id: "495997c964878856129fa01c98380dbe27e0d3e3399d552d965363043c0ed285"
	I1102 13:16:34.412049  303181 cri.go:89] found id: "92ee410347c1fecfc99fa6c734d7ea23c7a537dc02964ee119f8cc717fcef3e2"
	I1102 13:16:34.412053  303181 cri.go:89] found id: "14fe9bc0e4e3fca54005005e2faed708854fa4e45837404cf9bd640d6b5e2de6"
	I1102 13:16:34.412057  303181 cri.go:89] found id: "849382b87b03aa7df7b3bd0d7677466f19027eeb542e35e25286f1e8249c940e"
	I1102 13:16:34.412061  303181 cri.go:89] found id: "de7641522a90557a5bf20f6e7fc608045762d4951eef39028dd344fa1ec0e246"
	I1102 13:16:34.412065  303181 cri.go:89] found id: "ce226e80e176fd107a1fd4e99d0423900d376d659984557fa242d51fe29175f6"
	I1102 13:16:34.412068  303181 cri.go:89] found id: "43495555e2c69ab9b146d21dd528f268dcc6b5277bef46a2cdd8aac98ed01981"
	I1102 13:16:34.412072  303181 cri.go:89] found id: "23d26c5efd413a919fa01dc11c652b236e497eb2943a1a1cfaf21109a227fdf8"
	I1102 13:16:34.412078  303181 cri.go:89] found id: "f4000d22ba555b95620554ea649b6b0e65ff2c8de55597628a09a4936558b721"
	I1102 13:16:34.412086  303181 cri.go:89] found id: "571d698a41a0bf933525b4655374feb95afed1edb2640617ab7511cce65f0776"
	I1102 13:16:34.412090  303181 cri.go:89] found id: "b05b32f995002607af838c0a5ffed270958eaf8c7f841b88122803f35d8d2015"
	I1102 13:16:34.412118  303181 cri.go:89] found id: "ece119ee391be38c1a4f223d48708f601e4910a7734c54cbe59f4c38812974b5"
	I1102 13:16:34.412136  303181 cri.go:89] found id: "7d130b18d8ef12edee3e0d7b593a71e0c4b5690b982edfbbf83860e1b5d40c73"
	I1102 13:16:34.412146  303181 cri.go:89] found id: "01cc86f91cc933f1117d93925d4304fd9b0729b04f70bdfda8a3027baef7c8e9"
	I1102 13:16:34.412152  303181 cri.go:89] found id: "7c311915f4fbc67516c8e9c0534f2b294964f9597b308ef3f1372ad8d0e1b2d5"
	I1102 13:16:34.412155  303181 cri.go:89] found id: "2d7e91ed3fc10a735909e92c3d70b5422345ba649e0f465bf27dbb923af7877c"
	I1102 13:16:34.412160  303181 cri.go:89] found id: "b8f72f36b8b681e6188a6ae20fbb9399b5a1bba3a9e3fa05f0101b5f7bd14aac"
	I1102 13:16:34.412164  303181 cri.go:89] found id: "7c3129e8902e2ba546ec94fea95b907a80a88b9f19819ccc547d8e7cd7ddae43"
	I1102 13:16:34.412167  303181 cri.go:89] found id: "ba2b8cd401ace9132335713de0f6619fc89d02ed1a60281902f918001c3a9bc6"
	I1102 13:16:34.412172  303181 cri.go:89] found id: "ae6a81713fca42870850a9a5e0a86e40858cbf49ccdf8f4b701bb7c58d5b250d"
	I1102 13:16:34.412175  303181 cri.go:89] found id: "e520da42d44eee8e7e351ea85bd1e8a1fec19b3c33ded4f2a1188baef7b927e3"
	I1102 13:16:34.412177  303181 cri.go:89] found id: "47bfba99e6f299e3b3448bc8864faaedc77b8f94e548ef086dc4f5981ae0360a"
	I1102 13:16:34.412180  303181 cri.go:89] found id: ""
	I1102 13:16:34.412240  303181 ssh_runner.go:195] Run: sudo runc list -f json
	I1102 13:16:34.431226  303181 out.go:203] 
	W1102 13:16:34.434222  303181 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T13:16:34Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T13:16:34Z" level=error msg="open /run/runc: no such file or directory"
	
	W1102 13:16:34.434246  303181 out.go:285] * 
	* 
	W1102 13:16:34.441134  303181 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1102 13:16:34.444632  303181 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-arm64 -p addons-230560 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (5.33s)
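Separate from the disable failures, the storage-provisioner log in the post-mortem above is filled with "v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice" warnings, emitted roughly every two seconds as the provisioner re-reads the core Endpoints API. A minimal client-go sketch of the suggested replacement read, assuming in-cluster config and k8s.io/client-go in go.mod (what the provisioner actually polls is not shown in these logs):

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		// In-cluster credentials, as a pod such as storage-provisioner would use.
		cfg, err := rest.InClusterConfig()
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// discovery.k8s.io/v1 EndpointSlice is the non-deprecated
		// replacement for core v1 Endpoints; listing it does not
		// trigger the apiserver deprecation warning seen above.
		slices, err := client.DiscoveryV1().EndpointSlices("kube-system").
			List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, s := range slices.Items {
			fmt.Printf("%s: %d endpoints\n", s.Name, len(s.Endpoints))
		}
	}

This sketches only the API swap; the warnings are harmless noise in this run and unrelated to the exit-status-11 failures.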

TestAddons/parallel/MetricsServer (5.37s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 3.145586ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-npk5l" [828b803a-a751-44ee-9dfe-b0ffbba104f3] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.00443526s
addons_test.go:463: (dbg) Run:  kubectl --context addons-230560 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-230560 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-230560 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (270.05447ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1102 13:16:28.909151  303022 out.go:360] Setting OutFile to fd 1 ...
	I1102 13:16:28.909932  303022 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:16:28.909947  303022 out.go:374] Setting ErrFile to fd 2...
	I1102 13:16:28.909952  303022 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:16:28.910241  303022 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-293314/.minikube/bin
	I1102 13:16:28.910584  303022 mustload.go:66] Loading cluster: addons-230560
	I1102 13:16:28.911077  303022 config.go:182] Loaded profile config "addons-230560": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:16:28.911100  303022 addons.go:607] checking whether the cluster is paused
	I1102 13:16:28.911251  303022 config.go:182] Loaded profile config "addons-230560": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:16:28.911269  303022 host.go:66] Checking if "addons-230560" exists ...
	I1102 13:16:28.911751  303022 cli_runner.go:164] Run: docker container inspect addons-230560 --format={{.State.Status}}
	I1102 13:16:28.934863  303022 ssh_runner.go:195] Run: systemctl --version
	I1102 13:16:28.934917  303022 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-230560
	I1102 13:16:28.954266  303022 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/addons-230560/id_rsa Username:docker}
	I1102 13:16:29.061334  303022 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1102 13:16:29.061437  303022 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1102 13:16:29.091636  303022 cri.go:89] found id: "59d3d49e880a7e09fb4d9be850df44733d6d5116185f5d62db1b5c126b574e0b"
	I1102 13:16:29.091660  303022 cri.go:89] found id: "495997c964878856129fa01c98380dbe27e0d3e3399d552d965363043c0ed285"
	I1102 13:16:29.091665  303022 cri.go:89] found id: "92ee410347c1fecfc99fa6c734d7ea23c7a537dc02964ee119f8cc717fcef3e2"
	I1102 13:16:29.091670  303022 cri.go:89] found id: "14fe9bc0e4e3fca54005005e2faed708854fa4e45837404cf9bd640d6b5e2de6"
	I1102 13:16:29.091673  303022 cri.go:89] found id: "849382b87b03aa7df7b3bd0d7677466f19027eeb542e35e25286f1e8249c940e"
	I1102 13:16:29.091677  303022 cri.go:89] found id: "de7641522a90557a5bf20f6e7fc608045762d4951eef39028dd344fa1ec0e246"
	I1102 13:16:29.091680  303022 cri.go:89] found id: "ce226e80e176fd107a1fd4e99d0423900d376d659984557fa242d51fe29175f6"
	I1102 13:16:29.091683  303022 cri.go:89] found id: "43495555e2c69ab9b146d21dd528f268dcc6b5277bef46a2cdd8aac98ed01981"
	I1102 13:16:29.091686  303022 cri.go:89] found id: "23d26c5efd413a919fa01dc11c652b236e497eb2943a1a1cfaf21109a227fdf8"
	I1102 13:16:29.091692  303022 cri.go:89] found id: "f4000d22ba555b95620554ea649b6b0e65ff2c8de55597628a09a4936558b721"
	I1102 13:16:29.091695  303022 cri.go:89] found id: "571d698a41a0bf933525b4655374feb95afed1edb2640617ab7511cce65f0776"
	I1102 13:16:29.091698  303022 cri.go:89] found id: "b05b32f995002607af838c0a5ffed270958eaf8c7f841b88122803f35d8d2015"
	I1102 13:16:29.091701  303022 cri.go:89] found id: "ece119ee391be38c1a4f223d48708f601e4910a7734c54cbe59f4c38812974b5"
	I1102 13:16:29.091705  303022 cri.go:89] found id: "7d130b18d8ef12edee3e0d7b593a71e0c4b5690b982edfbbf83860e1b5d40c73"
	I1102 13:16:29.091709  303022 cri.go:89] found id: "01cc86f91cc933f1117d93925d4304fd9b0729b04f70bdfda8a3027baef7c8e9"
	I1102 13:16:29.091720  303022 cri.go:89] found id: "7c311915f4fbc67516c8e9c0534f2b294964f9597b308ef3f1372ad8d0e1b2d5"
	I1102 13:16:29.091723  303022 cri.go:89] found id: "2d7e91ed3fc10a735909e92c3d70b5422345ba649e0f465bf27dbb923af7877c"
	I1102 13:16:29.091731  303022 cri.go:89] found id: "b8f72f36b8b681e6188a6ae20fbb9399b5a1bba3a9e3fa05f0101b5f7bd14aac"
	I1102 13:16:29.091734  303022 cri.go:89] found id: "7c3129e8902e2ba546ec94fea95b907a80a88b9f19819ccc547d8e7cd7ddae43"
	I1102 13:16:29.091737  303022 cri.go:89] found id: "ba2b8cd401ace9132335713de0f6619fc89d02ed1a60281902f918001c3a9bc6"
	I1102 13:16:29.091741  303022 cri.go:89] found id: "ae6a81713fca42870850a9a5e0a86e40858cbf49ccdf8f4b701bb7c58d5b250d"
	I1102 13:16:29.091745  303022 cri.go:89] found id: "e520da42d44eee8e7e351ea85bd1e8a1fec19b3c33ded4f2a1188baef7b927e3"
	I1102 13:16:29.091748  303022 cri.go:89] found id: "47bfba99e6f299e3b3448bc8864faaedc77b8f94e548ef086dc4f5981ae0360a"
	I1102 13:16:29.091751  303022 cri.go:89] found id: ""
	I1102 13:16:29.091802  303022 ssh_runner.go:195] Run: sudo runc list -f json
	I1102 13:16:29.107163  303022 out.go:203] 
	W1102 13:16:29.110158  303022 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T13:16:29Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T13:16:29Z" level=error msg="open /run/runc: no such file or directory"
	
	W1102 13:16:29.110178  303022 out.go:285] * 
	* 
	W1102 13:16:29.116524  303022 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1102 13:16:29.119474  303022 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-arm64 -p addons-230560 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.37s)
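
Every addon failure in this run reduces to the same root cause: before enabling or disabling an addon, minikube checks whether the cluster is paused (addons.go:607 above), and that check shells out to `sudo runc list -f json`. On this crio node the default runc state directory /run/runc does not exist, so the pre-flight check itself errors and the command aborts with MK_ADDON_DISABLE_PAUSED, even though the preceding `crictl ps` call enumerated the kube-system containers without trouble. A minimal Go sketch of a more tolerant check follows; it is illustrative only, the helper name listPausedContainers is hypothetical, and it assumes (as the log output suggests) that a missing state directory can be read as "no runc-managed containers" rather than as a fatal error.

// Hypothetical sketch: run "runc list -f json" and treat a missing
// runc state directory as "no runc-managed containers" instead of a
// fatal error. Names and behavior are assumptions, not minikube code.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// runcContainer models the fields of runc's JSON list output that a
// paused-state check needs.
type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

func listPausedContainers() ([]string, error) {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
	if err != nil {
		// runc prints "open /run/runc: no such file or directory" when its
		// state directory is absent; under our assumption that means no
		// containers are tracked, so report "none paused" rather than fail.
		if strings.Contains(string(out), "no such file or directory") {
			return nil, nil
		}
		return nil, fmt.Errorf("runc list: %v: %s", err, out)
	}
	var containers []runcContainer
	if err := json.Unmarshal(out, &containers); err != nil {
		return nil, err
	}
	var paused []string
	for _, c := range containers {
		if c.Status == "paused" {
			paused = append(paused, c.ID)
		}
	}
	return paused, nil
}

func main() {
	paused, err := listPausedContainers()
	if err != nil {
		fmt.Println("pause check failed:", err)
		return
	}
	fmt.Printf("%d paused container(s)\n", len(paused))
}

Under that assumption the disable path would proceed instead of exiting 11; whether a missing /run/runc should really be treated as "not paused" is exactly the judgment call these failure reports put in front of the minikube maintainers.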

TestAddons/parallel/CSI (54.95s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1102 13:16:11.491736  295174 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1102 13:16:11.496186  295174 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1102 13:16:11.496211  295174 kapi.go:107] duration metric: took 4.491094ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 4.502212ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-230560 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-230560 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-230560 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-230560 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-230560 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-230560 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-230560 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-230560 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-230560 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-230560 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-230560 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-230560 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-230560 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-230560 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-230560 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-230560 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-230560 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-230560 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-230560 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-230560 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-230560 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [be85cf3c-6410-41ae-bf0d-b535ff54541b] Pending
helpers_test.go:352: "task-pv-pod" [be85cf3c-6410-41ae-bf0d-b535ff54541b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [be85cf3c-6410-41ae-bf0d-b535ff54541b] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.004824246s
addons_test.go:572: (dbg) Run:  kubectl --context addons-230560 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-230560 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:435: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:427: (dbg) Run:  kubectl --context addons-230560 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-230560 delete pod task-pv-pod
addons_test.go:582: (dbg) Done: kubectl --context addons-230560 delete pod task-pv-pod: (1.240943147s)
addons_test.go:588: (dbg) Run:  kubectl --context addons-230560 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-230560 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-230560 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-230560 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-230560 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-230560 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-230560 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-230560 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-230560 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-230560 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-230560 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-230560 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-230560 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-230560 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-230560 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-230560 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [9d93915a-892b-4c7a-9d86-6f9b6354485d] Pending
helpers_test.go:352: "task-pv-pod-restore" [9d93915a-892b-4c7a-9d86-6f9b6354485d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [9d93915a-892b-4c7a-9d86-6f9b6354485d] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004244971s
addons_test.go:614: (dbg) Run:  kubectl --context addons-230560 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-230560 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-230560 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-230560 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-230560 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (277.455252ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1102 13:17:05.947287  304018 out.go:360] Setting OutFile to fd 1 ...
	I1102 13:17:05.948123  304018 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:17:05.948170  304018 out.go:374] Setting ErrFile to fd 2...
	I1102 13:17:05.948195  304018 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:17:05.948500  304018 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-293314/.minikube/bin
	I1102 13:17:05.948890  304018 mustload.go:66] Loading cluster: addons-230560
	I1102 13:17:05.949318  304018 config.go:182] Loaded profile config "addons-230560": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:17:05.949366  304018 addons.go:607] checking whether the cluster is paused
	I1102 13:17:05.949496  304018 config.go:182] Loaded profile config "addons-230560": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:17:05.949533  304018 host.go:66] Checking if "addons-230560" exists ...
	I1102 13:17:05.950039  304018 cli_runner.go:164] Run: docker container inspect addons-230560 --format={{.State.Status}}
	I1102 13:17:05.968457  304018 ssh_runner.go:195] Run: systemctl --version
	I1102 13:17:05.968518  304018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-230560
	I1102 13:17:05.986251  304018 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/addons-230560/id_rsa Username:docker}
	I1102 13:17:06.105707  304018 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1102 13:17:06.105805  304018 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1102 13:17:06.138005  304018 cri.go:89] found id: "59d3d49e880a7e09fb4d9be850df44733d6d5116185f5d62db1b5c126b574e0b"
	I1102 13:17:06.138030  304018 cri.go:89] found id: "495997c964878856129fa01c98380dbe27e0d3e3399d552d965363043c0ed285"
	I1102 13:17:06.138035  304018 cri.go:89] found id: "92ee410347c1fecfc99fa6c734d7ea23c7a537dc02964ee119f8cc717fcef3e2"
	I1102 13:17:06.138042  304018 cri.go:89] found id: "14fe9bc0e4e3fca54005005e2faed708854fa4e45837404cf9bd640d6b5e2de6"
	I1102 13:17:06.138046  304018 cri.go:89] found id: "849382b87b03aa7df7b3bd0d7677466f19027eeb542e35e25286f1e8249c940e"
	I1102 13:17:06.138051  304018 cri.go:89] found id: "de7641522a90557a5bf20f6e7fc608045762d4951eef39028dd344fa1ec0e246"
	I1102 13:17:06.138054  304018 cri.go:89] found id: "ce226e80e176fd107a1fd4e99d0423900d376d659984557fa242d51fe29175f6"
	I1102 13:17:06.138057  304018 cri.go:89] found id: "43495555e2c69ab9b146d21dd528f268dcc6b5277bef46a2cdd8aac98ed01981"
	I1102 13:17:06.138061  304018 cri.go:89] found id: "23d26c5efd413a919fa01dc11c652b236e497eb2943a1a1cfaf21109a227fdf8"
	I1102 13:17:06.138067  304018 cri.go:89] found id: "f4000d22ba555b95620554ea649b6b0e65ff2c8de55597628a09a4936558b721"
	I1102 13:17:06.138070  304018 cri.go:89] found id: "571d698a41a0bf933525b4655374feb95afed1edb2640617ab7511cce65f0776"
	I1102 13:17:06.138074  304018 cri.go:89] found id: "b05b32f995002607af838c0a5ffed270958eaf8c7f841b88122803f35d8d2015"
	I1102 13:17:06.138077  304018 cri.go:89] found id: "ece119ee391be38c1a4f223d48708f601e4910a7734c54cbe59f4c38812974b5"
	I1102 13:17:06.138080  304018 cri.go:89] found id: "7d130b18d8ef12edee3e0d7b593a71e0c4b5690b982edfbbf83860e1b5d40c73"
	I1102 13:17:06.138084  304018 cri.go:89] found id: "01cc86f91cc933f1117d93925d4304fd9b0729b04f70bdfda8a3027baef7c8e9"
	I1102 13:17:06.138091  304018 cri.go:89] found id: "7c311915f4fbc67516c8e9c0534f2b294964f9597b308ef3f1372ad8d0e1b2d5"
	I1102 13:17:06.138095  304018 cri.go:89] found id: "2d7e91ed3fc10a735909e92c3d70b5422345ba649e0f465bf27dbb923af7877c"
	I1102 13:17:06.138099  304018 cri.go:89] found id: "b8f72f36b8b681e6188a6ae20fbb9399b5a1bba3a9e3fa05f0101b5f7bd14aac"
	I1102 13:17:06.138103  304018 cri.go:89] found id: "7c3129e8902e2ba546ec94fea95b907a80a88b9f19819ccc547d8e7cd7ddae43"
	I1102 13:17:06.138106  304018 cri.go:89] found id: "ba2b8cd401ace9132335713de0f6619fc89d02ed1a60281902f918001c3a9bc6"
	I1102 13:17:06.138109  304018 cri.go:89] found id: "ae6a81713fca42870850a9a5e0a86e40858cbf49ccdf8f4b701bb7c58d5b250d"
	I1102 13:17:06.138112  304018 cri.go:89] found id: "e520da42d44eee8e7e351ea85bd1e8a1fec19b3c33ded4f2a1188baef7b927e3"
	I1102 13:17:06.138116  304018 cri.go:89] found id: "47bfba99e6f299e3b3448bc8864faaedc77b8f94e548ef086dc4f5981ae0360a"
	I1102 13:17:06.138118  304018 cri.go:89] found id: ""
	I1102 13:17:06.138171  304018 ssh_runner.go:195] Run: sudo runc list -f json
	I1102 13:17:06.154319  304018 out.go:203] 
	W1102 13:17:06.157208  304018 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T13:17:06Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T13:17:06Z" level=error msg="open /run/runc: no such file or directory"
	
	W1102 13:17:06.157240  304018 out.go:285] * 
	* 
	W1102 13:17:06.163613  304018 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1102 13:17:06.166487  304018 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-arm64 -p addons-230560 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-230560 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-230560 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (266.297572ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1102 13:17:06.217107  304062 out.go:360] Setting OutFile to fd 1 ...
	I1102 13:17:06.217857  304062 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:17:06.217904  304062 out.go:374] Setting ErrFile to fd 2...
	I1102 13:17:06.217928  304062 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:17:06.218221  304062 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-293314/.minikube/bin
	I1102 13:17:06.218603  304062 mustload.go:66] Loading cluster: addons-230560
	I1102 13:17:06.219065  304062 config.go:182] Loaded profile config "addons-230560": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:17:06.219146  304062 addons.go:607] checking whether the cluster is paused
	I1102 13:17:06.219281  304062 config.go:182] Loaded profile config "addons-230560": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:17:06.219318  304062 host.go:66] Checking if "addons-230560" exists ...
	I1102 13:17:06.219795  304062 cli_runner.go:164] Run: docker container inspect addons-230560 --format={{.State.Status}}
	I1102 13:17:06.237464  304062 ssh_runner.go:195] Run: systemctl --version
	I1102 13:17:06.237526  304062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-230560
	I1102 13:17:06.256900  304062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/addons-230560/id_rsa Username:docker}
	I1102 13:17:06.361724  304062 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1102 13:17:06.361808  304062 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1102 13:17:06.401998  304062 cri.go:89] found id: "59d3d49e880a7e09fb4d9be850df44733d6d5116185f5d62db1b5c126b574e0b"
	I1102 13:17:06.402019  304062 cri.go:89] found id: "495997c964878856129fa01c98380dbe27e0d3e3399d552d965363043c0ed285"
	I1102 13:17:06.402024  304062 cri.go:89] found id: "92ee410347c1fecfc99fa6c734d7ea23c7a537dc02964ee119f8cc717fcef3e2"
	I1102 13:17:06.402028  304062 cri.go:89] found id: "14fe9bc0e4e3fca54005005e2faed708854fa4e45837404cf9bd640d6b5e2de6"
	I1102 13:17:06.402031  304062 cri.go:89] found id: "849382b87b03aa7df7b3bd0d7677466f19027eeb542e35e25286f1e8249c940e"
	I1102 13:17:06.402035  304062 cri.go:89] found id: "de7641522a90557a5bf20f6e7fc608045762d4951eef39028dd344fa1ec0e246"
	I1102 13:17:06.402038  304062 cri.go:89] found id: "ce226e80e176fd107a1fd4e99d0423900d376d659984557fa242d51fe29175f6"
	I1102 13:17:06.402041  304062 cri.go:89] found id: "43495555e2c69ab9b146d21dd528f268dcc6b5277bef46a2cdd8aac98ed01981"
	I1102 13:17:06.402044  304062 cri.go:89] found id: "23d26c5efd413a919fa01dc11c652b236e497eb2943a1a1cfaf21109a227fdf8"
	I1102 13:17:06.402054  304062 cri.go:89] found id: "f4000d22ba555b95620554ea649b6b0e65ff2c8de55597628a09a4936558b721"
	I1102 13:17:06.402058  304062 cri.go:89] found id: "571d698a41a0bf933525b4655374feb95afed1edb2640617ab7511cce65f0776"
	I1102 13:17:06.402061  304062 cri.go:89] found id: "b05b32f995002607af838c0a5ffed270958eaf8c7f841b88122803f35d8d2015"
	I1102 13:17:06.402065  304062 cri.go:89] found id: "ece119ee391be38c1a4f223d48708f601e4910a7734c54cbe59f4c38812974b5"
	I1102 13:17:06.402068  304062 cri.go:89] found id: "7d130b18d8ef12edee3e0d7b593a71e0c4b5690b982edfbbf83860e1b5d40c73"
	I1102 13:17:06.402072  304062 cri.go:89] found id: "01cc86f91cc933f1117d93925d4304fd9b0729b04f70bdfda8a3027baef7c8e9"
	I1102 13:17:06.402077  304062 cri.go:89] found id: "7c311915f4fbc67516c8e9c0534f2b294964f9597b308ef3f1372ad8d0e1b2d5"
	I1102 13:17:06.402086  304062 cri.go:89] found id: "2d7e91ed3fc10a735909e92c3d70b5422345ba649e0f465bf27dbb923af7877c"
	I1102 13:17:06.402091  304062 cri.go:89] found id: "b8f72f36b8b681e6188a6ae20fbb9399b5a1bba3a9e3fa05f0101b5f7bd14aac"
	I1102 13:17:06.402094  304062 cri.go:89] found id: "7c3129e8902e2ba546ec94fea95b907a80a88b9f19819ccc547d8e7cd7ddae43"
	I1102 13:17:06.402098  304062 cri.go:89] found id: "ba2b8cd401ace9132335713de0f6619fc89d02ed1a60281902f918001c3a9bc6"
	I1102 13:17:06.402103  304062 cri.go:89] found id: "ae6a81713fca42870850a9a5e0a86e40858cbf49ccdf8f4b701bb7c58d5b250d"
	I1102 13:17:06.402106  304062 cri.go:89] found id: "e520da42d44eee8e7e351ea85bd1e8a1fec19b3c33ded4f2a1188baef7b927e3"
	I1102 13:17:06.402109  304062 cri.go:89] found id: "47bfba99e6f299e3b3448bc8864faaedc77b8f94e548ef086dc4f5981ae0360a"
	I1102 13:17:06.402112  304062 cri.go:89] found id: ""
	I1102 13:17:06.402165  304062 ssh_runner.go:195] Run: sudo runc list -f json
	I1102 13:17:06.419886  304062 out.go:203] 
	W1102 13:17:06.422844  304062 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T13:17:06Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T13:17:06Z" level=error msg="open /run/runc: no such file or directory"
	
	W1102 13:17:06.422882  304062 out.go:285] * 
	* 
	W1102 13:17:06.429484  304062 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1102 13:17:06.432877  304062 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-arm64 -p addons-230560 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (54.95s)
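
The CSI scenario itself passes end to end: hpvc binds, the snapshot restores into hpvc-restore, and task-pv-pod-restore reaches Running. Only the two trailing `addons disable` calls fail, tripping the same runc pause check described after the MetricsServer failure above. The long run of helpers_test.go:402 lines is a phase-polling loop over `kubectl get pvc ... -o jsonpath={.status.phase}`; a self-contained sketch of that pattern, assuming kubectl is on PATH and using a hypothetical waitForPVCPhase helper with an assumed 2-second poll interval:

// Hypothetical sketch of the PVC phase polling seen above: query
// .status.phase via kubectl until it matches the wanted phase or a
// deadline passes. Helper name and timings are assumptions.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func waitForPVCPhase(kubectx, ns, pvc, want string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubectx, "get", "pvc", pvc,
			"-n", ns, "-o", "jsonpath={.status.phase}").Output()
		if err == nil && strings.TrimSpace(string(out)) == want {
			return nil
		}
		time.Sleep(2 * time.Second) // poll interval is an assumption
	}
	return fmt.Errorf("pvc %s/%s did not reach phase %q within %v", ns, pvc, want, timeout)
}

func main() {
	if err := waitForPVCPhase("addons-230560", "default", "hpvc", "Bound", 6*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("hpvc is Bound")
}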

TestAddons/parallel/Headlamp (3.17s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-230560 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable headlamp -p addons-230560 --alsologtostderr -v=1: exit status 11 (274.948641ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1102 13:16:08.380462  302211 out.go:360] Setting OutFile to fd 1 ...
	I1102 13:16:08.381224  302211 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:16:08.381240  302211 out.go:374] Setting ErrFile to fd 2...
	I1102 13:16:08.381246  302211 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:16:08.381524  302211 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-293314/.minikube/bin
	I1102 13:16:08.381828  302211 mustload.go:66] Loading cluster: addons-230560
	I1102 13:16:08.382187  302211 config.go:182] Loaded profile config "addons-230560": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:16:08.382206  302211 addons.go:607] checking whether the cluster is paused
	I1102 13:16:08.382307  302211 config.go:182] Loaded profile config "addons-230560": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:16:08.382321  302211 host.go:66] Checking if "addons-230560" exists ...
	I1102 13:16:08.382809  302211 cli_runner.go:164] Run: docker container inspect addons-230560 --format={{.State.Status}}
	I1102 13:16:08.402517  302211 ssh_runner.go:195] Run: systemctl --version
	I1102 13:16:08.402576  302211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-230560
	I1102 13:16:08.422073  302211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/addons-230560/id_rsa Username:docker}
	I1102 13:16:08.529698  302211 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1102 13:16:08.529783  302211 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1102 13:16:08.563636  302211 cri.go:89] found id: "59d3d49e880a7e09fb4d9be850df44733d6d5116185f5d62db1b5c126b574e0b"
	I1102 13:16:08.563658  302211 cri.go:89] found id: "495997c964878856129fa01c98380dbe27e0d3e3399d552d965363043c0ed285"
	I1102 13:16:08.563663  302211 cri.go:89] found id: "92ee410347c1fecfc99fa6c734d7ea23c7a537dc02964ee119f8cc717fcef3e2"
	I1102 13:16:08.563667  302211 cri.go:89] found id: "14fe9bc0e4e3fca54005005e2faed708854fa4e45837404cf9bd640d6b5e2de6"
	I1102 13:16:08.563670  302211 cri.go:89] found id: "849382b87b03aa7df7b3bd0d7677466f19027eeb542e35e25286f1e8249c940e"
	I1102 13:16:08.563674  302211 cri.go:89] found id: "de7641522a90557a5bf20f6e7fc608045762d4951eef39028dd344fa1ec0e246"
	I1102 13:16:08.563677  302211 cri.go:89] found id: "ce226e80e176fd107a1fd4e99d0423900d376d659984557fa242d51fe29175f6"
	I1102 13:16:08.563680  302211 cri.go:89] found id: "43495555e2c69ab9b146d21dd528f268dcc6b5277bef46a2cdd8aac98ed01981"
	I1102 13:16:08.563683  302211 cri.go:89] found id: "23d26c5efd413a919fa01dc11c652b236e497eb2943a1a1cfaf21109a227fdf8"
	I1102 13:16:08.563689  302211 cri.go:89] found id: "f4000d22ba555b95620554ea649b6b0e65ff2c8de55597628a09a4936558b721"
	I1102 13:16:08.563693  302211 cri.go:89] found id: "571d698a41a0bf933525b4655374feb95afed1edb2640617ab7511cce65f0776"
	I1102 13:16:08.563696  302211 cri.go:89] found id: "b05b32f995002607af838c0a5ffed270958eaf8c7f841b88122803f35d8d2015"
	I1102 13:16:08.563700  302211 cri.go:89] found id: "ece119ee391be38c1a4f223d48708f601e4910a7734c54cbe59f4c38812974b5"
	I1102 13:16:08.563703  302211 cri.go:89] found id: "7d130b18d8ef12edee3e0d7b593a71e0c4b5690b982edfbbf83860e1b5d40c73"
	I1102 13:16:08.563706  302211 cri.go:89] found id: "01cc86f91cc933f1117d93925d4304fd9b0729b04f70bdfda8a3027baef7c8e9"
	I1102 13:16:08.563714  302211 cri.go:89] found id: "7c311915f4fbc67516c8e9c0534f2b294964f9597b308ef3f1372ad8d0e1b2d5"
	I1102 13:16:08.563721  302211 cri.go:89] found id: "2d7e91ed3fc10a735909e92c3d70b5422345ba649e0f465bf27dbb923af7877c"
	I1102 13:16:08.563725  302211 cri.go:89] found id: "b8f72f36b8b681e6188a6ae20fbb9399b5a1bba3a9e3fa05f0101b5f7bd14aac"
	I1102 13:16:08.563729  302211 cri.go:89] found id: "7c3129e8902e2ba546ec94fea95b907a80a88b9f19819ccc547d8e7cd7ddae43"
	I1102 13:16:08.563732  302211 cri.go:89] found id: "ba2b8cd401ace9132335713de0f6619fc89d02ed1a60281902f918001c3a9bc6"
	I1102 13:16:08.563736  302211 cri.go:89] found id: "ae6a81713fca42870850a9a5e0a86e40858cbf49ccdf8f4b701bb7c58d5b250d"
	I1102 13:16:08.563739  302211 cri.go:89] found id: "e520da42d44eee8e7e351ea85bd1e8a1fec19b3c33ded4f2a1188baef7b927e3"
	I1102 13:16:08.563742  302211 cri.go:89] found id: "47bfba99e6f299e3b3448bc8864faaedc77b8f94e548ef086dc4f5981ae0360a"
	I1102 13:16:08.563745  302211 cri.go:89] found id: ""
	I1102 13:16:08.563796  302211 ssh_runner.go:195] Run: sudo runc list -f json
	I1102 13:16:08.578258  302211 out.go:203] 
	W1102 13:16:08.581211  302211 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T13:16:08Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T13:16:08Z" level=error msg="open /run/runc: no such file or directory"
	
	W1102 13:16:08.581243  302211 out.go:285] * 
	* 
	W1102 13:16:08.587932  302211 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1102 13:16:08.590952  302211 out.go:203] 

** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-arm64 addons enable headlamp -p addons-230560 --alsologtostderr -v=1": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-230560
helpers_test.go:243: (dbg) docker inspect addons-230560:

-- stdout --
	[
	    {
	        "Id": "6c103036ac5bf712bbe6e5c6b8c4fb3f5a69f6a2461bc077906d5c7d591f5293",
	        "Created": "2025-11-02T13:13:28.928338812Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 296345,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-02T13:13:28.996874591Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/6c103036ac5bf712bbe6e5c6b8c4fb3f5a69f6a2461bc077906d5c7d591f5293/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6c103036ac5bf712bbe6e5c6b8c4fb3f5a69f6a2461bc077906d5c7d591f5293/hostname",
	        "HostsPath": "/var/lib/docker/containers/6c103036ac5bf712bbe6e5c6b8c4fb3f5a69f6a2461bc077906d5c7d591f5293/hosts",
	        "LogPath": "/var/lib/docker/containers/6c103036ac5bf712bbe6e5c6b8c4fb3f5a69f6a2461bc077906d5c7d591f5293/6c103036ac5bf712bbe6e5c6b8c4fb3f5a69f6a2461bc077906d5c7d591f5293-json.log",
	        "Name": "/addons-230560",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "addons-230560:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-230560",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6c103036ac5bf712bbe6e5c6b8c4fb3f5a69f6a2461bc077906d5c7d591f5293",
	                "LowerDir": "/var/lib/docker/overlay2/3f0d7197467fa981a71ed5b5652af8516181f2a9adc7a743f5cf92585166f8e4-init/diff:/var/lib/docker/overlay2/53058afe6acd7391f639f551e07f446f6a39a991b699aea18507cb21a1652e97/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3f0d7197467fa981a71ed5b5652af8516181f2a9adc7a743f5cf92585166f8e4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3f0d7197467fa981a71ed5b5652af8516181f2a9adc7a743f5cf92585166f8e4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3f0d7197467fa981a71ed5b5652af8516181f2a9adc7a743f5cf92585166f8e4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-230560",
	                "Source": "/var/lib/docker/volumes/addons-230560/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-230560",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-230560",
	                "name.minikube.sigs.k8s.io": "addons-230560",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2a3120f6f61a03308bdece2c803b2ed8cddf73c7699d02b8b285cae1810ef36c",
	            "SandboxKey": "/var/run/docker/netns/2a3120f6f61a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33138"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-230560": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "0e:27:00:b7:cb:c9",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b844984c8592730a180c752ce42b56b370efb9795cb13a0939b690ade86b755c",
	                    "EndpointID": "982e9f0f7d1378569e6087f70aa7f56b7168809b3a8086307577d4d5af627830",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-230560",
	                        "6c103036ac5b"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
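
The sshutil.go lines earlier in this report derive the SSH endpoint (127.0.0.1:33138) from exactly this inspect output, via a Go template over .NetworkSettings.Ports. For reference, a stdlib-only sketch that extracts the same port from `docker inspect` JSON; the struct models only the fields visible above, and both its shape and the helper name sshHostPort are assumptions for illustration:

// Hypothetical sketch: pull the host port mapped to 22/tcp out of
// "docker inspect <name>" JSON, mirroring the template used in the log.
// Only the fields needed for that lookup are modeled.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type inspectEntry struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIP   string `json:"HostIp"`
			HostPort string `json:"HostPort"`
		} `json:"Ports"`
	} `json:"NetworkSettings"`
}

func sshHostPort(container string) (string, error) {
	out, err := exec.Command("docker", "inspect", container).Output()
	if err != nil {
		return "", err
	}
	var entries []inspectEntry
	if err := json.Unmarshal(out, &entries); err != nil {
		return "", err
	}
	if len(entries) == 0 {
		return "", fmt.Errorf("no such container: %s", container)
	}
	bindings := entries[0].NetworkSettings.Ports["22/tcp"]
	if len(bindings) == 0 {
		return "", fmt.Errorf("22/tcp is not published on %s", container)
	}
	return bindings[0].HostPort, nil
}

func main() {
	port, err := sshHostPort("addons-230560")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("ssh port:", port) // 33138 in the run above
}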
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-230560 -n addons-230560
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p addons-230560 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p addons-230560 logs -n 25: (1.443103005s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-390798 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-390798   │ jenkins │ v1.37.0 │ 02 Nov 25 13:12 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 02 Nov 25 13:12 UTC │ 02 Nov 25 13:12 UTC │
	│ delete  │ -p download-only-390798                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-390798   │ jenkins │ v1.37.0 │ 02 Nov 25 13:12 UTC │ 02 Nov 25 13:12 UTC │
	│ start   │ -o=json --download-only -p download-only-741875 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-741875   │ jenkins │ v1.37.0 │ 02 Nov 25 13:12 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 02 Nov 25 13:13 UTC │ 02 Nov 25 13:13 UTC │
	│ delete  │ -p download-only-741875                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-741875   │ jenkins │ v1.37.0 │ 02 Nov 25 13:13 UTC │ 02 Nov 25 13:13 UTC │
	│ delete  │ -p download-only-390798                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-390798   │ jenkins │ v1.37.0 │ 02 Nov 25 13:13 UTC │ 02 Nov 25 13:13 UTC │
	│ delete  │ -p download-only-741875                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-741875   │ jenkins │ v1.37.0 │ 02 Nov 25 13:13 UTC │ 02 Nov 25 13:13 UTC │
	│ start   │ --download-only -p download-docker-513487 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-513487 │ jenkins │ v1.37.0 │ 02 Nov 25 13:13 UTC │                     │
	│ delete  │ -p download-docker-513487                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-513487 │ jenkins │ v1.37.0 │ 02 Nov 25 13:13 UTC │ 02 Nov 25 13:13 UTC │
	│ start   │ --download-only -p binary-mirror-605864 --alsologtostderr --binary-mirror http://127.0.0.1:39709 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-605864   │ jenkins │ v1.37.0 │ 02 Nov 25 13:13 UTC │                     │
	│ delete  │ -p binary-mirror-605864                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-605864   │ jenkins │ v1.37.0 │ 02 Nov 25 13:13 UTC │ 02 Nov 25 13:13 UTC │
	│ addons  │ disable dashboard -p addons-230560                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-230560          │ jenkins │ v1.37.0 │ 02 Nov 25 13:13 UTC │                     │
	│ addons  │ enable dashboard -p addons-230560                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-230560          │ jenkins │ v1.37.0 │ 02 Nov 25 13:13 UTC │                     │
	│ start   │ -p addons-230560 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-230560          │ jenkins │ v1.37.0 │ 02 Nov 25 13:13 UTC │ 02 Nov 25 13:15 UTC │
	│ addons  │ addons-230560 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-230560          │ jenkins │ v1.37.0 │ 02 Nov 25 13:15 UTC │                     │
	│ addons  │ addons-230560 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-230560          │ jenkins │ v1.37.0 │ 02 Nov 25 13:16 UTC │                     │
	│ addons  │ enable headlamp -p addons-230560 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-230560          │ jenkins │ v1.37.0 │ 02 Nov 25 13:16 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
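Rows in the table above with an empty final column (End Time) are commands that never recorded a successful finish; the volcano and gcp-auth disables and the headlamp enable are the ones tied to failures in this report. A minimal triage sketch, re-running the first of them exactly as logged (binary path and profile name taken from the table row itself):

	out/minikube-linux-arm64 -p addons-230560 addons disable volcano --alsologtostderr -v=1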
	
	
	==> Last Start <==
	Log file created at: 2025/11/02 13:13:04
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
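Each entry below follows the klog header spelled out above: a severity letter (I/W/E/F), the date as mmdd, a wall-clock timestamp with microseconds, the PID, then the emitting source file and line. A small sketch for pulling only warnings and errors out of a saved copy of this section (the filename is an assumption):

	grep -E '^[[:space:]]*[WEF][0-9]{4} ' last-start.log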
	I1102 13:13:04.994668  295944 out.go:360] Setting OutFile to fd 1 ...
	I1102 13:13:04.994800  295944 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:13:04.994812  295944 out.go:374] Setting ErrFile to fd 2...
	I1102 13:13:04.994817  295944 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:13:04.995051  295944 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-293314/.minikube/bin
	I1102 13:13:04.995504  295944 out.go:368] Setting JSON to false
	I1102 13:13:04.996329  295944 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":6937,"bootTime":1762082248,"procs":151,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1102 13:13:04.996395  295944 start.go:143] virtualization:  
	I1102 13:13:04.999928  295944 out.go:179] * [addons-230560] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1102 13:13:05.004092  295944 out.go:179]   - MINIKUBE_LOCATION=21808
	I1102 13:13:05.004239  295944 notify.go:221] Checking for updates...
	I1102 13:13:05.010504  295944 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1102 13:13:05.013701  295944 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-293314/kubeconfig
	I1102 13:13:05.016585  295944 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-293314/.minikube
	I1102 13:13:05.019439  295944 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1102 13:13:05.022411  295944 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1102 13:13:05.025577  295944 driver.go:422] Setting default libvirt URI to qemu:///system
	I1102 13:13:05.050223  295944 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1102 13:13:05.050349  295944 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 13:13:05.121797  295944 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-11-02 13:13:05.111475408 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1102 13:13:05.121925  295944 docker.go:319] overlay module found
	I1102 13:13:05.125028  295944 out.go:179] * Using the docker driver based on user configuration
	I1102 13:13:05.128090  295944 start.go:309] selected driver: docker
	I1102 13:13:05.128110  295944 start.go:930] validating driver "docker" against <nil>
	I1102 13:13:05.128124  295944 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1102 13:13:05.128853  295944 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 13:13:05.183169  295944 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-11-02 13:13:05.174148004 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1102 13:13:05.183319  295944 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1102 13:13:05.183553  295944 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1102 13:13:05.186377  295944 out.go:179] * Using Docker driver with root privileges
	I1102 13:13:05.189165  295944 cni.go:84] Creating CNI manager for ""
	I1102 13:13:05.189242  295944 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1102 13:13:05.189262  295944 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1102 13:13:05.189355  295944 start.go:353] cluster config:
	{Name:addons-230560 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-230560 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1102 13:13:05.192432  295944 out.go:179] * Starting "addons-230560" primary control-plane node in "addons-230560" cluster
	I1102 13:13:05.195271  295944 cache.go:124] Beginning downloading kic base image for docker with crio
	I1102 13:13:05.198264  295944 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1102 13:13:05.201177  295944 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1102 13:13:05.201238  295944 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21808-293314/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1102 13:13:05.201252  295944 cache.go:59] Caching tarball of preloaded images
	I1102 13:13:05.201257  295944 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1102 13:13:05.201348  295944 preload.go:233] Found /home/jenkins/minikube-integration/21808-293314/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1102 13:13:05.201359  295944 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1102 13:13:05.201721  295944 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/addons-230560/config.json ...
	I1102 13:13:05.201752  295944 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/addons-230560/config.json: {Name:mk912b3941452a6f2be80f1ba9594fe174cc5a86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
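The cluster config struct printed a few lines up is what was just persisted as the profile's config.json; the Go field names map one-to-one onto JSON keys. A sketch for reading a couple of them back (assumes jq is installed; the path is taken from the log line above):

	jq '{Driver, KubernetesVersion: .KubernetesConfig.KubernetesVersion}' \
		/home/jenkins/minikube-integration/21808-293314/.minikube/profiles/addons-230560/config.json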
	I1102 13:13:05.216560  295944 cache.go:153] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1102 13:13:05.216686  295944 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory
	I1102 13:13:05.216704  295944 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory, skipping pull
	I1102 13:13:05.216709  295944 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in cache, skipping pull
	I1102 13:13:05.216717  295944 cache.go:156] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 as a tarball
	I1102 13:13:05.216722  295944 cache.go:166] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from local cache
	I1102 13:13:22.977326  295944 cache.go:168] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from cached tarball
	I1102 13:13:22.977389  295944 cache.go:233] Successfully downloaded all kic artifacts
	I1102 13:13:22.977420  295944 start.go:360] acquireMachinesLock for addons-230560: {Name:mkc4332b46cf87e7f10ba6c63852797379fabd0b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1102 13:13:22.977553  295944 start.go:364] duration metric: took 114.627µs to acquireMachinesLock for "addons-230560"
	I1102 13:13:22.977580  295944 start.go:93] Provisioning new machine with config: &{Name:addons-230560 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-230560 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1102 13:13:22.977648  295944 start.go:125] createHost starting for "" (driver="docker")
	I1102 13:13:22.981134  295944 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1102 13:13:22.981388  295944 start.go:159] libmachine.API.Create for "addons-230560" (driver="docker")
	I1102 13:13:22.981421  295944 client.go:173] LocalClient.Create starting
	I1102 13:13:22.981542  295944 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem
	I1102 13:13:23.174058  295944 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/cert.pem
	I1102 13:13:23.328791  295944 cli_runner.go:164] Run: docker network inspect addons-230560 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1102 13:13:23.343906  295944 cli_runner.go:211] docker network inspect addons-230560 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1102 13:13:23.343998  295944 network_create.go:284] running [docker network inspect addons-230560] to gather additional debugging logs...
	I1102 13:13:23.344019  295944 cli_runner.go:164] Run: docker network inspect addons-230560
	W1102 13:13:23.358727  295944 cli_runner.go:211] docker network inspect addons-230560 returned with exit code 1
	I1102 13:13:23.358759  295944 network_create.go:287] error running [docker network inspect addons-230560]: docker network inspect addons-230560: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-230560 not found
	I1102 13:13:23.358773  295944 network_create.go:289] output of [docker network inspect addons-230560]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-230560 not found
	
	** /stderr **
	I1102 13:13:23.358862  295944 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1102 13:13:23.374177  295944 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a8e830}
	I1102 13:13:23.374216  295944 network_create.go:124] attempt to create docker network addons-230560 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1102 13:13:23.374277  295944 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-230560 addons-230560
	I1102 13:13:23.434220  295944 network_create.go:108] docker network addons-230560 192.168.49.0/24 created
	I1102 13:13:23.434255  295944 kic.go:121] calculated static IP "192.168.49.2" for the "addons-230560" container
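The subnet and gateway chosen above can be read back from the network that was just created; a minimal verification sketch with the plain docker CLI (not part of the test run):

	docker network inspect addons-230560 \
		--format '{{(index .IPAM.Config 0).Subnet}} via {{(index .IPAM.Config 0).Gateway}}'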
	I1102 13:13:23.434340  295944 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1102 13:13:23.449334  295944 cli_runner.go:164] Run: docker volume create addons-230560 --label name.minikube.sigs.k8s.io=addons-230560 --label created_by.minikube.sigs.k8s.io=true
	I1102 13:13:23.466746  295944 oci.go:103] Successfully created a docker volume addons-230560
	I1102 13:13:23.466839  295944 cli_runner.go:164] Run: docker run --rm --name addons-230560-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-230560 --entrypoint /usr/bin/test -v addons-230560:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1102 13:13:24.468094  295944 cli_runner.go:217] Completed: docker run --rm --name addons-230560-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-230560 --entrypoint /usr/bin/test -v addons-230560:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib: (1.001202318s)
	I1102 13:13:24.468125  295944 oci.go:107] Successfully prepared a docker volume addons-230560
	I1102 13:13:24.468154  295944 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1102 13:13:24.468172  295944 kic.go:194] Starting extracting preloaded images to volume ...
	I1102 13:13:24.468248  295944 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21808-293314/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-230560:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1102 13:13:28.862281  295944 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21808-293314/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-230560:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.393993212s)
	I1102 13:13:28.862314  295944 kic.go:203] duration metric: took 4.394137869s to extract preloaded images to volume ...
	W1102 13:13:28.862453  295944 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1102 13:13:28.862575  295944 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1102 13:13:28.913968  295944 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-230560 --name addons-230560 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-230560 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-230560 --network addons-230560 --ip 192.168.49.2 --volume addons-230560:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
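Each --publish above binds a random ephemeral port on 127.0.0.1; the SSH mapping resolved this way surfaces as port 33138 in the provisioning steps below. A sketch for listing all of the mappings at once:

	docker port addons-230560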
	I1102 13:13:29.220826  295944 cli_runner.go:164] Run: docker container inspect addons-230560 --format={{.State.Running}}
	I1102 13:13:29.245865  295944 cli_runner.go:164] Run: docker container inspect addons-230560 --format={{.State.Status}}
	I1102 13:13:29.268976  295944 cli_runner.go:164] Run: docker exec addons-230560 stat /var/lib/dpkg/alternatives/iptables
	I1102 13:13:29.315687  295944 oci.go:144] the created container "addons-230560" has a running status.
	I1102 13:13:29.315724  295944 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21808-293314/.minikube/machines/addons-230560/id_rsa...
	I1102 13:13:29.446843  295944 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21808-293314/.minikube/machines/addons-230560/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1102 13:13:29.471784  295944 cli_runner.go:164] Run: docker container inspect addons-230560 --format={{.State.Status}}
	I1102 13:13:29.489150  295944 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1102 13:13:29.489172  295944 kic_runner.go:114] Args: [docker exec --privileged addons-230560 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1102 13:13:29.540422  295944 cli_runner.go:164] Run: docker container inspect addons-230560 --format={{.State.Status}}
	I1102 13:13:29.560267  295944 machine.go:94] provisionDockerMachine start ...
	I1102 13:13:29.560356  295944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-230560
	I1102 13:13:29.590961  295944 main.go:143] libmachine: Using SSH client type: native
	I1102 13:13:29.591288  295944 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1102 13:13:29.591305  295944 main.go:143] libmachine: About to run SSH command:
	hostname
	I1102 13:13:29.591873  295944 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33282->127.0.0.1:33138: read: connection reset by peer
	I1102 13:13:32.742115  295944 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-230560
	
	I1102 13:13:32.742141  295944 ubuntu.go:182] provisioning hostname "addons-230560"
	I1102 13:13:32.742214  295944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-230560
	I1102 13:13:32.758833  295944 main.go:143] libmachine: Using SSH client type: native
	I1102 13:13:32.759148  295944 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1102 13:13:32.759166  295944 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-230560 && echo "addons-230560" | sudo tee /etc/hostname
	I1102 13:13:32.917210  295944 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-230560
	
	I1102 13:13:32.917384  295944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-230560
	I1102 13:13:32.936123  295944 main.go:143] libmachine: Using SSH client type: native
	I1102 13:13:32.936441  295944 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1102 13:13:32.936463  295944 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-230560' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-230560/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-230560' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1102 13:13:33.090803  295944 main.go:143] libmachine: SSH cmd err, output: <nil>: 
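The inline script above guarantees exactly one 127.0.1.1 entry for the node name, editing an existing entry in place rather than appending a duplicate. A one-line check of the result from the host (a sketch, not captured in this log):

	docker exec addons-230560 grep '^127.0.1.1' /etc/hosts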
	I1102 13:13:33.090832  295944 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21808-293314/.minikube CaCertPath:/home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21808-293314/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21808-293314/.minikube}
	I1102 13:13:33.090853  295944 ubuntu.go:190] setting up certificates
	I1102 13:13:33.090875  295944 provision.go:84] configureAuth start
	I1102 13:13:33.090949  295944 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-230560
	I1102 13:13:33.107593  295944 provision.go:143] copyHostCerts
	I1102 13:13:33.107676  295944 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21808-293314/.minikube/key.pem (1675 bytes)
	I1102 13:13:33.107804  295944 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21808-293314/.minikube/ca.pem (1082 bytes)
	I1102 13:13:33.107865  295944 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21808-293314/.minikube/cert.pem (1123 bytes)
	I1102 13:13:33.107929  295944 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21808-293314/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca-key.pem org=jenkins.addons-230560 san=[127.0.0.1 192.168.49.2 addons-230560 localhost minikube]
	I1102 13:13:33.391526  295944 provision.go:177] copyRemoteCerts
	I1102 13:13:33.391593  295944 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1102 13:13:33.391632  295944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-230560
	I1102 13:13:33.410512  295944 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/addons-230560/id_rsa Username:docker}
	I1102 13:13:33.514221  295944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1102 13:13:33.531672  295944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1102 13:13:33.548781  295944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1102 13:13:33.565499  295944 provision.go:87] duration metric: took 474.586198ms to configureAuth
	I1102 13:13:33.565524  295944 ubuntu.go:206] setting minikube options for container-runtime
	I1102 13:13:33.565711  295944 config.go:182] Loaded profile config "addons-230560": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:13:33.565811  295944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-230560
	I1102 13:13:33.583385  295944 main.go:143] libmachine: Using SSH client type: native
	I1102 13:13:33.583694  295944 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1102 13:13:33.583713  295944 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1102 13:13:33.845120  295944 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
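The file written above carries the --insecure-registry flag for the service CIDR; presumably the crio unit in the base image sources /etc/sysconfig/crio.minikube as an environment file (an assumption, since the unit itself is not shown in this log). Confirming the contents is a one-liner:

	docker exec addons-230560 cat /etc/sysconfig/crio.minikube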
	
	I1102 13:13:33.845190  295944 machine.go:97] duration metric: took 4.284900425s to provisionDockerMachine
	I1102 13:13:33.845218  295944 client.go:176] duration metric: took 10.863790237s to LocalClient.Create
	I1102 13:13:33.845251  295944 start.go:167] duration metric: took 10.863863321s to libmachine.API.Create "addons-230560"
	I1102 13:13:33.845278  295944 start.go:293] postStartSetup for "addons-230560" (driver="docker")
	I1102 13:13:33.845304  295944 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1102 13:13:33.845391  295944 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1102 13:13:33.845497  295944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-230560
	I1102 13:13:33.862342  295944 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/addons-230560/id_rsa Username:docker}
	I1102 13:13:33.966637  295944 ssh_runner.go:195] Run: cat /etc/os-release
	I1102 13:13:33.969828  295944 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1102 13:13:33.969858  295944 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1102 13:13:33.969869  295944 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-293314/.minikube/addons for local assets ...
	I1102 13:13:33.969969  295944 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-293314/.minikube/files for local assets ...
	I1102 13:13:33.969996  295944 start.go:296] duration metric: took 124.696523ms for postStartSetup
	I1102 13:13:33.970307  295944 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-230560
	I1102 13:13:33.986123  295944 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/addons-230560/config.json ...
	I1102 13:13:33.986424  295944 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1102 13:13:33.986466  295944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-230560
	I1102 13:13:34.004630  295944 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/addons-230560/id_rsa Username:docker}
	I1102 13:13:34.107685  295944 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1102 13:13:34.112465  295944 start.go:128] duration metric: took 11.134801307s to createHost
	I1102 13:13:34.112494  295944 start.go:83] releasing machines lock for "addons-230560", held for 11.13492683s
	I1102 13:13:34.112589  295944 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-230560
	I1102 13:13:34.128830  295944 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca-key.pem (1679 bytes)
	I1102 13:13:34.128884  295944 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem (1082 bytes)
	I1102 13:13:34.128910  295944 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/cert.pem (1123 bytes)
	I1102 13:13:34.128949  295944 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/key.pem (1675 bytes)
	W1102 13:13:34.129037  295944 start.go:789] pre-probe CA setup failed: create ca cert file asset for /home/jenkins/minikube-integration/21808-293314/.minikube/ca.crt: stat: stat /home/jenkins/minikube-integration/21808-293314/.minikube/ca.crt: no such file or directory
	I1102 13:13:34.129115  295944 ssh_runner.go:195] Run: cat /version.json
	I1102 13:13:34.129160  295944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-230560
	I1102 13:13:34.129421  295944 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1102 13:13:34.129478  295944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-230560
	I1102 13:13:34.145098  295944 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/addons-230560/id_rsa Username:docker}
	I1102 13:13:34.167541  295944 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/addons-230560/id_rsa Username:docker}
	I1102 13:13:34.254090  295944 ssh_runner.go:195] Run: systemctl --version
	I1102 13:13:34.346548  295944 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1102 13:13:34.380753  295944 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1102 13:13:34.384815  295944 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1102 13:13:34.384938  295944 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1102 13:13:34.412754  295944 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
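The find/mv above parks pre-existing bridge and podman CNI configs under a .mk_disabled suffix so that the kindnet config wins. A reversal sketch, should the originals ever need to be restored (run inside the node):

	sudo find /etc/cni/net.d -maxdepth 1 -name '*.mk_disabled' \
		-exec sh -c 'mv "$1" "${1%.mk_disabled}"' _ {} \;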
	I1102 13:13:34.412778  295944 start.go:496] detecting cgroup driver to use...
	I1102 13:13:34.412810  295944 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1102 13:13:34.412867  295944 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1102 13:13:34.429005  295944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1102 13:13:34.440843  295944 docker.go:218] disabling cri-docker service (if available) ...
	I1102 13:13:34.440910  295944 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1102 13:13:34.458126  295944 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1102 13:13:34.476225  295944 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1102 13:13:34.596806  295944 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1102 13:13:34.720114  295944 docker.go:234] disabling docker service ...
	I1102 13:13:34.720204  295944 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1102 13:13:34.744584  295944 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1102 13:13:34.757268  295944 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1102 13:13:34.874787  295944 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1102 13:13:34.998813  295944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1102 13:13:35.019975  295944 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1102 13:13:35.033559  295944 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1102 13:13:35.033625  295944 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:13:35.042339  295944 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1102 13:13:35.042415  295944 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:13:35.050879  295944 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:13:35.059576  295944 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:13:35.068360  295944 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1102 13:13:35.076556  295944 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:13:35.085665  295944 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:13:35.099681  295944 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
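Taken together, the sed edits above leave the drop-in /etc/crio/crio.conf.d/02-crio.conf with roughly these settings (a reconstruction inferred from the commands, not a capture of the file; the TOML section headers are assumptions):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]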
	I1102 13:13:35.108756  295944 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1102 13:13:35.116650  295944 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1102 13:13:35.124174  295944 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 13:13:35.230675  295944 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1102 13:13:35.360254  295944 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1102 13:13:35.360340  295944 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1102 13:13:35.364031  295944 start.go:564] Will wait 60s for crictl version
	I1102 13:13:35.364097  295944 ssh_runner.go:195] Run: which crictl
	I1102 13:13:35.367385  295944 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1102 13:13:35.392140  295944 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1102 13:13:35.392244  295944 ssh_runner.go:195] Run: crio --version
	I1102 13:13:35.419872  295944 ssh_runner.go:195] Run: crio --version
	I1102 13:13:35.449945  295944 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1102 13:13:35.452627  295944 cli_runner.go:164] Run: docker network inspect addons-230560 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1102 13:13:35.469847  295944 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1102 13:13:35.473587  295944 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1102 13:13:35.483169  295944 kubeadm.go:884] updating cluster {Name:addons-230560 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-230560 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1102 13:13:35.483290  295944 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1102 13:13:35.483351  295944 ssh_runner.go:195] Run: sudo crictl images --output json
	I1102 13:13:35.515968  295944 crio.go:514] all images are preloaded for cri-o runtime.
	I1102 13:13:35.515992  295944 crio.go:433] Images already preloaded, skipping extraction
	I1102 13:13:35.516049  295944 ssh_runner.go:195] Run: sudo crictl images --output json
	I1102 13:13:35.543741  295944 crio.go:514] all images are preloaded for cri-o runtime.
	I1102 13:13:35.543767  295944 cache_images.go:86] Images are preloaded, skipping loading
	I1102 13:13:35.543777  295944 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1102 13:13:35.543870  295944 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-230560 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-230560 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
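The unit fragment above lands as a systemd drop-in (the 363-byte copy to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below). To view the merged unit on the node, a sketch:

	docker exec addons-230560 systemctl cat kubelet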
	I1102 13:13:35.543955  295944 ssh_runner.go:195] Run: crio config
	I1102 13:13:35.616095  295944 cni.go:84] Creating CNI manager for ""
	I1102 13:13:35.616119  295944 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1102 13:13:35.616140  295944 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1102 13:13:35.616163  295944 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-230560 NodeName:addons-230560 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1102 13:13:35.616298  295944 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-230560"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1102 13:13:35.616376  295944 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1102 13:13:35.624113  295944 binaries.go:44] Found k8s binaries, skipping transfer
	I1102 13:13:35.624181  295944 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1102 13:13:35.631707  295944 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1102 13:13:35.644511  295944 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1102 13:13:35.656980  295944 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
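	The 2210-byte kubeadm.yaml written above is the multi-document config rendered from the kubeadm options logged at 13:13:35. A minimal Go sketch (not minikube code; it assumes gopkg.in/yaml.v3 is available and a local copy of the file) that checks each document parses and reports its apiVersion/kind:

```go
// Sketch: validate a multi-document kubeadm config like the one above.
// Path and approach are illustrative, not minikube's implementation.
package main

import (
	"bytes"
	"errors"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml") // hypothetical local copy
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	dec := yaml.NewDecoder(bytes.NewReader(data))
	for {
		var doc map[string]any
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break // all documents consumed
			}
			fmt.Fprintln(os.Stderr, "bad document:", err)
			os.Exit(1)
		}
		// Each document in the config carries apiVersion + kind
		// (InitConfiguration, ClusterConfiguration, KubeletConfiguration, ...).
		fmt.Printf("%v/%v ok\n", doc["apiVersion"], doc["kind"])
	}
}
```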
	I1102 13:13:35.669711  295944 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1102 13:13:35.673203  295944 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1102 13:13:35.682448  295944 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 13:13:35.794739  295944 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1102 13:13:35.810927  295944 certs.go:69] Setting up /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/addons-230560 for IP: 192.168.49.2
	I1102 13:13:35.810998  295944 certs.go:195] generating shared ca certs ...
	I1102 13:13:35.811030  295944 certs.go:227] acquiring lock for ca certs: {Name:mkead50075949a3cdc798f9c0149a2bc2638cbbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:13:35.811831  295944 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21808-293314/.minikube/ca.key
	I1102 13:13:36.302297  295944 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-293314/.minikube/ca.crt ...
	I1102 13:13:36.302329  295944 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-293314/.minikube/ca.crt: {Name:mk0d00e414dd47c53e0a467755fbe9f3980454d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:13:36.303138  295944 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-293314/.minikube/ca.key ...
	I1102 13:13:36.303154  295944 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-293314/.minikube/ca.key: {Name:mkfd2b8395ee35f9350e3eb5214162e5e8ec773f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:13:36.303827  295944 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21808-293314/.minikube/proxy-client-ca.key
	I1102 13:13:36.467951  295944 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-293314/.minikube/proxy-client-ca.crt ...
	I1102 13:13:36.467981  295944 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-293314/.minikube/proxy-client-ca.crt: {Name:mkb1e8e00419a95387f597a1df78db401414322e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:13:36.468719  295944 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-293314/.minikube/proxy-client-ca.key ...
	I1102 13:13:36.468734  295944 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-293314/.minikube/proxy-client-ca.key: {Name:mk6483d473af4e26656971a0d05bcdeb600fd13c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:13:36.468824  295944 certs.go:257] generating profile certs ...
	I1102 13:13:36.468891  295944 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/addons-230560/client.key
	I1102 13:13:36.468909  295944 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/addons-230560/client.crt with IP's: []
	I1102 13:13:36.676648  295944 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/addons-230560/client.crt ...
	I1102 13:13:36.676679  295944 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/addons-230560/client.crt: {Name:mk3d43e5de14e342a7ed167171d2e94a335649bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:13:36.676856  295944 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/addons-230560/client.key ...
	I1102 13:13:36.676871  295944 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/addons-230560/client.key: {Name:mkdecd6c3a73fc89f7738ff7ba550cc6319ca8c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:13:36.676961  295944 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/addons-230560/apiserver.key.1945bd50
	I1102 13:13:36.676985  295944 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/addons-230560/apiserver.crt.1945bd50 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1102 13:13:37.320546  295944 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/addons-230560/apiserver.crt.1945bd50 ...
	I1102 13:13:37.320578  295944 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/addons-230560/apiserver.crt.1945bd50: {Name:mk46fb68b77ed9febc9dee296e4cdde2a2d9e1ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:13:37.321360  295944 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/addons-230560/apiserver.key.1945bd50 ...
	I1102 13:13:37.321378  295944 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/addons-230560/apiserver.key.1945bd50: {Name:mk472b46b24a1410d8cf5c6f3b23bd7f6805963f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:13:37.322003  295944 certs.go:382] copying /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/addons-230560/apiserver.crt.1945bd50 -> /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/addons-230560/apiserver.crt
	I1102 13:13:37.322087  295944 certs.go:386] copying /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/addons-230560/apiserver.key.1945bd50 -> /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/addons-230560/apiserver.key
	I1102 13:13:37.322140  295944 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/addons-230560/proxy-client.key
	I1102 13:13:37.322160  295944 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/addons-230560/proxy-client.crt with IP's: []
	I1102 13:13:38.129585  295944 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/addons-230560/proxy-client.crt ...
	I1102 13:13:38.129619  295944 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/addons-230560/proxy-client.crt: {Name:mkd4964bdc679505cad906bc56605d7643702dd3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:13:38.130357  295944 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/addons-230560/proxy-client.key ...
	I1102 13:13:38.130374  295944 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/addons-230560/proxy-client.key: {Name:mka8422cea7376f25883ba8d04e90808483d4653 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:13:38.131119  295944 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca-key.pem (1679 bytes)
	I1102 13:13:38.131162  295944 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem (1082 bytes)
	I1102 13:13:38.131187  295944 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/cert.pem (1123 bytes)
	I1102 13:13:38.131219  295944 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/key.pem (1675 bytes)
	I1102 13:13:38.131746  295944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1102 13:13:38.154430  295944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1102 13:13:38.175747  295944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1102 13:13:38.200072  295944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1102 13:13:38.222361  295944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/addons-230560/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1102 13:13:38.240117  295944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/addons-230560/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1102 13:13:38.257547  295944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/addons-230560/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1102 13:13:38.275039  295944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/addons-230560/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1102 13:13:38.292828  295944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1102 13:13:38.311234  295944 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1102 13:13:38.324546  295944 ssh_runner.go:195] Run: openssl version
	I1102 13:13:38.330750  295944 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1102 13:13:38.339292  295944 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:13:38.343329  295944 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  2 13:13 /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:13:38.343454  295944 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:13:38.384438  295944 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1102 13:13:38.392990  295944 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1102 13:13:38.396273  295944 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
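	The stat failure above is how a first start is detected: the absence of the apiserver-kubelet-client cert means kubeadm init has never run on this node. A minimal sketch of that check, under the assumption that a simple existence test is all that is needed (the helper name is hypothetical):

```go
// Sketch: "likely first start" detection via a missing cert, as in the log.
package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

// isFirstStart reports whether the node looks uninitialized: the cert kubeadm
// generates on init is not present yet.
func isFirstStart(certPath string) (bool, error) {
	_, err := os.Stat(certPath)
	if errors.Is(err, fs.ErrNotExist) {
		return true, nil // no cert yet: kubeadm init has not run here
	}
	return false, err // nil means the cert exists; other errors propagate
}

func main() {
	first, err := isFirstStart("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("first start:", first)
}
```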
	I1102 13:13:38.396337  295944 kubeadm.go:401] StartCluster: {Name:addons-230560 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-230560 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1102 13:13:38.396411  295944 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1102 13:13:38.396471  295944 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1102 13:13:38.422511  295944 cri.go:89] found id: ""
	I1102 13:13:38.422586  295944 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1102 13:13:38.430052  295944 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1102 13:13:38.437500  295944 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1102 13:13:38.437594  295944 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1102 13:13:38.444993  295944 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1102 13:13:38.445015  295944 kubeadm.go:158] found existing configuration files:
	
	I1102 13:13:38.445093  295944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1102 13:13:38.452605  295944 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1102 13:13:38.452668  295944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1102 13:13:38.459835  295944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1102 13:13:38.467014  295944 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1102 13:13:38.467081  295944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1102 13:13:38.474123  295944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1102 13:13:38.481654  295944 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1102 13:13:38.481736  295944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1102 13:13:38.489174  295944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1102 13:13:38.496984  295944 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1102 13:13:38.497080  295944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
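	The grep/rm sequence above implements stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443; here all four files are absent, so the removals are no-ops. A hedged Go sketch of the same pattern (function name hypothetical):

```go
// Sketch: remove kubeconfigs that do not target the expected control-plane
// endpoint so kubeadm regenerates them; missing files are removed harmlessly.
package main

import (
	"fmt"
	"os"
	"strings"
)

func cleanupStaleConfigs(endpoint string, paths []string) {
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if err == nil && strings.Contains(string(data), endpoint) {
			continue // config exists and targets the right endpoint; keep it
		}
		_ = os.Remove(p) // stale or absent: removing is safe either way
		fmt.Println("removed (or absent):", p)
	}
}

func main() {
	cleanupStaleConfigs("https://control-plane.minikube.internal:8443", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}
```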
	I1102 13:13:38.504683  295944 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1102 13:13:38.547902  295944 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1102 13:13:38.547967  295944 kubeadm.go:319] [preflight] Running pre-flight checks
	I1102 13:13:38.571747  295944 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1102 13:13:38.571824  295944 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1102 13:13:38.571867  295944 kubeadm.go:319] OS: Linux
	I1102 13:13:38.571919  295944 kubeadm.go:319] CGROUPS_CPU: enabled
	I1102 13:13:38.571973  295944 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1102 13:13:38.572026  295944 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1102 13:13:38.572080  295944 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1102 13:13:38.572133  295944 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1102 13:13:38.572187  295944 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1102 13:13:38.572240  295944 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1102 13:13:38.572293  295944 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1102 13:13:38.572345  295944 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1102 13:13:38.638348  295944 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1102 13:13:38.638469  295944 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1102 13:13:38.638570  295944 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1102 13:13:38.647092  295944 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1102 13:13:38.653730  295944 out.go:252]   - Generating certificates and keys ...
	I1102 13:13:38.653874  295944 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1102 13:13:38.653989  295944 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1102 13:13:39.087844  295944 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1102 13:13:39.281256  295944 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1102 13:13:39.790026  295944 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1102 13:13:40.065100  295944 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1102 13:13:40.702493  295944 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1102 13:13:40.702932  295944 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-230560 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1102 13:13:41.791862  295944 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1102 13:13:41.792254  295944 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-230560 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1102 13:13:42.983132  295944 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1102 13:13:44.577956  295944 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1102 13:13:44.723439  295944 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1102 13:13:44.723752  295944 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1102 13:13:45.171398  295944 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1102 13:13:46.043822  295944 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1102 13:13:46.975763  295944 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1102 13:13:47.036840  295944 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1102 13:13:47.316624  295944 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1102 13:13:47.317395  295944 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1102 13:13:47.320283  295944 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1102 13:13:47.323623  295944 out.go:252]   - Booting up control plane ...
	I1102 13:13:47.323742  295944 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1102 13:13:47.323831  295944 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1102 13:13:47.323906  295944 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1102 13:13:47.340940  295944 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1102 13:13:47.341083  295944 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1102 13:13:47.349642  295944 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1102 13:13:47.349971  295944 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1102 13:13:47.350184  295944 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1102 13:13:47.497720  295944 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1102 13:13:47.497881  295944 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1102 13:13:49.498980  295944 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 2.001792587s
	I1102 13:13:49.504680  295944 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1102 13:13:49.504778  295944 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1102 13:13:49.505021  295944 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1102 13:13:49.505110  295944 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1102 13:13:54.348077  295944 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.842489481s
	I1102 13:13:55.506886  295944 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.001582873s
	I1102 13:13:56.636873  295944 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 7.131873756s
	I1102 13:13:56.661934  295944 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1102 13:13:56.676830  295944 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1102 13:13:56.696276  295944 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1102 13:13:56.696489  295944 kubeadm.go:319] [mark-control-plane] Marking the node addons-230560 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1102 13:13:56.708887  295944 kubeadm.go:319] [bootstrap-token] Using token: m4zpig.5x6anpocnxlq70ej
	I1102 13:13:56.711804  295944 out.go:252]   - Configuring RBAC rules ...
	I1102 13:13:56.711932  295944 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1102 13:13:56.722897  295944 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1102 13:13:56.733094  295944 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1102 13:13:56.737436  295944 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1102 13:13:56.741577  295944 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1102 13:13:56.745522  295944 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1102 13:13:57.044108  295944 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1102 13:13:57.484387  295944 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1102 13:13:58.044653  295944 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1102 13:13:58.045919  295944 kubeadm.go:319] 
	I1102 13:13:58.046013  295944 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1102 13:13:58.046026  295944 kubeadm.go:319] 
	I1102 13:13:58.046108  295944 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1102 13:13:58.046120  295944 kubeadm.go:319] 
	I1102 13:13:58.046147  295944 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1102 13:13:58.046209  295944 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1102 13:13:58.046262  295944 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1102 13:13:58.046266  295944 kubeadm.go:319] 
	I1102 13:13:58.046324  295944 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1102 13:13:58.046328  295944 kubeadm.go:319] 
	I1102 13:13:58.046378  295944 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1102 13:13:58.046384  295944 kubeadm.go:319] 
	I1102 13:13:58.046439  295944 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1102 13:13:58.046517  295944 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1102 13:13:58.046589  295944 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1102 13:13:58.046594  295944 kubeadm.go:319] 
	I1102 13:13:58.046711  295944 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1102 13:13:58.046794  295944 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1102 13:13:58.046798  295944 kubeadm.go:319] 
	I1102 13:13:58.046886  295944 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token m4zpig.5x6anpocnxlq70ej \
	I1102 13:13:58.046994  295944 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:bd4a1f3bddc85f3fc83315ad33165a30aa1cba7ce55898ef9dad8dcc7e8d0eec \
	I1102 13:13:58.047015  295944 kubeadm.go:319] 	--control-plane 
	I1102 13:13:58.047020  295944 kubeadm.go:319] 
	I1102 13:13:58.047108  295944 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1102 13:13:58.047113  295944 kubeadm.go:319] 
	I1102 13:13:58.047198  295944 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token m4zpig.5x6anpocnxlq70ej \
	I1102 13:13:58.047304  295944 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:bd4a1f3bddc85f3fc83315ad33165a30aa1cba7ce55898ef9dad8dcc7e8d0eec 
	I1102 13:13:58.049834  295944 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1102 13:13:58.050072  295944 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1102 13:13:58.050185  295944 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
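	The --discovery-token-ca-cert-hash printed in the join command above is the SHA-256 of the CA certificate's Subject Public Key Info, which kubeadm documents as its public-key pinning format. A small sketch that recomputes it from the cluster CA written earlier in this log:

```go
// Sketch: recompute kubeadm's discovery-token-ca-cert-hash from ca.crt.
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Hash the DER-encoded SubjectPublicKeyInfo, matching kubeadm's format.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}
```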
	I1102 13:13:58.050204  295944 cni.go:84] Creating CNI manager for ""
	I1102 13:13:58.050212  295944 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1102 13:13:58.053304  295944 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1102 13:13:58.056329  295944 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1102 13:13:58.060887  295944 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1102 13:13:58.060946  295944 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1102 13:13:58.076297  295944 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1102 13:13:58.381908  295944 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1102 13:13:58.382054  295944 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:13:58.382133  295944 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-230560 minikube.k8s.io/updated_at=2025_11_02T13_13_58_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=170a9221ec214abbddb4c7cdac340516a92b239a minikube.k8s.io/name=addons-230560 minikube.k8s.io/primary=true
	I1102 13:13:58.559349  295944 ops.go:34] apiserver oom_adj: -16
	I1102 13:13:58.597875  295944 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:13:59.098694  295944 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:13:59.598940  295944 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:14:00.098268  295944 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:14:00.598819  295944 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:14:01.098012  295944 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:14:01.598596  295944 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:14:02.098875  295944 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 13:14:02.246667  295944 kubeadm.go:1114] duration metric: took 3.864657971s to wait for elevateKubeSystemPrivileges
	I1102 13:14:02.246698  295944 kubeadm.go:403] duration metric: took 23.850364818s to StartCluster
	I1102 13:14:02.246716  295944 settings.go:142] acquiring lock: {Name:mk95f66b3b15e63f58f8c9085c1ffe67cc396dc1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:14:02.247376  295944 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21808-293314/kubeconfig
	I1102 13:14:02.247778  295944 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-293314/kubeconfig: {Name:mke5a65554da8fc0fd6a2ea60bed899d5b38ce09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:14:02.248604  295944 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1102 13:14:02.248746  295944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1102 13:14:02.249014  295944 config.go:182] Loaded profile config "addons-230560": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:14:02.249053  295944 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1102 13:14:02.249134  295944 addons.go:70] Setting yakd=true in profile "addons-230560"
	I1102 13:14:02.249152  295944 addons.go:239] Setting addon yakd=true in "addons-230560"
	I1102 13:14:02.249174  295944 host.go:66] Checking if "addons-230560" exists ...
	I1102 13:14:02.249616  295944 cli_runner.go:164] Run: docker container inspect addons-230560 --format={{.State.Status}}
	I1102 13:14:02.249788  295944 addons.go:70] Setting inspektor-gadget=true in profile "addons-230560"
	I1102 13:14:02.249840  295944 addons.go:239] Setting addon inspektor-gadget=true in "addons-230560"
	I1102 13:14:02.249877  295944 host.go:66] Checking if "addons-230560" exists ...
	I1102 13:14:02.250137  295944 addons.go:70] Setting metrics-server=true in profile "addons-230560"
	I1102 13:14:02.250157  295944 addons.go:239] Setting addon metrics-server=true in "addons-230560"
	I1102 13:14:02.250174  295944 host.go:66] Checking if "addons-230560" exists ...
	I1102 13:14:02.250543  295944 cli_runner.go:164] Run: docker container inspect addons-230560 --format={{.State.Status}}
	I1102 13:14:02.250985  295944 cli_runner.go:164] Run: docker container inspect addons-230560 --format={{.State.Status}}
	I1102 13:14:02.254187  295944 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-230560"
	I1102 13:14:02.254277  295944 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-230560"
	I1102 13:14:02.254453  295944 host.go:66] Checking if "addons-230560" exists ...
	I1102 13:14:02.255623  295944 cli_runner.go:164] Run: docker container inspect addons-230560 --format={{.State.Status}}
	I1102 13:14:02.258255  295944 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-230560"
	I1102 13:14:02.258281  295944 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-230560"
	I1102 13:14:02.258610  295944 cli_runner.go:164] Run: docker container inspect addons-230560 --format={{.State.Status}}
	I1102 13:14:02.267433  295944 addons.go:70] Setting registry=true in profile "addons-230560"
	I1102 13:14:02.267470  295944 addons.go:239] Setting addon registry=true in "addons-230560"
	I1102 13:14:02.267511  295944 host.go:66] Checking if "addons-230560" exists ...
	I1102 13:14:02.278700  295944 cli_runner.go:164] Run: docker container inspect addons-230560 --format={{.State.Status}}
	I1102 13:14:02.254303  295944 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-230560"
	I1102 13:14:02.289216  295944 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-230560"
	I1102 13:14:02.289268  295944 host.go:66] Checking if "addons-230560" exists ...
	I1102 13:14:02.289743  295944 cli_runner.go:164] Run: docker container inspect addons-230560 --format={{.State.Status}}
	I1102 13:14:02.294690  295944 addons.go:70] Setting registry-creds=true in profile "addons-230560"
	I1102 13:14:02.294736  295944 addons.go:239] Setting addon registry-creds=true in "addons-230560"
	I1102 13:14:02.294779  295944 host.go:66] Checking if "addons-230560" exists ...
	I1102 13:14:02.295248  295944 cli_runner.go:164] Run: docker container inspect addons-230560 --format={{.State.Status}}
	I1102 13:14:02.254308  295944 addons.go:70] Setting cloud-spanner=true in profile "addons-230560"
	I1102 13:14:02.296712  295944 addons.go:239] Setting addon cloud-spanner=true in "addons-230560"
	I1102 13:14:02.254314  295944 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-230560"
	I1102 13:14:02.296785  295944 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-230560"
	I1102 13:14:02.296819  295944 host.go:66] Checking if "addons-230560" exists ...
	I1102 13:14:02.297337  295944 addons.go:70] Setting storage-provisioner=true in profile "addons-230560"
	I1102 13:14:02.297361  295944 addons.go:239] Setting addon storage-provisioner=true in "addons-230560"
	I1102 13:14:02.297383  295944 host.go:66] Checking if "addons-230560" exists ...
	I1102 13:14:02.297912  295944 cli_runner.go:164] Run: docker container inspect addons-230560 --format={{.State.Status}}
	I1102 13:14:02.254335  295944 addons.go:70] Setting default-storageclass=true in profile "addons-230560"
	I1102 13:14:02.302902  295944 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-230560"
	I1102 13:14:02.303361  295944 cli_runner.go:164] Run: docker container inspect addons-230560 --format={{.State.Status}}
	I1102 13:14:02.254339  295944 addons.go:70] Setting gcp-auth=true in profile "addons-230560"
	I1102 13:14:02.322839  295944 mustload.go:66] Loading cluster: addons-230560
	I1102 13:14:02.323111  295944 config.go:182] Loaded profile config "addons-230560": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:14:02.323424  295944 cli_runner.go:164] Run: docker container inspect addons-230560 --format={{.State.Status}}
	I1102 13:14:02.254342  295944 addons.go:70] Setting ingress=true in profile "addons-230560"
	I1102 13:14:02.254346  295944 addons.go:70] Setting ingress-dns=true in profile "addons-230560"
	I1102 13:14:02.329063  295944 addons.go:239] Setting addon ingress-dns=true in "addons-230560"
	I1102 13:14:02.254386  295944 out.go:179] * Verifying Kubernetes components...
	I1102 13:14:02.329210  295944 addons.go:70] Setting volcano=true in profile "addons-230560"
	I1102 13:14:02.329228  295944 addons.go:239] Setting addon volcano=true in "addons-230560"
	I1102 13:14:02.329246  295944 host.go:66] Checking if "addons-230560" exists ...
	I1102 13:14:02.329707  295944 cli_runner.go:164] Run: docker container inspect addons-230560 --format={{.State.Status}}
	I1102 13:14:02.360533  295944 addons.go:70] Setting volumesnapshots=true in profile "addons-230560"
	I1102 13:14:02.360568  295944 addons.go:239] Setting addon volumesnapshots=true in "addons-230560"
	I1102 13:14:02.360602  295944 host.go:66] Checking if "addons-230560" exists ...
	I1102 13:14:02.361106  295944 cli_runner.go:164] Run: docker container inspect addons-230560 --format={{.State.Status}}
	I1102 13:14:02.367692  295944 host.go:66] Checking if "addons-230560" exists ...
	I1102 13:14:02.368304  295944 cli_runner.go:164] Run: docker container inspect addons-230560 --format={{.State.Status}}
	I1102 13:14:02.397838  295944 addons.go:239] Setting addon ingress=true in "addons-230560"
	I1102 13:14:02.397956  295944 host.go:66] Checking if "addons-230560" exists ...
	I1102 13:14:02.398450  295944 cli_runner.go:164] Run: docker container inspect addons-230560 --format={{.State.Status}}
	I1102 13:14:02.446216  295944 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1102 13:14:02.456640  295944 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-230560"
	I1102 13:14:02.456737  295944 host.go:66] Checking if "addons-230560" exists ...
	I1102 13:14:02.457235  295944 cli_runner.go:164] Run: docker container inspect addons-230560 --format={{.State.Status}}
	I1102 13:14:02.470806  295944 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1102 13:14:02.470875  295944 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1102 13:14:02.470964  295944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-230560
	I1102 13:14:02.410238  295944 cli_runner.go:164] Run: docker container inspect addons-230560 --format={{.State.Status}}
	I1102 13:14:02.487252  295944 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1102 13:14:02.497911  295944 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1102 13:14:02.418221  295944 host.go:66] Checking if "addons-230560" exists ...
	I1102 13:14:02.499294  295944 cli_runner.go:164] Run: docker container inspect addons-230560 --format={{.State.Status}}
	I1102 13:14:02.499608  295944 out.go:179]   - Using image docker.io/registry:3.0.0
	I1102 13:14:02.418302  295944 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 13:14:02.498295  295944 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1102 13:14:02.498299  295944 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1102 13:14:02.498321  295944 addons.go:436] installing /etc/kubernetes/addons/ig-crd.yaml
	I1102 13:14:02.498554  295944 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1102 13:14:02.522926  295944 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1102 13:14:02.522995  295944 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1102 13:14:02.523083  295944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-230560
	I1102 13:14:02.542122  295944 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1102 13:14:02.542208  295944 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1102 13:14:02.542312  295944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-230560
	I1102 13:14:02.553882  295944 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1102 13:14:02.554072  295944 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1102 13:14:02.554086  295944 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1102 13:14:02.554167  295944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-230560
	I1102 13:14:02.569487  295944 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1102 13:14:02.569584  295944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-230560
	W1102 13:14:02.570889  295944 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1102 13:14:02.571113  295944 host.go:66] Checking if "addons-230560" exists ...
	I1102 13:14:02.571129  295944 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1102 13:14:02.571171  295944 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1102 13:14:02.575421  295944 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1102 13:14:02.575516  295944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-230560
	I1102 13:14:02.586097  295944 addons.go:239] Setting addon default-storageclass=true in "addons-230560"
	I1102 13:14:02.586138  295944 host.go:66] Checking if "addons-230560" exists ...
	I1102 13:14:02.586579  295944 cli_runner.go:164] Run: docker container inspect addons-230560 --format={{.State.Status}}
	I1102 13:14:02.608189  295944 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1102 13:14:02.608211  295944 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1102 13:14:02.608340  295944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-230560
	I1102 13:14:02.627014  295944 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1102 13:14:02.627035  295944 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1102 13:14:02.627097  295944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-230560
	I1102 13:14:02.646969  295944 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1102 13:14:02.649925  295944 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1102 13:14:02.649952  295944 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1102 13:14:02.650022  295944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-230560
	I1102 13:14:02.660084  295944 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1102 13:14:02.660517  295944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1102 13:14:02.685131  295944 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/addons-230560/id_rsa Username:docker}
	I1102 13:14:02.691319  295944 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1102 13:14:02.694849  295944 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1102 13:14:02.694872  295944 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1102 13:14:02.694942  295944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-230560
	I1102 13:14:02.722885  295944 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1102 13:14:02.728399  295944 out.go:179]   - Using image docker.io/busybox:stable
	I1102 13:14:02.734701  295944 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1102 13:14:02.734833  295944 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1102 13:14:02.734843  295944 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1102 13:14:02.734925  295944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-230560
	I1102 13:14:02.743200  295944 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/addons-230560/id_rsa Username:docker}
	I1102 13:14:02.755978  295944 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/addons-230560/id_rsa Username:docker}
	I1102 13:14:02.766395  295944 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1102 13:14:02.772806  295944 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1102 13:14:02.772881  295944 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1102 13:14:02.772994  295944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-230560
	I1102 13:14:02.795120  295944 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1102 13:14:02.795236  295944 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1102 13:14:02.798890  295944 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1102 13:14:02.799041  295944 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1102 13:14:02.799054  295944 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1102 13:14:02.799115  295944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-230560
	I1102 13:14:02.804629  295944 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1102 13:14:02.809397  295944 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1102 13:14:02.812272  295944 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1102 13:14:02.815154  295944 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1102 13:14:02.820189  295944 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1102 13:14:02.823052  295944 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1102 13:14:02.825729  295944 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/addons-230560/id_rsa Username:docker}
	I1102 13:14:02.830975  295944 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1102 13:14:02.830996  295944 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1102 13:14:02.831073  295944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-230560
	I1102 13:14:02.845830  295944 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/addons-230560/id_rsa Username:docker}
	I1102 13:14:02.849568  295944 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/addons-230560/id_rsa Username:docker}
	I1102 13:14:02.871291  295944 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/addons-230560/id_rsa Username:docker}
	I1102 13:14:02.882790  295944 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/addons-230560/id_rsa Username:docker}
	I1102 13:14:02.888999  295944 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1102 13:14:02.889019  295944 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1102 13:14:02.889080  295944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-230560
	I1102 13:14:02.890812  295944 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/addons-230560/id_rsa Username:docker}
	I1102 13:14:02.922754  295944 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/addons-230560/id_rsa Username:docker}
	I1102 13:14:02.949927  295944 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/addons-230560/id_rsa Username:docker}
	I1102 13:14:02.957796  295944 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/addons-230560/id_rsa Username:docker}
	I1102 13:14:02.960006  295944 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/addons-230560/id_rsa Username:docker}
	I1102 13:14:02.964055  295944 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1102 13:14:02.971377  295944 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/addons-230560/id_rsa Username:docker}
	W1102 13:14:02.976244  295944 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1102 13:14:02.976349  295944 retry.go:31] will retry after 327.15464ms: ssh: handshake failed: EOF
	I1102 13:14:02.992796  295944 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/addons-230560/id_rsa Username:docker}
	W1102 13:14:02.993889  295944 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1102 13:14:02.993910  295944 retry.go:31] will retry after 193.34196ms: ssh: handshake failed: EOF
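	The sshutil warnings above show transient SSH handshake failures being retried after a short randomized delay while many addon installers dial the node concurrently. A minimal sketch of that retry-with-jitter pattern (the bounds and attempt count are illustrative, not minikube's actual values):

```go
// Sketch: retry a transient dial failure after a short randomized wait.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func dialWithRetry(dial func() error, attempts int) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = dial(); err == nil {
			return nil
		}
		// Jittered backoff so concurrent dialers do not retry in lockstep.
		wait := time.Duration(100+rand.Intn(300)) * time.Millisecond
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
	}
	return err
}

func main() {
	calls := 0
	err := dialWithRetry(func() error {
		calls++
		if calls < 3 {
			return errors.New("ssh: handshake failed: EOF") // simulated transient failure
		}
		return nil
	}, 5)
	fmt.Println("result:", err)
}
```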
	I1102 13:14:03.141203  295944 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1102 13:14:03.141234  295944 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1102 13:14:03.187025  295944 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1102 13:14:03.187052  295944 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1102 13:14:03.303784  295944 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1102 13:14:03.303851  295944 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1102 13:14:03.338480  295944 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1102 13:14:03.338548  295944 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1102 13:14:03.380749  295944 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1102 13:14:03.380822  295944 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1102 13:14:03.417652  295944 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1102 13:14:03.437409  295944 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1102 13:14:03.439615  295944 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1102 13:14:03.464003  295944 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1102 13:14:03.464069  295944 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
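The "scp memory --> ..." entries stream an asset held in memory by the minikube binary straight to a file on the node over the existing SSH session; no local file is involved. A rough hand-rolled equivalent (a sketch; $MANIFEST stands for the YAML payload, which the log does not show):

	printf '%s' "$MANIFEST" | \
	  ssh -i /home/jenkins/minikube-integration/21808-293314/.minikube/machines/addons-230560/id_rsa \
	      -p 33138 docker@127.0.0.1 \
	      'sudo tee /etc/kubernetes/addons/registry-proxy.yaml >/dev/null'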
	I1102 13:14:03.517543  295944 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1102 13:14:03.517579  295944 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1102 13:14:03.517891  295944 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1102 13:14:03.524793  295944 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1102 13:14:03.566533  295944 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1102 13:14:03.578708  295944 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1102 13:14:03.578729  295944 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1102 13:14:03.584320  295944 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1102 13:14:03.615307  295944 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1102 13:14:03.615341  295944 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1102 13:14:03.650562  295944 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1102 13:14:03.653242  295944 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1102 13:14:03.657036  295944 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1102 13:14:03.713353  295944 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1102 13:14:03.713419  295944 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1102 13:14:03.725093  295944 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1102 13:14:03.736229  295944 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1102 13:14:03.736256  295944 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1102 13:14:03.784291  295944 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1102 13:14:03.784323  295944 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1102 13:14:03.819338  295944 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1102 13:14:03.898933  295944 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1102 13:14:03.898970  295944 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1102 13:14:03.926828  295944 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1102 13:14:03.926862  295944 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1102 13:14:03.974188  295944 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1102 13:14:03.974233  295944 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1102 13:14:04.142952  295944 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1102 13:14:04.142996  295944 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1102 13:14:04.221317  295944 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1102 13:14:04.221344  295944 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1102 13:14:04.232455  295944 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1102 13:14:04.541334  295944 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1102 13:14:04.541377  295944 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1102 13:14:04.545223  295944 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1102 13:14:04.545249  295944 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1102 13:14:04.816681  295944 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.156128835s)
	I1102 13:14:04.816723  295944 start.go:1013] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
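The sed pipeline that just completed splices a hosts plugin block into the coredns ConfigMap ahead of its forward stanza, so in-cluster lookups of host.minikube.internal resolve to the Docker network gateway. After the replace, the injected Corefile fragment reads:

	hosts {
	   192.168.49.1 host.minikube.internal
	   fallthrough
	}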
	I1102 13:14:04.817593  295944 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.85351379s)
	I1102 13:14:04.818794  295944 node_ready.go:35] waiting up to 6m0s for node "addons-230560" to be "Ready" ...
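node_ready.go now polls the node object until its Ready condition turns True; the node_ready.go:57 warnings below are individual ticks of that poll. A standalone equivalent with the same 6-minute budget (a sketch):

	kubectl wait --for=condition=Ready node/addons-230560 --timeout=6m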
	I1102 13:14:04.871352  295944 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1102 13:14:04.875852  295944 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1102 13:14:04.875874  295944 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1102 13:14:05.133225  295944 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1102 13:14:05.133250  295944 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1102 13:14:05.332692  295944 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-230560" context rescaled to 1 replicas
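kapi.go:214 trims coredns from its default replica count down to one, which is enough for a single-node cluster. The standalone equivalent (a sketch):

	kubectl -n kube-system scale deployment coredns --replicas=1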
	I1102 13:14:05.385938  295944 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1102 13:14:05.385960  295944 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1102 13:14:05.601591  295944 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1102 13:14:05.601615  295944 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1102 13:14:05.740385  295944 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1102 13:14:05.740411  295944 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1102 13:14:05.992078  295944 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1102 13:14:05.992103  295944 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1102 13:14:06.219155  295944 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1102 13:14:06.874065  295944 node_ready.go:57] node "addons-230560" has "Ready":"False" status (will retry)
	I1102 13:14:07.574906  295944 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.137422577s)
	I1102 13:14:07.575122  295944 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (4.135446385s)
	I1102 13:14:07.575153  295944 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.05724552s)
	I1102 13:14:07.575184  295944 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (4.050362766s)
	I1102 13:14:07.575211  295944 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.008648612s)
	I1102 13:14:07.575219  295944 addons.go:480] Verifying addon registry=true in "addons-230560"
	I1102 13:14:07.575418  295944 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.157699571s)
	I1102 13:14:07.578245  295944 out.go:179] * Verifying registry addon...
	I1102 13:14:07.581989  295944 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1102 13:14:07.618951  295944 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1102 13:14:07.618977  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
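kapi.go:96 keeps listing pods matching the label selector until each reports Running; every "current state: Pending" line that follows is one poll tick. Once the pod exists, the same check can be expressed as a blocking wait (a sketch):

	kubectl -n kube-system wait --for=condition=Ready pod \
	  -l kubernetes.io/minikube-addons=registry --timeout=5m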
	I1102 13:14:07.896356  295944 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.311995409s)
	W1102 13:14:07.896390  295944 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1102 13:14:07.896408  295944 retry.go:31] will retry after 350.451249ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
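The failure is in the file itself: ig-crd.yaml evidently contains a document with no apiVersion or kind, which client-side validation rejects even though every other object in the batch applies cleanly. The escape hatch named in the error text would be (a sketch; it skips validation rather than repairing the manifest):

	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.34.1/kubectl apply --validate=false \
	  -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml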
	I1102 13:14:07.896442  295944 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.245862484s)
	I1102 13:14:07.896492  295944 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.243230776s)
	I1102 13:14:07.896533  295944 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.239476128s)
	I1102 13:14:07.899736  295944 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-230560 service yakd-dashboard -n yakd-dashboard
	
	I1102 13:14:08.103549  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:08.247180  295944 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1102 13:14:08.618977  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:08.631510  295944 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.906339679s)
	I1102 13:14:08.631687  295944 addons.go:480] Verifying addon ingress=true in "addons-230560"
	I1102 13:14:08.631801  295944 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.39930698s)
	I1102 13:14:08.632134  295944 addons.go:480] Verifying addon metrics-server=true in "addons-230560"
	I1102 13:14:08.632039  295944 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.760646867s)
	W1102 13:14:08.632270  295944 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1102 13:14:08.632295  295944 retry.go:31] will retry after 303.155567ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
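This one is an ordering race rather than a bad manifest: the VolumeSnapshotClass in csi-hostpath-snapshotclass.yaml is submitted in the same kubectl invocation that creates its CRD, and the CRD is not yet established when the custom resource is mapped. The re-apply below (re-issued with --force) succeeds once the CRD registers; applying the CRDs first and waiting for establishment would avoid the race entirely (a sketch):

	kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	kubectl wait --for=condition=Established \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml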
	I1102 13:14:08.631626  295944 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.812261703s)
	I1102 13:14:08.636919  295944 out.go:179] * Verifying ingress addon...
	I1102 13:14:08.640804  295944 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1102 13:14:08.697766  295944 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1102 13:14:08.697841  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:08.936004  295944 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1102 13:14:09.106874  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:09.135161  295944 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (2.915947042s)
	I1102 13:14:09.135245  295944 addons.go:480] Verifying addon csi-hostpath-driver=true in "addons-230560"
	I1102 13:14:09.138268  295944 out.go:179] * Verifying csi-hostpath-driver addon...
	I1102 13:14:09.142247  295944 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1102 13:14:09.199030  295944 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1102 13:14:09.199100  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:09.199494  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1102 13:14:09.322596  295944 node_ready.go:57] node "addons-230560" has "Ready":"False" status (will retry)
	I1102 13:14:09.508971  295944 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.261754261s)
	W1102 13:14:09.509007  295944 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1102 13:14:09.509033  295944 retry.go:31] will retry after 354.750401ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1102 13:14:09.585431  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:09.644351  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:09.645327  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:09.864489  295944 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1102 13:14:10.086652  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:10.147738  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:10.148211  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:10.280211  295944 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1102 13:14:10.280360  295944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-230560
	I1102 13:14:10.303445  295944 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/addons-230560/id_rsa Username:docker}
	I1102 13:14:10.446007  295944 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1102 13:14:10.464388  295944 addons.go:239] Setting addon gcp-auth=true in "addons-230560"
	I1102 13:14:10.464481  295944 host.go:66] Checking if "addons-230560" exists ...
	I1102 13:14:10.464974  295944 cli_runner.go:164] Run: docker container inspect addons-230560 --format={{.State.Status}}
	I1102 13:14:10.494255  295944 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1102 13:14:10.494313  295944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-230560
	I1102 13:14:10.520602  295944 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/addons-230560/id_rsa Username:docker}
	I1102 13:14:10.585368  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:10.644692  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:10.646129  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:11.085619  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:11.144870  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:11.146008  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:11.586205  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:11.644965  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:11.646140  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1102 13:14:11.822645  295944 node_ready.go:57] node "addons-230560" has "Ready":"False" status (will retry)
	I1102 13:14:11.834689  295944 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.898590789s)
	I1102 13:14:11.834802  295944 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.970277124s)
	W1102 13:14:11.834830  295944 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1102 13:14:11.834828  295944 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.340548443s)
	I1102 13:14:11.834847  295944 retry.go:31] will retry after 416.455245ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1102 13:14:11.838008  295944 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1102 13:14:11.840933  295944 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1102 13:14:11.843870  295944 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1102 13:14:11.843901  295944 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1102 13:14:11.856988  295944 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1102 13:14:11.857010  295944 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1102 13:14:11.870418  295944 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1102 13:14:11.870442  295944 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1102 13:14:11.884119  295944 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1102 13:14:12.085523  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:12.146032  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:12.146845  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:12.251919  295944 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1102 13:14:12.433179  295944 addons.go:480] Verifying addon gcp-auth=true in "addons-230560"
	I1102 13:14:12.436419  295944 out.go:179] * Verifying gcp-auth addon...
	I1102 13:14:12.440094  295944 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1102 13:14:12.459137  295944 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1102 13:14:12.459163  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:12.585301  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:12.645925  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:12.647586  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:12.943836  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:13.085605  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:13.146937  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:13.147826  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1102 13:14:13.174911  295944 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1102 13:14:13.174948  295944 retry.go:31] will retry after 696.1039ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1102 13:14:13.443962  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:13.586258  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:13.644841  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:13.645399  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:13.871768  295944 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1102 13:14:13.945535  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:14.085257  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:14.146793  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:14.147417  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1102 13:14:14.322563  295944 node_ready.go:57] node "addons-230560" has "Ready":"False" status (will retry)
	I1102 13:14:14.444173  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:14.584921  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:14.644980  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:14.645684  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1102 13:14:14.662138  295944 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1102 13:14:14.662173  295944 retry.go:31] will retry after 1.32068964s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1102 13:14:14.943255  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:15.085577  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:15.146313  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:15.150531  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:15.444249  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:15.585566  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:15.644737  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:15.644473  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:15.944002  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:15.983122  295944 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1102 13:14:16.085502  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:16.145449  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:16.147308  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:16.443784  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:16.585516  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:16.646735  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:16.647028  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1102 13:14:16.806020  295944 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1102 13:14:16.806053  295944 retry.go:31] will retry after 1.56368671s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	W1102 13:14:16.821806  295944 node_ready.go:57] node "addons-230560" has "Ready":"False" status (will retry)
	I1102 13:14:16.943636  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:17.085635  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:17.144931  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:17.147457  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:17.444316  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:17.585697  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:17.644976  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:17.647350  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:17.943045  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:18.085450  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:18.144951  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:18.146116  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:18.370161  295944 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1102 13:14:18.448279  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:18.585199  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:18.646229  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:18.646673  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1102 13:14:18.822138  295944 node_ready.go:57] node "addons-230560" has "Ready":"False" status (will retry)
	I1102 13:14:18.943560  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:19.085842  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:19.145834  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:19.146164  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1102 13:14:19.183059  295944 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1102 13:14:19.183139  295944 retry.go:31] will retry after 4.205499652s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	I1102 13:14:19.444203  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:19.585066  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:19.643849  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:19.645512  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:19.943228  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:20.085993  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:20.144207  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:20.146341  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:20.442943  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:20.584778  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:20.645783  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:20.645832  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:20.943698  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:21.085776  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:21.144658  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:21.145508  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1102 13:14:21.322510  295944 node_ready.go:57] node "addons-230560" has "Ready":"False" status (will retry)
	I1102 13:14:21.443593  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:21.585342  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:21.644440  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:21.645294  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:21.943280  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:22.085155  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:22.143922  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:22.146149  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:22.443982  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:22.584950  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:22.644614  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:22.645693  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:22.943376  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:23.085242  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:23.144721  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:23.146396  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:23.389694  295944 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1102 13:14:23.443714  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:23.585711  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:23.646249  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:23.648385  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1102 13:14:23.822747  295944 node_ready.go:57] node "addons-230560" has "Ready":"False" status (will retry)
	I1102 13:14:23.944601  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:24.085393  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:24.144822  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:24.146172  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1102 13:14:24.207129  295944 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1102 13:14:24.207212  295944 retry.go:31] will retry after 4.940916324s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
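The failure itself is narrower than the retry noise suggests: kubectl applied every object from ig-deployment.yaml (hence the "unchanged"/"configured" lines in stdout) but rejected ig-crd.yaml because at least one YAML document in it lacks the required apiVersion and kind fields. A minimal way to reproduce that check locally is sketched below (an assumption-laden stand-in: it reads a hypothetical local copy of the manifest and uses gopkg.in/yaml.v3 in place of kubectl's validator):

	package main

	import (
		"bytes"
		"errors"
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		data, err := os.ReadFile("ig-crd.yaml") // hypothetical local copy of the failing manifest
		if err != nil {
			panic(err)
		}
		dec := yaml.NewDecoder(bytes.NewReader(data))
		for i := 0; ; i++ {
			var doc struct {
				APIVersion string `yaml:"apiVersion"`
				Kind       string `yaml:"kind"`
			}
			if err := dec.Decode(&doc); errors.Is(err, io.EOF) {
				break
			} else if err != nil {
				panic(err)
			}
			if doc.APIVersion == "" || doc.Kind == "" {
				// the condition kubectl reports as "[apiVersion not set, kind not set]"
				fmt.Printf("document %d would fail validation\n", i)
			}
		}
	}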
	I1102 13:14:24.443041  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:24.584678  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:24.644831  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:24.645528  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:24.943739  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:25.085082  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:25.147121  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:25.147502  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:25.443941  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:25.586106  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:25.644794  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:25.645480  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:25.943648  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:26.085550  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:26.144712  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:26.145316  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1102 13:14:26.322312  295944 node_ready.go:57] node "addons-230560" has "Ready":"False" status (will retry)
	I1102 13:14:26.443107  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:26.585014  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:26.644803  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:26.645844  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:26.943888  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:27.085976  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:27.144687  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:27.144915  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:27.444096  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:27.585228  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:27.644118  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:27.645249  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:27.943733  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:28.085628  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:28.147579  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:28.147699  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1102 13:14:28.330975  295944 node_ready.go:57] node "addons-230560" has "Ready":"False" status (will retry)
	I1102 13:14:28.442840  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:28.585764  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:28.644689  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:28.646406  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:28.943348  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:29.085143  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:29.143851  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:29.146207  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:29.148481  295944 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1102 13:14:29.443579  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:29.586123  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:29.645656  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:29.645958  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:29.944060  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1102 13:14:29.992871  295944 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1102 13:14:29.992966  295944 retry.go:31] will retry after 9.501925941s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1102 13:14:30.088974  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:30.144591  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:30.147188  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:30.443011  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:30.585858  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:30.644256  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:30.646231  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1102 13:14:30.822450  295944 node_ready.go:57] node "addons-230560" has "Ready":"False" status (will retry)
	I1102 13:14:30.943507  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:31.085849  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:31.144859  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:31.145882  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:31.451577  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:31.591748  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:31.643712  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:31.646200  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:31.943286  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:32.085358  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:32.145611  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:32.145739  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:32.444046  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:32.585194  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:32.644466  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:32.646055  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1102 13:14:32.822695  295944 node_ready.go:57] node "addons-230560" has "Ready":"False" status (will retry)
	I1102 13:14:32.943880  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:33.085818  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:33.145579  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:33.145702  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:33.443229  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:33.585109  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:33.644314  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:33.646744  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:33.943450  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:34.085573  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:34.144791  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:34.145857  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:34.444064  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:34.585117  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:34.644659  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:34.645793  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:34.943962  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:35.085493  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:35.144911  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:35.146362  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1102 13:14:35.322954  295944 node_ready.go:57] node "addons-230560" has "Ready":"False" status (will retry)
	I1102 13:14:35.443759  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:35.585972  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:35.643794  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:35.645662  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:35.946880  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:36.085794  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:36.143817  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:36.146013  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:36.442972  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:36.584672  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:36.644803  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:36.644897  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:36.943293  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:37.085313  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:37.145443  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:37.145636  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:37.443711  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:37.589993  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:37.644335  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:37.645980  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1102 13:14:37.821705  295944 node_ready.go:57] node "addons-230560" has "Ready":"False" status (will retry)
	I1102 13:14:37.943507  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:38.085370  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:38.145680  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:38.146086  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:38.443640  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:38.585476  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:38.645870  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:38.646014  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:38.943438  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:39.085602  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:39.145871  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:39.146181  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:39.443238  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:39.495508  295944 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1102 13:14:39.585561  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:39.646602  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:39.647089  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1102 13:14:39.822560  295944 node_ready.go:57] node "addons-230560" has "Ready":"False" status (will retry)
	I1102 13:14:39.943427  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:40.085780  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:40.147184  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:40.148019  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1102 13:14:40.311378  295944 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1102 13:14:40.311408  295944 retry.go:31] will retry after 8.384258682s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
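The retry delays seen so far (4.94s, 9.50s, and here 8.38s) are not fixed or strictly increasing, which indicates minikube's retry helper randomizes its backoff. The sketch below shows that general pattern, not retry.go itself; the attempt count, base delay, and jitter formula are illustrative:

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithJitter runs fn up to attempts times, sleeping an exponentially
	// growing, randomized duration between failures.
	func retryWithJitter(attempts int, base time.Duration, fn func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			d := base << uint(i)                      // 5s, 10s, 20s, ...
			d += time.Duration(rand.Int63n(int64(d))) // add up to 100% jitter
			fmt.Printf("will retry after %v: %v\n", d, err)
			time.Sleep(d)
		}
		return err
	}

	func main() {
		calls := 0
		err := retryWithJitter(3, 5*time.Second, func() error {
			calls++
			if calls < 3 {
				return fmt.Errorf("apply failed (attempt %d)", calls)
			}
			return nil
		})
		fmt.Println("final error:", err) // <nil>: third attempt succeeds
	}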
	I1102 13:14:40.443295  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:40.585233  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:40.644873  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:40.645533  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:40.943650  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:41.087361  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:41.144177  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:41.145696  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:41.443713  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:41.585751  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:41.645647  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:41.645873  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:41.943711  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:42.086084  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:42.145635  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:42.149458  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1102 13:14:42.322531  295944 node_ready.go:57] node "addons-230560" has "Ready":"False" status (will retry)
	I1102 13:14:42.443584  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:42.585844  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:42.644635  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:42.645692  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:42.943537  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:43.085473  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:43.145626  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:43.145834  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:43.443550  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:43.585540  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:43.644678  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:43.646336  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:43.943156  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:44.085310  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:44.145383  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:44.145514  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1102 13:14:44.322660  295944 node_ready.go:57] node "addons-230560" has "Ready":"False" status (will retry)
	I1102 13:14:44.523473  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:44.595783  295944 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1102 13:14:44.595810  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:44.654532  295944 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1102 13:14:44.654558  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:44.655735  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:44.884875  295944 node_ready.go:49] node "addons-230560" is "Ready"
	I1102 13:14:44.884908  295944 node_ready.go:38] duration metric: took 40.066089209s for node "addons-230560" to be "Ready" ...
	I1102 13:14:44.884934  295944 api_server.go:52] waiting for apiserver process to appear ...
	I1102 13:14:44.884998  295944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1102 13:14:44.910534  295944 api_server.go:72] duration metric: took 42.661884889s to wait for apiserver process to appear ...
	I1102 13:14:44.910562  295944 api_server.go:88] waiting for apiserver healthz status ...
	I1102 13:14:44.910583  295944 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1102 13:14:44.921263  295944 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1102 13:14:44.922555  295944 api_server.go:141] control plane version: v1.34.1
	I1102 13:14:44.922584  295944 api_server.go:131] duration metric: took 12.011459ms to wait for apiserver health ...
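The healthz probe logged just above is an ordinary HTTPS GET against the apiserver endpoint, which here returned 200 with body "ok". A standalone equivalent is sketched below (the address comes from the log; certificate verification is skipped for brevity, whereas minikube would trust the cluster CA instead):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// brevity only: a real check should verify against the cluster CA
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.49.2:8443/healthz")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
	}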
	I1102 13:14:44.922593  295944 system_pods.go:43] waiting for kube-system pods to appear ...
	I1102 13:14:44.933363  295944 system_pods.go:59] 19 kube-system pods found
	I1102 13:14:44.933400  295944 system_pods.go:61] "coredns-66bc5c9577-6rft9" [5b0e5e4b-ac40-44ba-8e2b-3f54328cc03c] Pending
	I1102 13:14:44.933412  295944 system_pods.go:61] "csi-hostpath-attacher-0" [86982496-2936-427c-8bd2-143ec9d85d4b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1102 13:14:44.933418  295944 system_pods.go:61] "csi-hostpath-resizer-0" [5254ca1f-81fa-460f-b1f4-b29debc9a19c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1102 13:14:44.933425  295944 system_pods.go:61] "csi-hostpathplugin-gnxtb" [8ffd4cf3-a402-41bb-882b-c2bbcfa42b1d] Pending
	I1102 13:14:44.933431  295944 system_pods.go:61] "etcd-addons-230560" [a35f5cb3-d29f-45ec-b91d-18d67639c4c8] Running
	I1102 13:14:44.933435  295944 system_pods.go:61] "kindnet-5dpxs" [577f8605-d5ec-4adc-aae2-56c098398734] Running
	I1102 13:14:44.933442  295944 system_pods.go:61] "kube-apiserver-addons-230560" [ec8d7fe3-227e-43c7-b35f-12f3e6c724a5] Running
	I1102 13:14:44.933446  295944 system_pods.go:61] "kube-controller-manager-addons-230560" [910be431-8bbf-4929-bce6-6d85b250063c] Running
	I1102 13:14:44.933459  295944 system_pods.go:61] "kube-ingress-dns-minikube" [afb0b7f8-4856-42f1-871f-197c757927fa] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1102 13:14:44.933470  295944 system_pods.go:61] "kube-proxy-dzts7" [b63fb7f2-7a3e-4ee4-92d8-6ee0a88acebb] Running
	I1102 13:14:44.933475  295944 system_pods.go:61] "kube-scheduler-addons-230560" [c869faf3-5fdf-4dd6-b3ae-a5c03b7b9ca3] Running
	I1102 13:14:44.933482  295944 system_pods.go:61] "metrics-server-85b7d694d7-npk5l" [828b803a-a751-44ee-9dfe-b0ffbba104f3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1102 13:14:44.933493  295944 system_pods.go:61] "nvidia-device-plugin-daemonset-qkqx4" [8db4aae3-2657-4773-b5b6-62fb681edaa0] Pending
	I1102 13:14:44.933500  295944 system_pods.go:61] "registry-6b586f9694-qlm8d" [d973110c-93dd-4878-bcf2-c23a761ada84] Pending
	I1102 13:14:44.933505  295944 system_pods.go:61] "registry-creds-764b6fb674-5ssmw" [ad1063de-4f14-47bb-a909-fea786b4406a] Pending
	I1102 13:14:44.933514  295944 system_pods.go:61] "registry-proxy-gk6xb" [bd4d6b0b-09b4-4d00-8a1f-01347f478af8] Pending
	I1102 13:14:44.933521  295944 system_pods.go:61] "snapshot-controller-7d9fbc56b8-rrmp6" [958df77e-6b38-4b22-b999-53d4f5e9d784] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1102 13:14:44.933536  295944 system_pods.go:61] "snapshot-controller-7d9fbc56b8-v88xw" [95d508cf-77db-4d04-aed4-ee3059235c7f] Pending
	I1102 13:14:44.933542  295944 system_pods.go:61] "storage-provisioner" [d2ef88eb-9da4-47ad-b13b-231eb6b4242b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1102 13:14:44.933548  295944 system_pods.go:74] duration metric: took 10.949827ms to wait for pod list to return data ...
	I1102 13:14:44.933562  295944 default_sa.go:34] waiting for default service account to be created ...
	I1102 13:14:44.945192  295944 default_sa.go:45] found service account: "default"
	I1102 13:14:44.945229  295944 default_sa.go:55] duration metric: took 11.652529ms for default service account to be created ...
	I1102 13:14:44.945240  295944 system_pods.go:116] waiting for k8s-apps to be running ...
	I1102 13:14:44.951922  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:44.953764  295944 system_pods.go:86] 19 kube-system pods found
	I1102 13:14:44.953813  295944 system_pods.go:89] "coredns-66bc5c9577-6rft9" [5b0e5e4b-ac40-44ba-8e2b-3f54328cc03c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1102 13:14:44.953825  295944 system_pods.go:89] "csi-hostpath-attacher-0" [86982496-2936-427c-8bd2-143ec9d85d4b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1102 13:14:44.953833  295944 system_pods.go:89] "csi-hostpath-resizer-0" [5254ca1f-81fa-460f-b1f4-b29debc9a19c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1102 13:14:44.953837  295944 system_pods.go:89] "csi-hostpathplugin-gnxtb" [8ffd4cf3-a402-41bb-882b-c2bbcfa42b1d] Pending
	I1102 13:14:44.953842  295944 system_pods.go:89] "etcd-addons-230560" [a35f5cb3-d29f-45ec-b91d-18d67639c4c8] Running
	I1102 13:14:44.953847  295944 system_pods.go:89] "kindnet-5dpxs" [577f8605-d5ec-4adc-aae2-56c098398734] Running
	I1102 13:14:44.953866  295944 system_pods.go:89] "kube-apiserver-addons-230560" [ec8d7fe3-227e-43c7-b35f-12f3e6c724a5] Running
	I1102 13:14:44.953877  295944 system_pods.go:89] "kube-controller-manager-addons-230560" [910be431-8bbf-4929-bce6-6d85b250063c] Running
	I1102 13:14:44.953885  295944 system_pods.go:89] "kube-ingress-dns-minikube" [afb0b7f8-4856-42f1-871f-197c757927fa] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1102 13:14:44.953897  295944 system_pods.go:89] "kube-proxy-dzts7" [b63fb7f2-7a3e-4ee4-92d8-6ee0a88acebb] Running
	I1102 13:14:44.953902  295944 system_pods.go:89] "kube-scheduler-addons-230560" [c869faf3-5fdf-4dd6-b3ae-a5c03b7b9ca3] Running
	I1102 13:14:44.953909  295944 system_pods.go:89] "metrics-server-85b7d694d7-npk5l" [828b803a-a751-44ee-9dfe-b0ffbba104f3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1102 13:14:44.953917  295944 system_pods.go:89] "nvidia-device-plugin-daemonset-qkqx4" [8db4aae3-2657-4773-b5b6-62fb681edaa0] Pending
	I1102 13:14:44.953921  295944 system_pods.go:89] "registry-6b586f9694-qlm8d" [d973110c-93dd-4878-bcf2-c23a761ada84] Pending
	I1102 13:14:44.953925  295944 system_pods.go:89] "registry-creds-764b6fb674-5ssmw" [ad1063de-4f14-47bb-a909-fea786b4406a] Pending
	I1102 13:14:44.953936  295944 system_pods.go:89] "registry-proxy-gk6xb" [bd4d6b0b-09b4-4d00-8a1f-01347f478af8] Pending
	I1102 13:14:44.953945  295944 system_pods.go:89] "snapshot-controller-7d9fbc56b8-rrmp6" [958df77e-6b38-4b22-b999-53d4f5e9d784] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1102 13:14:44.953954  295944 system_pods.go:89] "snapshot-controller-7d9fbc56b8-v88xw" [95d508cf-77db-4d04-aed4-ee3059235c7f] Pending
	I1102 13:14:44.953966  295944 system_pods.go:89] "storage-provisioner" [d2ef88eb-9da4-47ad-b13b-231eb6b4242b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1102 13:14:44.953981  295944 retry.go:31] will retry after 254.75903ms: missing components: kube-dns
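The "missing components: kube-dns" retries work off the pod list just printed: coredns-66bc5c9577-6rft9 is still Pending, and it is the pod carrying the k8s-app=kube-dns label that would satisfy the kube-dns component (an assumption based on standard CoreDNS labeling, not on minikube source). A sketch of that kind of check:

	package main

	import (
		"fmt"

		corev1 "k8s.io/api/core/v1"
	)

	// missingComponents returns the required component labels that no Running
	// pod in the supplied list currently carries.
	func missingComponents(pods []corev1.Pod, required []string) []string {
		running := map[string]bool{}
		for _, p := range pods {
			if p.Status.Phase == corev1.PodRunning {
				running[p.Labels["k8s-app"]] = true
			}
		}
		var missing []string
		for _, r := range required {
			if !running[r] {
				missing = append(missing, r)
			}
		}
		return missing
	}

	func main() {
		var coredns corev1.Pod
		coredns.Labels = map[string]string{"k8s-app": "kube-dns"} // standard CoreDNS label
		coredns.Status.Phase = corev1.PodPending                  // as in the log above
		fmt.Println(missingComponents([]corev1.Pod{coredns}, []string{"kube-dns"})) // [kube-dns]
	}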
	I1102 13:14:45.087066  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:45.179074  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:45.179600  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:45.241422  295944 system_pods.go:86] 19 kube-system pods found
	I1102 13:14:45.241490  295944 system_pods.go:89] "coredns-66bc5c9577-6rft9" [5b0e5e4b-ac40-44ba-8e2b-3f54328cc03c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1102 13:14:45.241501  295944 system_pods.go:89] "csi-hostpath-attacher-0" [86982496-2936-427c-8bd2-143ec9d85d4b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1102 13:14:45.241510  295944 system_pods.go:89] "csi-hostpath-resizer-0" [5254ca1f-81fa-460f-b1f4-b29debc9a19c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1102 13:14:45.241514  295944 system_pods.go:89] "csi-hostpathplugin-gnxtb" [8ffd4cf3-a402-41bb-882b-c2bbcfa42b1d] Pending
	I1102 13:14:45.241519  295944 system_pods.go:89] "etcd-addons-230560" [a35f5cb3-d29f-45ec-b91d-18d67639c4c8] Running
	I1102 13:14:45.241524  295944 system_pods.go:89] "kindnet-5dpxs" [577f8605-d5ec-4adc-aae2-56c098398734] Running
	I1102 13:14:45.241530  295944 system_pods.go:89] "kube-apiserver-addons-230560" [ec8d7fe3-227e-43c7-b35f-12f3e6c724a5] Running
	I1102 13:14:45.241534  295944 system_pods.go:89] "kube-controller-manager-addons-230560" [910be431-8bbf-4929-bce6-6d85b250063c] Running
	I1102 13:14:45.241543  295944 system_pods.go:89] "kube-ingress-dns-minikube" [afb0b7f8-4856-42f1-871f-197c757927fa] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1102 13:14:45.241558  295944 system_pods.go:89] "kube-proxy-dzts7" [b63fb7f2-7a3e-4ee4-92d8-6ee0a88acebb] Running
	I1102 13:14:45.241572  295944 system_pods.go:89] "kube-scheduler-addons-230560" [c869faf3-5fdf-4dd6-b3ae-a5c03b7b9ca3] Running
	I1102 13:14:45.241579  295944 system_pods.go:89] "metrics-server-85b7d694d7-npk5l" [828b803a-a751-44ee-9dfe-b0ffbba104f3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1102 13:14:45.241584  295944 system_pods.go:89] "nvidia-device-plugin-daemonset-qkqx4" [8db4aae3-2657-4773-b5b6-62fb681edaa0] Pending
	I1102 13:14:45.241598  295944 system_pods.go:89] "registry-6b586f9694-qlm8d" [d973110c-93dd-4878-bcf2-c23a761ada84] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1102 13:14:45.241605  295944 system_pods.go:89] "registry-creds-764b6fb674-5ssmw" [ad1063de-4f14-47bb-a909-fea786b4406a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1102 13:14:45.241619  295944 system_pods.go:89] "registry-proxy-gk6xb" [bd4d6b0b-09b4-4d00-8a1f-01347f478af8] Pending
	I1102 13:14:45.241633  295944 system_pods.go:89] "snapshot-controller-7d9fbc56b8-rrmp6" [958df77e-6b38-4b22-b999-53d4f5e9d784] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1102 13:14:45.241641  295944 system_pods.go:89] "snapshot-controller-7d9fbc56b8-v88xw" [95d508cf-77db-4d04-aed4-ee3059235c7f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1102 13:14:45.241649  295944 system_pods.go:89] "storage-provisioner" [d2ef88eb-9da4-47ad-b13b-231eb6b4242b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1102 13:14:45.241673  295944 retry.go:31] will retry after 314.788714ms: missing components: kube-dns
	I1102 13:14:45.444827  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:45.563675  295944 system_pods.go:86] 19 kube-system pods found
	I1102 13:14:45.563721  295944 system_pods.go:89] "coredns-66bc5c9577-6rft9" [5b0e5e4b-ac40-44ba-8e2b-3f54328cc03c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1102 13:14:45.563731  295944 system_pods.go:89] "csi-hostpath-attacher-0" [86982496-2936-427c-8bd2-143ec9d85d4b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1102 13:14:45.563739  295944 system_pods.go:89] "csi-hostpath-resizer-0" [5254ca1f-81fa-460f-b1f4-b29debc9a19c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1102 13:14:45.563746  295944 system_pods.go:89] "csi-hostpathplugin-gnxtb" [8ffd4cf3-a402-41bb-882b-c2bbcfa42b1d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1102 13:14:45.563756  295944 system_pods.go:89] "etcd-addons-230560" [a35f5cb3-d29f-45ec-b91d-18d67639c4c8] Running
	I1102 13:14:45.563761  295944 system_pods.go:89] "kindnet-5dpxs" [577f8605-d5ec-4adc-aae2-56c098398734] Running
	I1102 13:14:45.563772  295944 system_pods.go:89] "kube-apiserver-addons-230560" [ec8d7fe3-227e-43c7-b35f-12f3e6c724a5] Running
	I1102 13:14:45.563792  295944 system_pods.go:89] "kube-controller-manager-addons-230560" [910be431-8bbf-4929-bce6-6d85b250063c] Running
	I1102 13:14:45.563799  295944 system_pods.go:89] "kube-ingress-dns-minikube" [afb0b7f8-4856-42f1-871f-197c757927fa] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1102 13:14:45.563805  295944 system_pods.go:89] "kube-proxy-dzts7" [b63fb7f2-7a3e-4ee4-92d8-6ee0a88acebb] Running
	I1102 13:14:45.563814  295944 system_pods.go:89] "kube-scheduler-addons-230560" [c869faf3-5fdf-4dd6-b3ae-a5c03b7b9ca3] Running
	I1102 13:14:45.563821  295944 system_pods.go:89] "metrics-server-85b7d694d7-npk5l" [828b803a-a751-44ee-9dfe-b0ffbba104f3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1102 13:14:45.563837  295944 system_pods.go:89] "nvidia-device-plugin-daemonset-qkqx4" [8db4aae3-2657-4773-b5b6-62fb681edaa0] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1102 13:14:45.563845  295944 system_pods.go:89] "registry-6b586f9694-qlm8d" [d973110c-93dd-4878-bcf2-c23a761ada84] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1102 13:14:45.563860  295944 system_pods.go:89] "registry-creds-764b6fb674-5ssmw" [ad1063de-4f14-47bb-a909-fea786b4406a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1102 13:14:45.563871  295944 system_pods.go:89] "registry-proxy-gk6xb" [bd4d6b0b-09b4-4d00-8a1f-01347f478af8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1102 13:14:45.563878  295944 system_pods.go:89] "snapshot-controller-7d9fbc56b8-rrmp6" [958df77e-6b38-4b22-b999-53d4f5e9d784] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1102 13:14:45.563891  295944 system_pods.go:89] "snapshot-controller-7d9fbc56b8-v88xw" [95d508cf-77db-4d04-aed4-ee3059235c7f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1102 13:14:45.563900  295944 system_pods.go:89] "storage-provisioner" [d2ef88eb-9da4-47ad-b13b-231eb6b4242b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1102 13:14:45.563920  295944 retry.go:31] will retry after 340.133045ms: missing components: kube-dns
	I1102 13:14:45.586028  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:45.646590  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:45.647236  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:45.919002  295944 system_pods.go:86] 19 kube-system pods found
	I1102 13:14:45.919038  295944 system_pods.go:89] "coredns-66bc5c9577-6rft9" [5b0e5e4b-ac40-44ba-8e2b-3f54328cc03c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1102 13:14:45.919049  295944 system_pods.go:89] "csi-hostpath-attacher-0" [86982496-2936-427c-8bd2-143ec9d85d4b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1102 13:14:45.919064  295944 system_pods.go:89] "csi-hostpath-resizer-0" [5254ca1f-81fa-460f-b1f4-b29debc9a19c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1102 13:14:45.919072  295944 system_pods.go:89] "csi-hostpathplugin-gnxtb" [8ffd4cf3-a402-41bb-882b-c2bbcfa42b1d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1102 13:14:45.919077  295944 system_pods.go:89] "etcd-addons-230560" [a35f5cb3-d29f-45ec-b91d-18d67639c4c8] Running
	I1102 13:14:45.919082  295944 system_pods.go:89] "kindnet-5dpxs" [577f8605-d5ec-4adc-aae2-56c098398734] Running
	I1102 13:14:45.919087  295944 system_pods.go:89] "kube-apiserver-addons-230560" [ec8d7fe3-227e-43c7-b35f-12f3e6c724a5] Running
	I1102 13:14:45.919095  295944 system_pods.go:89] "kube-controller-manager-addons-230560" [910be431-8bbf-4929-bce6-6d85b250063c] Running
	I1102 13:14:45.919102  295944 system_pods.go:89] "kube-ingress-dns-minikube" [afb0b7f8-4856-42f1-871f-197c757927fa] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1102 13:14:45.919111  295944 system_pods.go:89] "kube-proxy-dzts7" [b63fb7f2-7a3e-4ee4-92d8-6ee0a88acebb] Running
	I1102 13:14:45.919117  295944 system_pods.go:89] "kube-scheduler-addons-230560" [c869faf3-5fdf-4dd6-b3ae-a5c03b7b9ca3] Running
	I1102 13:14:45.919124  295944 system_pods.go:89] "metrics-server-85b7d694d7-npk5l" [828b803a-a751-44ee-9dfe-b0ffbba104f3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1102 13:14:45.919140  295944 system_pods.go:89] "nvidia-device-plugin-daemonset-qkqx4" [8db4aae3-2657-4773-b5b6-62fb681edaa0] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1102 13:14:45.919152  295944 system_pods.go:89] "registry-6b586f9694-qlm8d" [d973110c-93dd-4878-bcf2-c23a761ada84] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1102 13:14:45.919161  295944 system_pods.go:89] "registry-creds-764b6fb674-5ssmw" [ad1063de-4f14-47bb-a909-fea786b4406a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1102 13:14:45.919171  295944 system_pods.go:89] "registry-proxy-gk6xb" [bd4d6b0b-09b4-4d00-8a1f-01347f478af8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1102 13:14:45.919178  295944 system_pods.go:89] "snapshot-controller-7d9fbc56b8-rrmp6" [958df77e-6b38-4b22-b999-53d4f5e9d784] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1102 13:14:45.919184  295944 system_pods.go:89] "snapshot-controller-7d9fbc56b8-v88xw" [95d508cf-77db-4d04-aed4-ee3059235c7f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1102 13:14:45.919192  295944 system_pods.go:89] "storage-provisioner" [d2ef88eb-9da4-47ad-b13b-231eb6b4242b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1102 13:14:45.919216  295944 retry.go:31] will retry after 413.151336ms: missing components: kube-dns
	I1102 13:14:45.944882  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:46.087140  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:46.144664  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:46.146454  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:46.341305  295944 system_pods.go:86] 19 kube-system pods found
	I1102 13:14:46.341353  295944 system_pods.go:89] "coredns-66bc5c9577-6rft9" [5b0e5e4b-ac40-44ba-8e2b-3f54328cc03c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1102 13:14:46.341365  295944 system_pods.go:89] "csi-hostpath-attacher-0" [86982496-2936-427c-8bd2-143ec9d85d4b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1102 13:14:46.341380  295944 system_pods.go:89] "csi-hostpath-resizer-0" [5254ca1f-81fa-460f-b1f4-b29debc9a19c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1102 13:14:46.341392  295944 system_pods.go:89] "csi-hostpathplugin-gnxtb" [8ffd4cf3-a402-41bb-882b-c2bbcfa42b1d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1102 13:14:46.341405  295944 system_pods.go:89] "etcd-addons-230560" [a35f5cb3-d29f-45ec-b91d-18d67639c4c8] Running
	I1102 13:14:46.341429  295944 system_pods.go:89] "kindnet-5dpxs" [577f8605-d5ec-4adc-aae2-56c098398734] Running
	I1102 13:14:46.341434  295944 system_pods.go:89] "kube-apiserver-addons-230560" [ec8d7fe3-227e-43c7-b35f-12f3e6c724a5] Running
	I1102 13:14:46.341444  295944 system_pods.go:89] "kube-controller-manager-addons-230560" [910be431-8bbf-4929-bce6-6d85b250063c] Running
	I1102 13:14:46.341455  295944 system_pods.go:89] "kube-ingress-dns-minikube" [afb0b7f8-4856-42f1-871f-197c757927fa] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1102 13:14:46.341461  295944 system_pods.go:89] "kube-proxy-dzts7" [b63fb7f2-7a3e-4ee4-92d8-6ee0a88acebb] Running
	I1102 13:14:46.341466  295944 system_pods.go:89] "kube-scheduler-addons-230560" [c869faf3-5fdf-4dd6-b3ae-a5c03b7b9ca3] Running
	I1102 13:14:46.341473  295944 system_pods.go:89] "metrics-server-85b7d694d7-npk5l" [828b803a-a751-44ee-9dfe-b0ffbba104f3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1102 13:14:46.341493  295944 system_pods.go:89] "nvidia-device-plugin-daemonset-qkqx4" [8db4aae3-2657-4773-b5b6-62fb681edaa0] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1102 13:14:46.341507  295944 system_pods.go:89] "registry-6b586f9694-qlm8d" [d973110c-93dd-4878-bcf2-c23a761ada84] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1102 13:14:46.341518  295944 system_pods.go:89] "registry-creds-764b6fb674-5ssmw" [ad1063de-4f14-47bb-a909-fea786b4406a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1102 13:14:46.341529  295944 system_pods.go:89] "registry-proxy-gk6xb" [bd4d6b0b-09b4-4d00-8a1f-01347f478af8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1102 13:14:46.341536  295944 system_pods.go:89] "snapshot-controller-7d9fbc56b8-rrmp6" [958df77e-6b38-4b22-b999-53d4f5e9d784] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1102 13:14:46.341551  295944 system_pods.go:89] "snapshot-controller-7d9fbc56b8-v88xw" [95d508cf-77db-4d04-aed4-ee3059235c7f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1102 13:14:46.341564  295944 system_pods.go:89] "storage-provisioner" [d2ef88eb-9da4-47ad-b13b-231eb6b4242b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1102 13:14:46.341584  295944 retry.go:31] will retry after 690.623926ms: missing components: kube-dns
	I1102 13:14:46.466071  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:46.589269  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:46.689556  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:46.689733  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:46.943706  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:47.038475  295944 system_pods.go:86] 19 kube-system pods found
	I1102 13:14:47.038514  295944 system_pods.go:89] "coredns-66bc5c9577-6rft9" [5b0e5e4b-ac40-44ba-8e2b-3f54328cc03c] Running
	I1102 13:14:47.038526  295944 system_pods.go:89] "csi-hostpath-attacher-0" [86982496-2936-427c-8bd2-143ec9d85d4b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1102 13:14:47.038536  295944 system_pods.go:89] "csi-hostpath-resizer-0" [5254ca1f-81fa-460f-b1f4-b29debc9a19c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1102 13:14:47.038544  295944 system_pods.go:89] "csi-hostpathplugin-gnxtb" [8ffd4cf3-a402-41bb-882b-c2bbcfa42b1d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1102 13:14:47.038554  295944 system_pods.go:89] "etcd-addons-230560" [a35f5cb3-d29f-45ec-b91d-18d67639c4c8] Running
	I1102 13:14:47.038560  295944 system_pods.go:89] "kindnet-5dpxs" [577f8605-d5ec-4adc-aae2-56c098398734] Running
	I1102 13:14:47.038570  295944 system_pods.go:89] "kube-apiserver-addons-230560" [ec8d7fe3-227e-43c7-b35f-12f3e6c724a5] Running
	I1102 13:14:47.038581  295944 system_pods.go:89] "kube-controller-manager-addons-230560" [910be431-8bbf-4929-bce6-6d85b250063c] Running
	I1102 13:14:47.038592  295944 system_pods.go:89] "kube-ingress-dns-minikube" [afb0b7f8-4856-42f1-871f-197c757927fa] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1102 13:14:47.038596  295944 system_pods.go:89] "kube-proxy-dzts7" [b63fb7f2-7a3e-4ee4-92d8-6ee0a88acebb] Running
	I1102 13:14:47.038601  295944 system_pods.go:89] "kube-scheduler-addons-230560" [c869faf3-5fdf-4dd6-b3ae-a5c03b7b9ca3] Running
	I1102 13:14:47.038607  295944 system_pods.go:89] "metrics-server-85b7d694d7-npk5l" [828b803a-a751-44ee-9dfe-b0ffbba104f3] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1102 13:14:47.038647  295944 system_pods.go:89] "nvidia-device-plugin-daemonset-qkqx4" [8db4aae3-2657-4773-b5b6-62fb681edaa0] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1102 13:14:47.038654  295944 system_pods.go:89] "registry-6b586f9694-qlm8d" [d973110c-93dd-4878-bcf2-c23a761ada84] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1102 13:14:47.038660  295944 system_pods.go:89] "registry-creds-764b6fb674-5ssmw" [ad1063de-4f14-47bb-a909-fea786b4406a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1102 13:14:47.038667  295944 system_pods.go:89] "registry-proxy-gk6xb" [bd4d6b0b-09b4-4d00-8a1f-01347f478af8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1102 13:14:47.038680  295944 system_pods.go:89] "snapshot-controller-7d9fbc56b8-rrmp6" [958df77e-6b38-4b22-b999-53d4f5e9d784] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1102 13:14:47.038689  295944 system_pods.go:89] "snapshot-controller-7d9fbc56b8-v88xw" [95d508cf-77db-4d04-aed4-ee3059235c7f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1102 13:14:47.038702  295944 system_pods.go:89] "storage-provisioner" [d2ef88eb-9da4-47ad-b13b-231eb6b4242b] Running
	I1102 13:14:47.038718  295944 system_pods.go:126] duration metric: took 2.093464281s to wait for k8s-apps to be running ...
	I1102 13:14:47.038731  295944 system_svc.go:44] waiting for kubelet service to be running ....
	I1102 13:14:47.038793  295944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1102 13:14:47.054704  295944 system_svc.go:56] duration metric: took 15.953416ms WaitForService to wait for kubelet
	I1102 13:14:47.054743  295944 kubeadm.go:587] duration metric: took 44.806099673s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1102 13:14:47.054764  295944 node_conditions.go:102] verifying NodePressure condition ...
	I1102 13:14:47.058030  295944 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1102 13:14:47.058063  295944 node_conditions.go:123] node cpu capacity is 2
	I1102 13:14:47.058077  295944 node_conditions.go:105] duration metric: took 3.305914ms to run NodePressure ...
	I1102 13:14:47.058089  295944 start.go:242] waiting for startup goroutines ...
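
The NodePressure verification above reads the node's capacity and condition list straight from the API: 203034800Ki of ephemeral storage and 2 CPUs, with no pressure conditions set. A hedged kubectl equivalent of that inspection (illustrative, not the exact calls node_conditions.go makes):

	kubectl get node addons-230560 -o jsonpath='{.status.capacity}'
	kubectl get node addons-230560 -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'
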
	I1102 13:14:47.085492  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:47.147449  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:47.147887  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:47.443134  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:47.585453  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:47.646681  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:47.648812  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:47.944035  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:48.085146  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:48.145912  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:48.146552  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:48.443887  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:48.585215  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:48.645656  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:48.646378  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:48.696636  295944 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1102 13:14:48.943240  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:49.085946  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:49.144542  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:49.147374  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:49.443537  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:49.585917  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:49.647915  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:49.648898  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:49.706141  295944 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.009417989s)
	W1102 13:14:49.706181  295944 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1102 13:14:49.706200  295944 retry.go:31] will retry after 18.837928504s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
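
The stderr above pinpoints the failure: at least one YAML document in ig-crd.yaml lacks the mandatory apiVersion and kind header fields, so kubectl's client-side validation rejects the file even though everything in ig-deployment.yaml applies cleanly. The same check can be reproduced without touching the cluster (a sketch; the CRD header below is illustrative with a hypothetical name, not the actual ig-crd.yaml contents):

	# A client-side dry run triggers the same schema validation as the failing apply
	kubectl apply --dry-run=client -f /etc/kubernetes/addons/ig-crd.yaml

	# Every document in the file must open with both header fields, e.g.:
	cat <<'EOF'
	apiVersion: apiextensions.k8s.io/v1
	kind: CustomResourceDefinition
	metadata:
	  name: widgets.example.com   # hypothetical name, format <plural>.<group>
	EOF
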
	I1102 13:14:49.943950  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:50.085755  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:50.145433  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:50.147535  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:50.443150  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:50.585076  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:50.645395  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:50.646053  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:50.943106  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:51.085602  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:51.147713  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:51.156074  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:51.442960  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:51.585087  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:51.646298  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:51.646573  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:51.946191  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:52.085854  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:52.145360  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:52.147404  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:52.444234  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:52.590493  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:52.647783  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:52.648225  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:52.946776  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:53.089565  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:53.151634  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:53.152962  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:53.445677  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:53.586910  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:53.648627  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:53.649377  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:53.944783  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:54.087376  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:54.147795  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:54.148312  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:54.444788  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:54.585990  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:54.687213  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:54.687217  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:54.958851  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:55.085907  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:55.146779  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:55.146999  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:55.445218  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:55.586269  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:55.646661  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:55.647091  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:55.943260  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:56.085668  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:56.146631  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:56.146885  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:56.443466  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:56.585687  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:56.646167  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:56.648457  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:56.943687  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:57.085933  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:57.144308  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:57.145165  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:57.443864  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:57.585197  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:57.645993  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:57.648367  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:57.944089  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:58.086043  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:58.145739  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:58.146213  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:58.444030  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:58.585257  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:58.645775  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:58.646839  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:58.943467  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:59.085381  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:59.146262  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:59.147787  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:59.444074  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:14:59.585422  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:14:59.647302  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:14:59.647867  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:14:59.943440  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:00.090318  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:15:00.148435  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:00.150753  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:00.451266  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:00.586190  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:15:00.657998  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:00.658389  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:00.948357  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:01.085964  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:15:01.147663  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:01.147828  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:01.444245  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:01.586152  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:15:01.648216  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:01.648955  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:01.944803  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:02.085667  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:15:02.146210  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:02.147781  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:02.443562  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:02.585524  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:15:02.645823  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:02.647243  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:02.943423  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:03.085986  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:15:03.144310  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:03.147418  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:03.446774  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:03.586760  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:15:03.644817  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:03.649650  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:03.944540  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:04.090192  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:15:04.147657  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:04.148059  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:04.443445  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:04.608632  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:15:04.649329  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:04.649794  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:04.944824  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:05.089511  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:15:05.154374  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:05.155231  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:05.443565  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:05.586536  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:15:05.646339  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:05.647845  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:05.944293  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:06.087049  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:15:06.201866  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:06.201995  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:06.443304  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:06.585855  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:15:06.644137  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:06.645107  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:06.948444  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:07.085778  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:15:07.145047  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:07.145695  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:07.443620  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:07.585467  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:15:07.644316  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:07.646568  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:07.944034  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:08.084929  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:15:08.145771  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:08.145952  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:08.446137  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:08.544273  295944 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1102 13:15:08.586168  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:15:08.687424  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:08.687904  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:08.943112  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:09.086017  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:15:09.147272  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:09.147598  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:09.444973  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:09.574367  295944 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.030048061s)
	W1102 13:15:09.574407  295944 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1102 13:15:09.574427  295944 retry.go:31] will retry after 28.801030851s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
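
retry.go schedules each new attempt with a longer delay: 18.8s after the first failure above, 28.8s after this one. A minimal shell sketch of that apply-with-growing-backoff loop (an assumption for illustration, not minikube's actual retry.go implementation):

	delay=19
	for attempt in 1 2 3; do
	  if sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	       /var/lib/minikube/binaries/v1.34.1/kubectl apply --force \
	       -f /etc/kubernetes/addons/ig-crd.yaml \
	       -f /etc/kubernetes/addons/ig-deployment.yaml; then
	    break
	  fi
	  echo "attempt ${attempt} failed; will retry after ${delay}s"
	  sleep "${delay}"
	  delay=$((delay + 10))   # the log's delays grow by roughly this much
	done
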
	I1102 13:15:09.586085  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:15:09.644729  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:09.645712  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:09.946120  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:10.086398  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:15:10.187545  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:10.187723  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:10.443882  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:10.586109  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:15:10.644771  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:10.646406  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:10.944153  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:11.086756  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:15:11.187403  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:11.187850  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:11.443907  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:11.585777  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:15:11.646250  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:11.646995  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:11.943281  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:12.089144  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:15:12.145950  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:12.146795  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:12.443643  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:12.585717  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:15:12.646298  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:12.647254  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:12.943557  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:13.085999  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:15:13.144776  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:13.146220  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:13.443460  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:13.585697  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:15:13.646883  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:13.647521  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:13.943841  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:14.086087  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:15:14.147272  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:14.148434  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:14.443776  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:14.585578  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:15:14.645174  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:14.646673  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:14.943901  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:15.085977  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:15:15.144615  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:15.148985  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:15.443750  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:15.586896  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:15:15.644949  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:15.646761  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:15.943951  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:16.085365  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:15:16.152092  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:16.152549  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:16.444204  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:16.585482  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:15:16.645874  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:16.647504  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:16.944198  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:17.089215  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:15:17.146547  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:17.146970  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:17.444164  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:17.586589  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:15:17.646736  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:17.646896  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:17.944061  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:18.085261  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:15:18.147911  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:18.150001  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:18.444171  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:18.585349  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:15:18.645430  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:18.646871  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:18.943011  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:19.085181  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:15:19.144314  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:19.147301  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:19.444413  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:19.586296  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:15:19.689896  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:19.690334  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:19.950208  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:20.086345  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:15:20.145262  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:20.147018  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:20.443610  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:20.585919  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:15:20.646369  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:20.651752  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:20.944615  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:21.086238  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:15:21.143987  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:21.145950  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:21.443252  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:21.585506  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:15:21.644747  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:21.645407  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:21.952370  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:22.087139  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:15:22.144078  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:22.144734  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:22.444257  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:22.585296  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:15:22.648145  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:22.648560  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:22.955917  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:23.085219  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:15:23.146419  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:23.148137  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:23.446177  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:23.585184  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:15:23.644523  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:23.646924  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:23.944137  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:24.085492  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:15:24.144695  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:24.147340  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:24.444368  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:24.585781  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1102 13:15:24.646460  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:24.647157  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:24.950972  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:25.085906  295944 kapi.go:107] duration metric: took 1m17.503911279s to wait for kubernetes.io/minikube-addons=registry ...
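
After 1m17.5s of polling, the pods labelled kubernetes.io/minikube-addons=registry finally report Ready and their kapi.go loop exits; the ingress-nginx, csi-hostpath-driver, and gcp-auth loops continue below. A hedged one-line equivalent of that wait (kubectl's built-in readiness wait, not what kapi.go actually calls):

	kubectl -n kube-system wait pod -l kubernetes.io/minikube-addons=registry --for=condition=Ready --timeout=120s
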
	I1102 13:15:25.144662  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:25.146953  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:25.444094  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:25.647205  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:25.647602  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:25.943927  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:26.144575  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:26.147205  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:26.443875  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:26.645229  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:26.646889  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:26.944172  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:27.144436  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:27.147842  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:27.443866  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:27.645346  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:27.647125  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:27.943820  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:28.144833  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:28.146876  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:28.444344  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:28.646752  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:28.646972  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:28.943320  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:29.146437  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:29.146601  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:29.444175  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:29.646516  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:29.647355  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:29.943247  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:30.145057  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:30.147220  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:30.445389  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:30.644599  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:30.646176  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:30.943227  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:31.145733  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:31.146787  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:31.444715  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:31.647363  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:31.648356  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:31.943463  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:32.147110  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:32.147301  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:32.450280  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:32.646071  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:32.646536  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:32.944012  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:33.145553  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:33.146336  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:33.443823  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:33.645571  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:33.645806  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:33.943416  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1102 13:15:34.146025  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:34.146260  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:34.443807  295944 kapi.go:107] duration metric: took 1m22.003713397s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1102 13:15:34.448204  295944 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-230560 cluster.
	I1102 13:15:34.452926  295944 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1102 13:15:34.457152  295944 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1102 13:15:34.645506  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:34.647159  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:35.145947  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:35.146128  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:35.651074  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:35.651272  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:36.145952  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:36.146342  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:36.646161  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:36.646373  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:37.146191  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:37.146832  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:37.645747  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:37.646519  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:38.145517  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:38.147568  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:38.375823  295944 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1102 13:15:38.646054  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:38.646538  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:39.146380  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:39.146691  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:39.397584  295944 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.021725083s)
	W1102 13:15:39.397617  295944 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1102 13:15:39.397702  295944 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
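
Note on the failure above: kubectl's client-side validation rejects any manifest document that omits apiVersion or kind, which is exactly what the ig-crd.yaml error reports. Below is a minimal sketch of that check in Go, assuming gopkg.in/yaml.v3 is available; the file path and the typeMeta struct are illustrative, not minikube's or kubectl's actual code.

    package main

    import (
    	"fmt"
    	"io"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    // typeMeta holds the two fields kubectl's validation complained about.
    type typeMeta struct {
    	APIVersion string `yaml:"apiVersion"`
    	Kind       string `yaml:"kind"`
    }

    func main() {
    	f, err := os.Open("ig-crd.yaml") // illustrative path
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()

    	// A manifest file may contain several documents separated by "---".
    	dec := yaml.NewDecoder(f)
    	for i := 0; ; i++ {
    		var tm typeMeta
    		if err := dec.Decode(&tm); err == io.EOF {
    			break
    		} else if err != nil {
    			panic(err)
    		}
    		if tm.APIVersion == "" || tm.Kind == "" {
    			// Matches the log: "apiVersion not set, kind not set".
    			fmt.Printf("document %d: apiVersion or kind not set\n", i)
    		}
    	}
    }

Passing --validate=false, as the error message suggests, would only skip this client-side check; the manifest would still be missing its type metadata.
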
	I1102 13:15:39.644493  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:39.645761  295944 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1102 13:15:40.146738  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:40.148233  295944 kapi.go:107] duration metric: took 1m31.005988888s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1102 13:15:40.645440  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:41.144167  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:41.644677  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:42.144491  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:42.644592  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:43.144231  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:43.644604  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:44.144188  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:44.645346  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:45.149758  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:45.644318  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:46.144003  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:46.654016  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:47.144798  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:47.643985  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:48.144815  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:48.645012  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:49.145120  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:49.645425  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:50.144747  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:50.645396  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:51.145003  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:51.645038  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:52.143987  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:52.645461  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:53.143713  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:53.643783  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:54.144138  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:54.644588  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:55.144402  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:55.644416  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:56.144313  295944 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1102 13:15:56.644358  295944 kapi.go:107] duration metric: took 1m48.003555043s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1102 13:15:56.647880  295944 out.go:179] * Enabled addons: amd-gpu-device-plugin, nvidia-device-plugin, registry-creds, storage-provisioner, storage-provisioner-rancher, cloud-spanner, ingress-dns, yakd, metrics-server, default-storageclass, volumesnapshots, registry, gcp-auth, csi-hostpath-driver, ingress
	I1102 13:15:56.650818  295944 addons.go:515] duration metric: took 1m54.401733059s for enable addons: enabled=[amd-gpu-device-plugin nvidia-device-plugin registry-creds storage-provisioner storage-provisioner-rancher cloud-spanner ingress-dns yakd metrics-server default-storageclass volumesnapshots registry gcp-auth csi-hostpath-driver ingress]
	I1102 13:15:56.650879  295944 start.go:247] waiting for cluster config update ...
	I1102 13:15:56.650903  295944 start.go:256] writing updated cluster config ...
	I1102 13:15:56.651203  295944 ssh_runner.go:195] Run: rm -f paused
	I1102 13:15:56.656614  295944 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1102 13:15:56.660004  295944 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-6rft9" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:15:56.664943  295944 pod_ready.go:94] pod "coredns-66bc5c9577-6rft9" is "Ready"
	I1102 13:15:56.664974  295944 pod_ready.go:86] duration metric: took 4.942922ms for pod "coredns-66bc5c9577-6rft9" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:15:56.667138  295944 pod_ready.go:83] waiting for pod "etcd-addons-230560" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:15:56.671608  295944 pod_ready.go:94] pod "etcd-addons-230560" is "Ready"
	I1102 13:15:56.671640  295944 pod_ready.go:86] duration metric: took 4.478268ms for pod "etcd-addons-230560" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:15:56.674041  295944 pod_ready.go:83] waiting for pod "kube-apiserver-addons-230560" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:15:56.678301  295944 pod_ready.go:94] pod "kube-apiserver-addons-230560" is "Ready"
	I1102 13:15:56.678326  295944 pod_ready.go:86] duration metric: took 4.258221ms for pod "kube-apiserver-addons-230560" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:15:56.680789  295944 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-230560" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:15:57.060979  295944 pod_ready.go:94] pod "kube-controller-manager-addons-230560" is "Ready"
	I1102 13:15:57.061012  295944 pod_ready.go:86] duration metric: took 380.165348ms for pod "kube-controller-manager-addons-230560" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:15:57.261541  295944 pod_ready.go:83] waiting for pod "kube-proxy-dzts7" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:15:57.660885  295944 pod_ready.go:94] pod "kube-proxy-dzts7" is "Ready"
	I1102 13:15:57.660917  295944 pod_ready.go:86] duration metric: took 399.349291ms for pod "kube-proxy-dzts7" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:15:57.861074  295944 pod_ready.go:83] waiting for pod "kube-scheduler-addons-230560" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:15:58.260419  295944 pod_ready.go:94] pod "kube-scheduler-addons-230560" is "Ready"
	I1102 13:15:58.260489  295944 pod_ready.go:86] duration metric: took 399.388209ms for pod "kube-scheduler-addons-230560" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:15:58.260509  295944 pod_ready.go:40] duration metric: took 1.603865354s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1102 13:15:58.314774  295944 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1102 13:15:58.319879  295944 out.go:179] * Done! kubectl is now configured to use "addons-230560" cluster and "default" namespace by default
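
Annotation: the kapi.go:96 lines above poll the cluster roughly every 500ms until every pod matching a label selector reports Running. A condensed sketch of that pattern with client-go follows; it is a simplification, not minikube's actual kapi.go (minikube also handles pods that disappear), and the selector is taken from this run for illustration.

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitForPods blocks until every pod matching selector in ns is Running.
    func waitForPods(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
    	for {
    		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
    		if err != nil {
    			return err
    		}
    		ready := len(pods.Items) > 0
    		for _, p := range pods.Items {
    			if p.Status.Phase != corev1.PodRunning {
    				ready = false
    				fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
    			}
    		}
    		if ready {
    			return nil
    		}
    		select {
    		case <-ctx.Done():
    			return ctx.Err()
    		case <-time.After(500 * time.Millisecond): // matches the ~0.5s cadence in the log
    		}
    	}
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
    	defer cancel()
    	// Selector taken from the log above.
    	if err := waitForPods(ctx, cs, "ingress-nginx", "app.kubernetes.io/name=ingress-nginx"); err != nil {
    		panic(err)
    	}
    }
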
	
	
	==> CRI-O <==
	Nov 02 13:15:57 addons-230560 crio[833]: time="2025-11-02T13:15:57.49258718Z" level=info msg="Stopped pod sandbox (already stopped): 7c5fecfa197084a4cd2ab11b5b4a8f87a1eeeda5bbc49b10efbcb8778a93c261" id=b0d2b8b6-8602-4643-b5d2-eeb0295a6554 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 02 13:15:57 addons-230560 crio[833]: time="2025-11-02T13:15:57.493138185Z" level=info msg="Removing pod sandbox: 7c5fecfa197084a4cd2ab11b5b4a8f87a1eeeda5bbc49b10efbcb8778a93c261" id=9e989331-e8c8-409f-83fc-6c832f869142 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 02 13:15:57 addons-230560 crio[833]: time="2025-11-02T13:15:57.499533283Z" level=info msg="Removed pod sandbox: 7c5fecfa197084a4cd2ab11b5b4a8f87a1eeeda5bbc49b10efbcb8778a93c261" id=9e989331-e8c8-409f-83fc-6c832f869142 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 02 13:15:59 addons-230560 crio[833]: time="2025-11-02T13:15:59.738370061Z" level=info msg="Running pod sandbox: default/busybox/POD" id=246929bd-7502-4f66-9c4f-3cd0fa54d827 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 02 13:15:59 addons-230560 crio[833]: time="2025-11-02T13:15:59.738491284Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 13:15:59 addons-230560 crio[833]: time="2025-11-02T13:15:59.751197296Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:90cb1aa0f09d957bc19407c1a26ad41bca02a5cf7f0783dedcb5fe98ebc0ff13 UID:e44da318-9eb6-4f4c-971d-b08f91cec38e NetNS:/var/run/netns/874b3778-f15c-4ac3-82a4-13caf6ee3f9d Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400146d230}] Aliases:map[]}"
	Nov 02 13:15:59 addons-230560 crio[833]: time="2025-11-02T13:15:59.751246584Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 02 13:15:59 addons-230560 crio[833]: time="2025-11-02T13:15:59.761370996Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:90cb1aa0f09d957bc19407c1a26ad41bca02a5cf7f0783dedcb5fe98ebc0ff13 UID:e44da318-9eb6-4f4c-971d-b08f91cec38e NetNS:/var/run/netns/874b3778-f15c-4ac3-82a4-13caf6ee3f9d Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400146d230}] Aliases:map[]}"
	Nov 02 13:15:59 addons-230560 crio[833]: time="2025-11-02T13:15:59.761518854Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 02 13:15:59 addons-230560 crio[833]: time="2025-11-02T13:15:59.766471811Z" level=info msg="Ran pod sandbox 90cb1aa0f09d957bc19407c1a26ad41bca02a5cf7f0783dedcb5fe98ebc0ff13 with infra container: default/busybox/POD" id=246929bd-7502-4f66-9c4f-3cd0fa54d827 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 02 13:15:59 addons-230560 crio[833]: time="2025-11-02T13:15:59.76958885Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=ae042654-6f8a-4a84-ac88-1e32d05a4dd2 name=/runtime.v1.ImageService/ImageStatus
	Nov 02 13:15:59 addons-230560 crio[833]: time="2025-11-02T13:15:59.769864964Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=ae042654-6f8a-4a84-ac88-1e32d05a4dd2 name=/runtime.v1.ImageService/ImageStatus
	Nov 02 13:15:59 addons-230560 crio[833]: time="2025-11-02T13:15:59.76998545Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=ae042654-6f8a-4a84-ac88-1e32d05a4dd2 name=/runtime.v1.ImageService/ImageStatus
	Nov 02 13:15:59 addons-230560 crio[833]: time="2025-11-02T13:15:59.770857681Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=7c97f120-db7e-49c4-9b14-b118adb917fb name=/runtime.v1.ImageService/PullImage
	Nov 02 13:15:59 addons-230560 crio[833]: time="2025-11-02T13:15:59.773747378Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 02 13:16:01 addons-230560 crio[833]: time="2025-11-02T13:16:01.897388744Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=7c97f120-db7e-49c4-9b14-b118adb917fb name=/runtime.v1.ImageService/PullImage
	Nov 02 13:16:01 addons-230560 crio[833]: time="2025-11-02T13:16:01.898694293Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=4780d83a-39a7-4a79-a24c-b44f920b359b name=/runtime.v1.ImageService/ImageStatus
	Nov 02 13:16:01 addons-230560 crio[833]: time="2025-11-02T13:16:01.901311232Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=0ce7019f-9357-4c74-961f-29f07a91078f name=/runtime.v1.ImageService/ImageStatus
	Nov 02 13:16:01 addons-230560 crio[833]: time="2025-11-02T13:16:01.910593616Z" level=info msg="Creating container: default/busybox/busybox" id=54f5e0e9-a5e2-4277-93d6-6b19f2e646ab name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 13:16:01 addons-230560 crio[833]: time="2025-11-02T13:16:01.910756481Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 13:16:01 addons-230560 crio[833]: time="2025-11-02T13:16:01.929140791Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 13:16:01 addons-230560 crio[833]: time="2025-11-02T13:16:01.929710381Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 13:16:01 addons-230560 crio[833]: time="2025-11-02T13:16:01.949351213Z" level=info msg="Created container 4b53de0687394db3703572f09142a04572cfa4e04baf6c9fef1aa45de8a2da6c: default/busybox/busybox" id=54f5e0e9-a5e2-4277-93d6-6b19f2e646ab name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 13:16:01 addons-230560 crio[833]: time="2025-11-02T13:16:01.952038521Z" level=info msg="Starting container: 4b53de0687394db3703572f09142a04572cfa4e04baf6c9fef1aa45de8a2da6c" id=ec433340-4b82-496c-a715-68090c87bf28 name=/runtime.v1.RuntimeService/StartContainer
	Nov 02 13:16:01 addons-230560 crio[833]: time="2025-11-02T13:16:01.955547508Z" level=info msg="Started container" PID=5108 containerID=4b53de0687394db3703572f09142a04572cfa4e04baf6c9fef1aa45de8a2da6c description=default/busybox/busybox id=ec433340-4b82-496c-a715-68090c87bf28 name=/runtime.v1.RuntimeService/StartContainer sandboxID=90cb1aa0f09d957bc19407c1a26ad41bca02a5cf7f0783dedcb5fe98ebc0ff13
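
Annotation: the CRI-O entries above are responses to CRI calls such as /runtime.v1.RuntimeService/RunPodSandbox. The same API can be queried directly over CRI-O's gRPC socket; below is a minimal sketch using k8s.io/cri-api. The socket path is CRI-O's default (/var/run/crio/crio.sock) and may differ; this is an illustration, not part of the test harness.

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	// CRI-O listens on a local unix socket; no TLS is involved.
    	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		panic(err)
    	}
    	defer conn.Close()

    	client := runtimeapi.NewRuntimeServiceClient(conn)
    	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    	defer cancel()

    	// Lists the sandboxes whose creation is logged above (e.g. default/busybox).
    	resp, err := client.ListPodSandbox(ctx, &runtimeapi.ListPodSandboxRequest{})
    	if err != nil {
    		panic(err)
    	}
    	for _, s := range resp.Items {
    		fmt.Printf("%s/%s %s\n", s.Metadata.Namespace, s.Metadata.Name, s.State)
    	}
    }
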
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	4b53de0687394       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          7 seconds ago        Running             busybox                                  0                   90cb1aa0f09d9       busybox                                     default
	a7ffc634ec21a       registry.k8s.io/ingress-nginx/controller@sha256:4ae52268a9493fc308d5f2fb67fe657d2499293aa644122d385ddb60c2330dbc                             14 seconds ago       Running             controller                               0                   d0cd0f6b41b44       ingress-nginx-controller-675c5ddd98-vlthl   ingress-nginx
	59d3d49e880a7       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          30 seconds ago       Running             csi-snapshotter                          0                   f9437ce3a1988       csi-hostpathplugin-gnxtb                    kube-system
	495997c964878       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          32 seconds ago       Running             csi-provisioner                          0                   f9437ce3a1988       csi-hostpathplugin-gnxtb                    kube-system
	92ee410347c1f       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            33 seconds ago       Running             liveness-probe                           0                   f9437ce3a1988       csi-hostpathplugin-gnxtb                    kube-system
	0fc09c9a2e59c       9a80c0c8eb61cb88536fa58caaf18357fffd3e9fd0481b2781dfc6359f7654c9                                                                             34 seconds ago       Exited              patch                                    2                   fcc604782c6f5       ingress-nginx-admission-patch-qdswx         ingress-nginx
	14fe9bc0e4e3f       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           34 seconds ago       Running             hostpath                                 0                   f9437ce3a1988       csi-hostpathplugin-gnxtb                    kube-system
	a232385e88d3d       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 36 seconds ago       Running             gcp-auth                                 0                   3d1cd92e75890       gcp-auth-78565c9fb4-t4725                   gcp-auth
	990b6d45c69f1       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:f279436ecca5b88c20fd93c0d2a668ace136058ecad987e96e26014585e335b4                            39 seconds ago       Running             gadget                                   0                   10b781175e4f6       gadget-dv9jw                                gadget
	849382b87b03a       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                43 seconds ago       Running             node-driver-registrar                    0                   f9437ce3a1988       csi-hostpathplugin-gnxtb                    kube-system
	2c92ca6fd79f7       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:2d5727fcf5b9ee2bd367835234500c1ec7f54a0b94ea92a76169a9308a197e93                   44 seconds ago       Exited              create                                   0                   17f2fc0ac69a4       ingress-nginx-admission-create-nh5wk        ingress-nginx
	de7641522a905       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              44 seconds ago       Running             registry-proxy                           0                   6a896cce9b5c5       registry-proxy-gk6xb                        kube-system
	ce226e80e176f       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              48 seconds ago       Running             csi-resizer                              0                   f90af80a65b61       csi-hostpath-resizer-0                      kube-system
	a0486cd1530aa       gcr.io/cloud-spanner-emulator/emulator@sha256:c2688dc4b7ecb4546084321d63c2b3b616a54263488137e18fcb7c7005aef086                               50 seconds ago       Running             cloud-spanner-emulator                   0                   e4fe152b29c45       cloud-spanner-emulator-86bd5cbb97-5rtv5     default
	43495555e2c69       nvcr.io/nvidia/k8s-device-plugin@sha256:80924fc52384565a7c59f1e2f12319fb8f2b02a1c974bb3d73a9853fe01af874                                     54 seconds ago       Running             nvidia-device-plugin-ctr                 0                   29b8acc6b965d       nvidia-device-plugin-daemonset-qkqx4        kube-system
	23d26c5efd413       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      59 seconds ago       Running             volume-snapshot-controller               0                   a62801280fa19       snapshot-controller-7d9fbc56b8-rrmp6        kube-system
	f4000d22ba555       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   59 seconds ago       Running             csi-external-health-monitor-controller   0                   f9437ce3a1988       csi-hostpathplugin-gnxtb                    kube-system
	571d698a41a0b       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             About a minute ago   Running             csi-attacher                             0                   005cb3bb5e38f       csi-hostpath-attacher-0                     kube-system
	1d9d1f4432586       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             About a minute ago   Running             local-path-provisioner                   0                   3d73d04b7ba5d       local-path-provisioner-648f6765c9-cbq27     local-path-storage
	b05b32f995002       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   d09cf2b6c9539       snapshot-controller-7d9fbc56b8-v88xw        kube-system
	ece119ee391be       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               About a minute ago   Running             minikube-ingress-dns                     0                   5e3feebe82373       kube-ingress-dns-minikube                   kube-system
	7d130b18d8ef1       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        About a minute ago   Running             metrics-server                           0                   bf6481a68caa8       metrics-server-85b7d694d7-npk5l             kube-system
	01cc86f91cc93       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           About a minute ago   Running             registry                                 0                   3f6d1dc045b68       registry-6b586f9694-qlm8d                   kube-system
	2e9e9c9def04e       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              About a minute ago   Running             yakd                                     0                   03ab2151ad8a1       yakd-dashboard-5ff678cb9-j4lzk              yakd-dashboard
	7c311915f4fbc       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             About a minute ago   Running             storage-provisioner                      0                   e9c1284e69898       storage-provisioner                         kube-system
	2d7e91ed3fc10       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             About a minute ago   Running             coredns                                  0                   3a1d5b80f2ad2       coredns-66bc5c9577-6rft9                    kube-system
	b8f72f36b8b68       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                                                             2 minutes ago        Running             kube-proxy                               0                   5dcbfbf0bd68f       kube-proxy-dzts7                            kube-system
	7c3129e8902e2       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             2 minutes ago        Running             kindnet-cni                              0                   53336ad664753       kindnet-5dpxs                               kube-system
	ba2b8cd401ace       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                                                             2 minutes ago        Running             etcd                                     0                   d41f150d013d2       etcd-addons-230560                          kube-system
	ae6a81713fca4       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                                                             2 minutes ago        Running             kube-apiserver                           0                   ac4d436091f61       kube-apiserver-addons-230560                kube-system
	e520da42d44ee       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                                                             2 minutes ago        Running             kube-scheduler                           0                   0de86bf9d5e03       kube-scheduler-addons-230560                kube-system
	47bfba99e6f29       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                                                             2 minutes ago        Running             kube-controller-manager                  0                   7d0e91007728a       kube-controller-manager-addons-230560       kube-system
	
	
	==> coredns [2d7e91ed3fc10a735909e92c3d70b5422345ba649e0f465bf27dbb923af7877c] <==
	[INFO] 10.244.0.12:53840 - 54293 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000106094s
	[INFO] 10.244.0.12:53840 - 10597 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.004530979s
	[INFO] 10.244.0.12:53840 - 32280 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.004736002s
	[INFO] 10.244.0.12:53840 - 1617 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.00018109s
	[INFO] 10.244.0.12:53840 - 37667 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000416629s
	[INFO] 10.244.0.12:47877 - 38611 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00017551s
	[INFO] 10.244.0.12:47877 - 37216 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000118163s
	[INFO] 10.244.0.12:53193 - 56203 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000121519s
	[INFO] 10.244.0.12:53193 - 56400 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000156556s
	[INFO] 10.244.0.12:46803 - 44052 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000082659s
	[INFO] 10.244.0.12:46803 - 43855 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000179555s
	[INFO] 10.244.0.12:52409 - 5191 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001257573s
	[INFO] 10.244.0.12:52409 - 5011 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001502342s
	[INFO] 10.244.0.12:48444 - 58470 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000156662s
	[INFO] 10.244.0.12:48444 - 58306 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00014332s
	[INFO] 10.244.0.20:44299 - 14091 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000174984s
	[INFO] 10.244.0.20:46874 - 43093 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000154234s
	[INFO] 10.244.0.20:44232 - 5634 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000159854s
	[INFO] 10.244.0.20:51630 - 28040 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000112395s
	[INFO] 10.244.0.20:56375 - 8314 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000230796s
	[INFO] 10.244.0.20:57702 - 24696 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000371508s
	[INFO] 10.244.0.20:39104 - 13160 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002011788s
	[INFO] 10.244.0.20:36510 - 41422 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002167424s
	[INFO] 10.244.0.20:53921 - 23783 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.000734482s
	[INFO] 10.244.0.20:44992 - 41293 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001514068s
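
Annotation: the NXDOMAIN bursts above are resolv.conf search-domain expansion. With the Kubernetes default of ndots:5, a name like registry.kube-system.svc.cluster.local is first tried with each search suffix (kube-system.svc.cluster.local, svc.cluster.local, cluster.local, us-east-2.compute.internal) before the bare name succeeds. A trailing dot marks a name fully qualified and skips that expansion; a small sketch using Go's pure resolver, only meaningful when run inside a cluster pod:

    package main

    import (
    	"context"
    	"fmt"
    	"net"
    )

    func main() {
    	r := &net.Resolver{PreferGo: true}
    	// The trailing dot makes the name fully qualified, so the
    	// resolv.conf search list is not walked first.
    	addrs, err := r.LookupHost(context.Background(), "registry.kube-system.svc.cluster.local.")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(addrs)
    }
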
	
	
	==> describe nodes <==
	Name:               addons-230560
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-230560
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=170a9221ec214abbddb4c7cdac340516a92b239a
	                    minikube.k8s.io/name=addons-230560
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_02T13_13_58_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-230560
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-230560"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 02 Nov 2025 13:13:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-230560
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 02 Nov 2025 13:16:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 02 Nov 2025 13:15:39 +0000   Sun, 02 Nov 2025 13:13:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 02 Nov 2025 13:15:39 +0000   Sun, 02 Nov 2025 13:13:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 02 Nov 2025 13:15:39 +0000   Sun, 02 Nov 2025 13:13:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 02 Nov 2025 13:15:39 +0000   Sun, 02 Nov 2025 13:14:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-230560
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                c6afd72e-c193-43eb-ae12-e791b22211d1
	  Boot ID:                    4c303cf4-c0b6-4c0d-aa1a-d7509feb15e7
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (26 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     cloud-spanner-emulator-86bd5cbb97-5rtv5      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m4s
	  gadget                      gadget-dv9jw                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m3s
	  gcp-auth                    gcp-auth-78565c9fb4-t4725                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         118s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-vlthl    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         2m2s
	  kube-system                 coredns-66bc5c9577-6rft9                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m8s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 csi-hostpathplugin-gnxtb                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 etcd-addons-230560                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m14s
	  kube-system                 kindnet-5dpxs                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m8s
	  kube-system                 kube-apiserver-addons-230560                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m13s
	  kube-system                 kube-controller-manager-addons-230560        200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m13s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 kube-proxy-dzts7                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m8s
	  kube-system                 kube-scheduler-addons-230560                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m14s
	  kube-system                 metrics-server-85b7d694d7-npk5l              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         2m3s
	  kube-system                 nvidia-device-plugin-daemonset-qkqx4         0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 registry-6b586f9694-qlm8d                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 registry-creds-764b6fb674-5ssmw              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 registry-proxy-gk6xb                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	  kube-system                 snapshot-controller-7d9fbc56b8-rrmp6         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 snapshot-controller-7d9fbc56b8-v88xw         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m3s
	  local-path-storage          local-path-provisioner-648f6765c9-cbq27      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m4s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-j4lzk               0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     2m3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m5s                   kube-proxy       
	  Normal   Starting                 2m21s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m21s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m21s (x8 over 2m21s)  kubelet          Node addons-230560 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m21s (x8 over 2m21s)  kubelet          Node addons-230560 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m21s (x8 over 2m21s)  kubelet          Node addons-230560 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m13s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m13s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m13s                  kubelet          Node addons-230560 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m13s                  kubelet          Node addons-230560 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m13s                  kubelet          Node addons-230560 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m9s                   node-controller  Node addons-230560 event: Registered Node addons-230560 in Controller
	  Normal   NodeReady                86s                    kubelet          Node addons-230560 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov 2 11:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015966] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.510742] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034359] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.787410] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.238409] kauditd_printk_skb: 36 callbacks suppressed
	[Nov 2 13:12] kauditd_printk_skb: 8 callbacks suppressed
	[Nov 2 13:13] overlayfs: idmapped layers are currently not supported
	[  +0.073328] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [ba2b8cd401ace9132335713de0f6619fc89d02ed1a60281902f918001c3a9bc6] <==
	{"level":"warn","ts":"2025-11-02T13:13:51.923237Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:13:51.943057Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:13:51.956695Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:13:51.979424Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:13:51.991279Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:13:52.011130Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:13:52.027886Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:13:52.055807Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:13:52.089339Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:13:52.097367Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:13:52.117127Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:13:52.127048Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:13:52.139191Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:13:52.155616Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:13:52.178667Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33502","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:13:52.207749Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:13:52.227352Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:13:52.238281Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:13:52.339961Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:14:09.442896Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:14:09.467319Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:14:31.395866Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:14:31.411267Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:14:31.445340Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:14:31.467260Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43428","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [a232385e88d3dfaef5c28fdea88dc774a28d1ba0f9e3dbbe8e2c650b2c532943] <==
	2025/11/02 13:15:33 GCP Auth Webhook started!
	2025/11/02 13:15:59 Ready to marshal response ...
	2025/11/02 13:15:59 Ready to write response ...
	2025/11/02 13:15:59 Ready to marshal response ...
	2025/11/02 13:15:59 Ready to write response ...
	2025/11/02 13:15:59 Ready to marshal response ...
	2025/11/02 13:15:59 Ready to write response ...
	
	
	==> kernel <==
	 13:16:10 up  1:58,  0 user,  load average: 2.40, 3.27, 3.51
	Linux addons-230560 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7c3129e8902e2ba546ec94fea95b907a80a88b9f19819ccc547d8e7cd7ddae43] <==
	E1102 13:14:34.313608       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1102 13:14:34.313738       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1102 13:14:34.314963       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1102 13:14:34.314964       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1102 13:14:35.614833       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1102 13:14:35.614863       1 metrics.go:72] Registering metrics
	I1102 13:14:35.614931       1 controller.go:711] "Syncing nftables rules"
	I1102 13:14:44.320788       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1102 13:14:44.320845       1 main.go:301] handling current node
	I1102 13:14:54.314855       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1102 13:14:54.314922       1 main.go:301] handling current node
	I1102 13:15:04.313713       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1102 13:15:04.313788       1 main.go:301] handling current node
	I1102 13:15:14.314672       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1102 13:15:14.314725       1 main.go:301] handling current node
	I1102 13:15:24.313165       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1102 13:15:24.313195       1 main.go:301] handling current node
	I1102 13:15:34.313280       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1102 13:15:34.313352       1 main.go:301] handling current node
	I1102 13:15:44.315948       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1102 13:15:44.315985       1 main.go:301] handling current node
	I1102 13:15:54.320268       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1102 13:15:54.320299       1 main.go:301] handling current node
	I1102 13:16:04.312726       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1102 13:16:04.312760       1 main.go:301] handling current node
	
	
	==> kube-apiserver [ae6a81713fca42870850a9a5e0a86e40858cbf49ccdf8f4b701bb7c58d5b250d] <==
	I1102 13:14:09.040941       1 alloc.go:328] "allocated clusterIPs" service="kube-system/csi-hostpath-resizer" clusterIPs={"IPv4":"10.106.119.11"}
	W1102 13:14:09.442299       1 logging.go:55] [core] [Channel #259 SubChannel #260]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1102 13:14:09.461814       1 logging.go:55] [core] [Channel #263 SubChannel #264]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I1102 13:14:12.259039       1 alloc.go:328] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.104.194.249"}
	W1102 13:14:31.395864       1 logging.go:55] [core] [Channel #270 SubChannel #271]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1102 13:14:31.410827       1 logging.go:55] [core] [Channel #274 SubChannel #275]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1102 13:14:31.445345       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1102 13:14:31.460912       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1102 13:14:44.406933       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.104.194.249:443: connect: connection refused
	E1102 13:14:44.407064       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.104.194.249:443: connect: connection refused" logger="UnhandledError"
	W1102 13:14:44.407531       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.104.194.249:443: connect: connection refused
	E1102 13:14:44.407624       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.104.194.249:443: connect: connection refused" logger="UnhandledError"
	W1102 13:14:44.513139       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.104.194.249:443: connect: connection refused
	E1102 13:14:44.513184       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.104.194.249:443: connect: connection refused" logger="UnhandledError"
	E1102 13:14:54.841019       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.62.117:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.111.62.117:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.111.62.117:443: connect: connection refused" logger="UnhandledError"
	W1102 13:14:54.841387       1 handler_proxy.go:99] no RequestInfo found in the context
	E1102 13:14:54.841449       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1102 13:14:54.841995       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.62.117:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.111.62.117:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.111.62.117:443: connect: connection refused" logger="UnhandledError"
	E1102 13:14:54.847534       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.62.117:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.111.62.117:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.111.62.117:443: connect: connection refused" logger="UnhandledError"
	I1102 13:14:54.970224       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1102 13:16:07.644356       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:56454: use of closed network connection
	E1102 13:16:08.044761       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:56492: use of closed network connection
	
	
	==> kube-controller-manager [47bfba99e6f299e3b3448bc8864faaedc77b8f94e548ef086dc4f5981ae0360a] <==
	I1102 13:14:01.384560       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1102 13:14:01.396121       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1102 13:14:01.398518       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1102 13:14:01.398922       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1102 13:14:01.414690       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1102 13:14:01.414920       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1102 13:14:01.420514       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1102 13:14:01.420594       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1102 13:14:01.420603       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1102 13:14:01.426893       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1102 13:14:01.428085       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1102 13:14:01.428281       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1102 13:14:01.429588       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1102 13:14:01.429680       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1102 13:14:01.429692       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1102 13:14:01.429702       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	E1102 13:14:07.053541       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1102 13:14:31.387807       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1102 13:14:31.388133       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1102 13:14:31.388231       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1102 13:14:31.434526       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1102 13:14:31.438841       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1102 13:14:31.488841       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1102 13:14:31.539729       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1102 13:14:46.411335       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [b8f72f36b8b681e6188a6ae20fbb9399b5a1bba3a9e3fa05f0101b5f7bd14aac] <==
	I1102 13:14:04.381713       1 server_linux.go:53] "Using iptables proxy"
	I1102 13:14:04.468205       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1102 13:14:04.569047       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1102 13:14:04.569097       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1102 13:14:04.569181       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1102 13:14:04.621259       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1102 13:14:04.621314       1 server_linux.go:132] "Using iptables Proxier"
	I1102 13:14:04.639713       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1102 13:14:04.640046       1 server.go:527] "Version info" version="v1.34.1"
	I1102 13:14:04.640059       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1102 13:14:04.641607       1 config.go:200] "Starting service config controller"
	I1102 13:14:04.641617       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1102 13:14:04.641633       1 config.go:106] "Starting endpoint slice config controller"
	I1102 13:14:04.641637       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1102 13:14:04.641648       1 config.go:403] "Starting serviceCIDR config controller"
	I1102 13:14:04.641652       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1102 13:14:04.642234       1 config.go:309] "Starting node config controller"
	I1102 13:14:04.642241       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1102 13:14:04.642247       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1102 13:14:04.741767       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1102 13:14:04.741811       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1102 13:14:04.741843       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [e520da42d44eee8e7e351ea85bd1e8a1fec19b3c33ded4f2a1188baef7b927e3] <==
	I1102 13:13:53.232881       1 serving.go:386] Generated self-signed cert in-memory
	I1102 13:13:56.617485       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1102 13:13:56.617586       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1102 13:13:56.622352       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1102 13:13:56.622462       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1102 13:13:56.622543       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1102 13:13:56.622581       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1102 13:13:56.622662       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1102 13:13:56.622694       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1102 13:13:56.622877       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1102 13:13:56.622950       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1102 13:13:56.724277       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1102 13:13:56.724348       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1102 13:13:56.724449       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 02 13:15:31 addons-230560 kubelet[1285]: I1102 13:15:31.011323    1285 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gadget/gadget-dv9jw" podStartSLOduration=66.748552176 podStartE2EDuration="1m24.011302456s" podCreationTimestamp="2025-11-02 13:14:07 +0000 UTC" firstStartedPulling="2025-11-02 13:15:12.931800712 +0000 UTC m=+75.669948535" lastFinishedPulling="2025-11-02 13:15:30.194550983 +0000 UTC m=+92.932698815" observedRunningTime="2025-11-02 13:15:31.010337539 +0000 UTC m=+93.748485387" watchObservedRunningTime="2025-11-02 13:15:31.011302456 +0000 UTC m=+93.749450280"
	Nov 02 13:15:35 addons-230560 kubelet[1285]: I1102 13:15:35.374330    1285 scope.go:117] "RemoveContainer" containerID="5d535b520d4636014a073918cf641bbda01c8ddd336f684ccd1617a9b0f0e21e"
	Nov 02 13:15:35 addons-230560 kubelet[1285]: I1102 13:15:35.593854    1285 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Nov 02 13:15:35 addons-230560 kubelet[1285]: I1102 13:15:35.593906    1285 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Nov 02 13:15:36 addons-230560 kubelet[1285]: I1102 13:15:36.022446    1285 scope.go:117] "RemoveContainer" containerID="5d535b520d4636014a073918cf641bbda01c8ddd336f684ccd1617a9b0f0e21e"
	Nov 02 13:15:36 addons-230560 kubelet[1285]: I1102 13:15:36.073845    1285 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-78565c9fb4-t4725" podStartSLOduration=67.34995398 podStartE2EDuration="1m24.073823087s" podCreationTimestamp="2025-11-02 13:14:12 +0000 UTC" firstStartedPulling="2025-11-02 13:15:16.939279477 +0000 UTC m=+79.677427309" lastFinishedPulling="2025-11-02 13:15:33.663148584 +0000 UTC m=+96.401296416" observedRunningTime="2025-11-02 13:15:34.024129066 +0000 UTC m=+96.762276890" watchObservedRunningTime="2025-11-02 13:15:36.073823087 +0000 UTC m=+98.811970910"
	Nov 02 13:15:37 addons-230560 kubelet[1285]: I1102 13:15:37.378083    1285 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a9b8a00-4ab0-41f9-97e4-3a32fb14a481" path="/var/lib/kubelet/pods/4a9b8a00-4ab0-41f9-97e4-3a32fb14a481/volumes"
	Nov 02 13:15:37 addons-230560 kubelet[1285]: I1102 13:15:37.425479    1285 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wktld\" (UniqueName: \"kubernetes.io/projected/6485ce1e-02e0-4b79-bae5-175dec2e9566-kube-api-access-wktld\") pod \"6485ce1e-02e0-4b79-bae5-175dec2e9566\" (UID: \"6485ce1e-02e0-4b79-bae5-175dec2e9566\") "
	Nov 02 13:15:37 addons-230560 kubelet[1285]: I1102 13:15:37.434460    1285 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6485ce1e-02e0-4b79-bae5-175dec2e9566-kube-api-access-wktld" (OuterVolumeSpecName: "kube-api-access-wktld") pod "6485ce1e-02e0-4b79-bae5-175dec2e9566" (UID: "6485ce1e-02e0-4b79-bae5-175dec2e9566"). InnerVolumeSpecName "kube-api-access-wktld". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Nov 02 13:15:37 addons-230560 kubelet[1285]: I1102 13:15:37.528104    1285 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wktld\" (UniqueName: \"kubernetes.io/projected/6485ce1e-02e0-4b79-bae5-175dec2e9566-kube-api-access-wktld\") on node \"addons-230560\" DevicePath \"\""
	Nov 02 13:15:38 addons-230560 kubelet[1285]: I1102 13:15:38.043140    1285 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fcc604782c6f5c9b1c6e31ec65796d098769615c686c997f1888c9ec73a39ee8"
	Nov 02 13:15:40 addons-230560 kubelet[1285]: I1102 13:15:40.084878    1285 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-gnxtb" podStartSLOduration=2.819706865 podStartE2EDuration="56.084845901s" podCreationTimestamp="2025-11-02 13:14:44 +0000 UTC" firstStartedPulling="2025-11-02 13:14:45.783776578 +0000 UTC m=+48.521924401" lastFinishedPulling="2025-11-02 13:15:39.048915613 +0000 UTC m=+101.787063437" observedRunningTime="2025-11-02 13:15:40.084600228 +0000 UTC m=+102.822748151" watchObservedRunningTime="2025-11-02 13:15:40.084845901 +0000 UTC m=+102.822993733"
	Nov 02 13:15:41 addons-230560 kubelet[1285]: I1102 13:15:41.376410    1285 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f254a1a5-39b8-4917-82ee-7d15b445b398" path="/var/lib/kubelet/pods/f254a1a5-39b8-4917-82ee-7d15b445b398/volumes"
	Nov 02 13:15:48 addons-230560 kubelet[1285]: E1102 13:15:48.431029    1285 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Nov 02 13:15:48 addons-230560 kubelet[1285]: E1102 13:15:48.431630    1285 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ad1063de-4f14-47bb-a909-fea786b4406a-gcr-creds podName:ad1063de-4f14-47bb-a909-fea786b4406a nodeName:}" failed. No retries permitted until 2025-11-02 13:16:52.431608124 +0000 UTC m=+175.169755948 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/ad1063de-4f14-47bb-a909-fea786b4406a-gcr-creds") pod "registry-creds-764b6fb674-5ssmw" (UID: "ad1063de-4f14-47bb-a909-fea786b4406a") : secret "registry-creds-gcr" not found
	Nov 02 13:15:56 addons-230560 kubelet[1285]: I1102 13:15:56.157878    1285 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-675c5ddd98-vlthl" podStartSLOduration=101.706640535 podStartE2EDuration="1m48.157860077s" podCreationTimestamp="2025-11-02 13:14:08 +0000 UTC" firstStartedPulling="2025-11-02 13:15:48.702202906 +0000 UTC m=+111.440350730" lastFinishedPulling="2025-11-02 13:15:55.153422449 +0000 UTC m=+117.891570272" observedRunningTime="2025-11-02 13:15:56.156296523 +0000 UTC m=+118.894444355" watchObservedRunningTime="2025-11-02 13:15:56.157860077 +0000 UTC m=+118.896007909"
	Nov 02 13:15:57 addons-230560 kubelet[1285]: I1102 13:15:57.428174    1285 scope.go:117] "RemoveContainer" containerID="d4044197c1ad9302e0f3e194abc53ff3a34891a362f5fb9ea88baebb951a49b3"
	Nov 02 13:15:57 addons-230560 kubelet[1285]: I1102 13:15:57.443536    1285 scope.go:117] "RemoveContainer" containerID="9a2e3086193bb0660c2b0e0415e733db4a1cd0765424f34b8279d8fa33063f57"
	Nov 02 13:15:57 addons-230560 kubelet[1285]: E1102 13:15:57.534818    1285 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/9a95b178bc3b2bc79cebcb369a84d8deb4f9da58bb52140c9cf3be50517cb8d6/diff" to get inode usage: stat /var/lib/containers/storage/overlay/9a95b178bc3b2bc79cebcb369a84d8deb4f9da58bb52140c9cf3be50517cb8d6/diff: no such file or directory, extraDiskErr: <nil>
	Nov 02 13:15:57 addons-230560 kubelet[1285]: E1102 13:15:57.558014    1285 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/b7b1e8412bc13cad1f4f3e8e9c7516c0099ae64402993eee09f1721dab6569f8/diff" to get inode usage: stat /var/lib/containers/storage/overlay/b7b1e8412bc13cad1f4f3e8e9c7516c0099ae64402993eee09f1721dab6569f8/diff: no such file or directory, extraDiskErr: <nil>
	Nov 02 13:15:57 addons-230560 kubelet[1285]: E1102 13:15:57.567230    1285 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/aa809b2048c01f73caa16161fca4f782aa129c1a02020c28b86398df2e83583a/diff" to get inode usage: stat /var/lib/containers/storage/overlay/aa809b2048c01f73caa16161fca4f782aa129c1a02020c28b86398df2e83583a/diff: no such file or directory, extraDiskErr: could not stat "/var/log/pods/gcp-auth_gcp-auth-certs-patch-8gtct_f254a1a5-39b8-4917-82ee-7d15b445b398/patch/1.log" to get inode usage: stat /var/log/pods/gcp-auth_gcp-auth-certs-patch-8gtct_f254a1a5-39b8-4917-82ee-7d15b445b398/patch/1.log: no such file or directory
	Nov 02 13:15:59 addons-230560 kubelet[1285]: I1102 13:15:59.528026    1285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdt92\" (UniqueName: \"kubernetes.io/projected/e44da318-9eb6-4f4c-971d-b08f91cec38e-kube-api-access-cdt92\") pod \"busybox\" (UID: \"e44da318-9eb6-4f4c-971d-b08f91cec38e\") " pod="default/busybox"
	Nov 02 13:15:59 addons-230560 kubelet[1285]: I1102 13:15:59.528095    1285 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/e44da318-9eb6-4f4c-971d-b08f91cec38e-gcp-creds\") pod \"busybox\" (UID: \"e44da318-9eb6-4f4c-971d-b08f91cec38e\") " pod="default/busybox"
	Nov 02 13:16:01 addons-230560 kubelet[1285]: I1102 13:16:01.374423    1285 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-qlm8d" secret="" err="secret \"gcp-auth\" not found"
	Nov 02 13:16:07 addons-230560 kubelet[1285]: I1102 13:16:07.163778    1285 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=6.034326745 podStartE2EDuration="8.163746496s" podCreationTimestamp="2025-11-02 13:15:59 +0000 UTC" firstStartedPulling="2025-11-02 13:15:59.77048724 +0000 UTC m=+122.508635064" lastFinishedPulling="2025-11-02 13:16:01.899906991 +0000 UTC m=+124.638054815" observedRunningTime="2025-11-02 13:16:02.199422836 +0000 UTC m=+124.937570668" watchObservedRunningTime="2025-11-02 13:16:07.163746496 +0000 UTC m=+129.901894328"
	
	
	==> storage-provisioner [7c311915f4fbc67516c8e9c0534f2b294964f9597b308ef3f1372ad8d0e1b2d5] <==
	W1102 13:15:44.282913       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:15:46.286249       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:15:46.290957       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:15:48.293767       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:15:48.306850       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:15:50.311179       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:15:50.318236       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:15:52.320910       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:15:52.325424       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:15:54.328212       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:15:54.336021       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:15:56.339302       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:15:56.343871       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:15:58.348369       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:15:58.359109       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:16:00.364997       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:16:00.372905       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:16:02.375949       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:16:02.381935       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:16:04.385487       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:16:04.390392       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:16:06.397300       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:16:06.404671       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:16:08.409349       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:16:08.414700       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
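
Three diagnostics stand out in the logs above, none fatal on its own: the kube-proxy warning that nodePortAddresses is unset, the kubelet's 1m4s backoff on the missing "registry-creds-gcr" secret, and the storage-provisioner's v1 Endpoints deprecation warnings. A minimal sketch of how each could be inspected from the host, assuming only the context name taken from this run:

	# kube-proxy: view the kubeadm-managed config, where nodePortAddresses could be
	# set to "primary" as the warning itself suggests
	kubectl --context addons-230560 -n kube-system get configmap kube-proxy -o yaml
	# kubelet: the gcr-creds volume mount keeps retrying until this secret exists
	kubectl --context addons-230560 -n kube-system get secret registry-creds-gcr
	# storage-provisioner: it still watches v1 Endpoints; the replacement
	# EndpointSlice objects are already being served
	kubectl --context addons-230560 get endpointslices -A
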
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-230560 -n addons-230560
helpers_test.go:269: (dbg) Run:  kubectl --context addons-230560 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-nh5wk ingress-nginx-admission-patch-qdswx registry-creds-764b6fb674-5ssmw
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-230560 describe pod ingress-nginx-admission-create-nh5wk ingress-nginx-admission-patch-qdswx registry-creds-764b6fb674-5ssmw
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-230560 describe pod ingress-nginx-admission-create-nh5wk ingress-nginx-admission-patch-qdswx registry-creds-764b6fb674-5ssmw: exit status 1 (92.545134ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-nh5wk" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-qdswx" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-5ssmw" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-230560 describe pod ingress-nginx-admission-create-nh5wk ingress-nginx-admission-patch-qdswx registry-creds-764b6fb674-5ssmw: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-230560 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-230560 addons disable headlamp --alsologtostderr -v=1: exit status 11 (267.06489ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1102 13:16:11.268049  302666 out.go:360] Setting OutFile to fd 1 ...
	I1102 13:16:11.268823  302666 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:16:11.268837  302666 out.go:374] Setting ErrFile to fd 2...
	I1102 13:16:11.268842  302666 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:16:11.270077  302666 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-293314/.minikube/bin
	I1102 13:16:11.270390  302666 mustload.go:66] Loading cluster: addons-230560
	I1102 13:16:11.270808  302666 config.go:182] Loaded profile config "addons-230560": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:16:11.270828  302666 addons.go:607] checking whether the cluster is paused
	I1102 13:16:11.270933  302666 config.go:182] Loaded profile config "addons-230560": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:16:11.270978  302666 host.go:66] Checking if "addons-230560" exists ...
	I1102 13:16:11.271431  302666 cli_runner.go:164] Run: docker container inspect addons-230560 --format={{.State.Status}}
	I1102 13:16:11.294072  302666 ssh_runner.go:195] Run: systemctl --version
	I1102 13:16:11.294130  302666 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-230560
	I1102 13:16:11.317874  302666 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/addons-230560/id_rsa Username:docker}
	I1102 13:16:11.421399  302666 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1102 13:16:11.421478  302666 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1102 13:16:11.454497  302666 cri.go:89] found id: "59d3d49e880a7e09fb4d9be850df44733d6d5116185f5d62db1b5c126b574e0b"
	I1102 13:16:11.454568  302666 cri.go:89] found id: "495997c964878856129fa01c98380dbe27e0d3e3399d552d965363043c0ed285"
	I1102 13:16:11.454590  302666 cri.go:89] found id: "92ee410347c1fecfc99fa6c734d7ea23c7a537dc02964ee119f8cc717fcef3e2"
	I1102 13:16:11.454642  302666 cri.go:89] found id: "14fe9bc0e4e3fca54005005e2faed708854fa4e45837404cf9bd640d6b5e2de6"
	I1102 13:16:11.454660  302666 cri.go:89] found id: "849382b87b03aa7df7b3bd0d7677466f19027eeb542e35e25286f1e8249c940e"
	I1102 13:16:11.454689  302666 cri.go:89] found id: "de7641522a90557a5bf20f6e7fc608045762d4951eef39028dd344fa1ec0e246"
	I1102 13:16:11.454698  302666 cri.go:89] found id: "ce226e80e176fd107a1fd4e99d0423900d376d659984557fa242d51fe29175f6"
	I1102 13:16:11.454702  302666 cri.go:89] found id: "43495555e2c69ab9b146d21dd528f268dcc6b5277bef46a2cdd8aac98ed01981"
	I1102 13:16:11.454706  302666 cri.go:89] found id: "23d26c5efd413a919fa01dc11c652b236e497eb2943a1a1cfaf21109a227fdf8"
	I1102 13:16:11.454721  302666 cri.go:89] found id: "f4000d22ba555b95620554ea649b6b0e65ff2c8de55597628a09a4936558b721"
	I1102 13:16:11.454728  302666 cri.go:89] found id: "571d698a41a0bf933525b4655374feb95afed1edb2640617ab7511cce65f0776"
	I1102 13:16:11.454732  302666 cri.go:89] found id: "b05b32f995002607af838c0a5ffed270958eaf8c7f841b88122803f35d8d2015"
	I1102 13:16:11.454735  302666 cri.go:89] found id: "ece119ee391be38c1a4f223d48708f601e4910a7734c54cbe59f4c38812974b5"
	I1102 13:16:11.454738  302666 cri.go:89] found id: "7d130b18d8ef12edee3e0d7b593a71e0c4b5690b982edfbbf83860e1b5d40c73"
	I1102 13:16:11.454741  302666 cri.go:89] found id: "01cc86f91cc933f1117d93925d4304fd9b0729b04f70bdfda8a3027baef7c8e9"
	I1102 13:16:11.454746  302666 cri.go:89] found id: "7c311915f4fbc67516c8e9c0534f2b294964f9597b308ef3f1372ad8d0e1b2d5"
	I1102 13:16:11.454758  302666 cri.go:89] found id: "2d7e91ed3fc10a735909e92c3d70b5422345ba649e0f465bf27dbb923af7877c"
	I1102 13:16:11.454762  302666 cri.go:89] found id: "b8f72f36b8b681e6188a6ae20fbb9399b5a1bba3a9e3fa05f0101b5f7bd14aac"
	I1102 13:16:11.454765  302666 cri.go:89] found id: "7c3129e8902e2ba546ec94fea95b907a80a88b9f19819ccc547d8e7cd7ddae43"
	I1102 13:16:11.454768  302666 cri.go:89] found id: "ba2b8cd401ace9132335713de0f6619fc89d02ed1a60281902f918001c3a9bc6"
	I1102 13:16:11.454774  302666 cri.go:89] found id: "ae6a81713fca42870850a9a5e0a86e40858cbf49ccdf8f4b701bb7c58d5b250d"
	I1102 13:16:11.454778  302666 cri.go:89] found id: "e520da42d44eee8e7e351ea85bd1e8a1fec19b3c33ded4f2a1188baef7b927e3"
	I1102 13:16:11.454780  302666 cri.go:89] found id: "47bfba99e6f299e3b3448bc8864faaedc77b8f94e548ef086dc4f5981ae0360a"
	I1102 13:16:11.454783  302666 cri.go:89] found id: ""
	I1102 13:16:11.454842  302666 ssh_runner.go:195] Run: sudo runc list -f json
	I1102 13:16:11.469706  302666 out.go:203] 
	W1102 13:16:11.472706  302666 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T13:16:11Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T13:16:11Z" level=error msg="open /run/runc: no such file or directory"
	
	W1102 13:16:11.472812  302666 out.go:285] * 
	* 
	W1102 13:16:11.479221  302666 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1102 13:16:11.482262  302666 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-arm64 -p addons-230560 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (3.17s)
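
Every failed "addons disable" in this report follows the pattern above: the paused-state check lists kube-system containers through crictl successfully, then shells out to "sudo runc list -f json", which exits 1 because /run/runc does not exist on this crio node. A minimal reproduction sketch against the same profile, assuming the node is still up:

	# The crictl listing succeeds, so the container runtime itself is healthy:
	minikube -p addons-230560 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# The follow-up runc check fails because the default runc state directory is absent:
	minikube -p addons-230560 ssh -- sudo runc list -f json
	minikube -p addons-230560 ssh -- ls -d /run/runc
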

                                                
                                    
TestAddons/parallel/CloudSpanner (6.28s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-86bd5cbb97-5rtv5" [19801f4c-5f26-46d2-893d-cd0c602f2457] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003447108s
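For reference, the label wait performed by the harness above corresponds roughly to the following kubectl invocation (a sketch; not what the Go helper literally runs):

	kubectl --context addons-230560 -n default wait pod -l app=cloud-spanner-emulator --for=condition=Ready --timeout=6m0s
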
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-230560 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-230560 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (271.415761ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1102 13:17:32.988353  304656 out.go:360] Setting OutFile to fd 1 ...
	I1102 13:17:32.990039  304656 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:17:32.990064  304656 out.go:374] Setting ErrFile to fd 2...
	I1102 13:17:32.990070  304656 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:17:32.990357  304656 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-293314/.minikube/bin
	I1102 13:17:32.990718  304656 mustload.go:66] Loading cluster: addons-230560
	I1102 13:17:32.991107  304656 config.go:182] Loaded profile config "addons-230560": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:17:32.991128  304656 addons.go:607] checking whether the cluster is paused
	I1102 13:17:32.991237  304656 config.go:182] Loaded profile config "addons-230560": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:17:32.991257  304656 host.go:66] Checking if "addons-230560" exists ...
	I1102 13:17:32.991766  304656 cli_runner.go:164] Run: docker container inspect addons-230560 --format={{.State.Status}}
	I1102 13:17:33.021654  304656 ssh_runner.go:195] Run: systemctl --version
	I1102 13:17:33.021704  304656 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-230560
	I1102 13:17:33.041972  304656 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/addons-230560/id_rsa Username:docker}
	I1102 13:17:33.149382  304656 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1102 13:17:33.149481  304656 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1102 13:17:33.180430  304656 cri.go:89] found id: "59d3d49e880a7e09fb4d9be850df44733d6d5116185f5d62db1b5c126b574e0b"
	I1102 13:17:33.180452  304656 cri.go:89] found id: "495997c964878856129fa01c98380dbe27e0d3e3399d552d965363043c0ed285"
	I1102 13:17:33.180457  304656 cri.go:89] found id: "92ee410347c1fecfc99fa6c734d7ea23c7a537dc02964ee119f8cc717fcef3e2"
	I1102 13:17:33.180461  304656 cri.go:89] found id: "14fe9bc0e4e3fca54005005e2faed708854fa4e45837404cf9bd640d6b5e2de6"
	I1102 13:17:33.180465  304656 cri.go:89] found id: "849382b87b03aa7df7b3bd0d7677466f19027eeb542e35e25286f1e8249c940e"
	I1102 13:17:33.180468  304656 cri.go:89] found id: "de7641522a90557a5bf20f6e7fc608045762d4951eef39028dd344fa1ec0e246"
	I1102 13:17:33.180472  304656 cri.go:89] found id: "ce226e80e176fd107a1fd4e99d0423900d376d659984557fa242d51fe29175f6"
	I1102 13:17:33.180475  304656 cri.go:89] found id: "43495555e2c69ab9b146d21dd528f268dcc6b5277bef46a2cdd8aac98ed01981"
	I1102 13:17:33.180478  304656 cri.go:89] found id: "23d26c5efd413a919fa01dc11c652b236e497eb2943a1a1cfaf21109a227fdf8"
	I1102 13:17:33.180488  304656 cri.go:89] found id: "f4000d22ba555b95620554ea649b6b0e65ff2c8de55597628a09a4936558b721"
	I1102 13:17:33.180492  304656 cri.go:89] found id: "571d698a41a0bf933525b4655374feb95afed1edb2640617ab7511cce65f0776"
	I1102 13:17:33.180495  304656 cri.go:89] found id: "b05b32f995002607af838c0a5ffed270958eaf8c7f841b88122803f35d8d2015"
	I1102 13:17:33.180498  304656 cri.go:89] found id: "ece119ee391be38c1a4f223d48708f601e4910a7734c54cbe59f4c38812974b5"
	I1102 13:17:33.180501  304656 cri.go:89] found id: "7d130b18d8ef12edee3e0d7b593a71e0c4b5690b982edfbbf83860e1b5d40c73"
	I1102 13:17:33.180505  304656 cri.go:89] found id: "01cc86f91cc933f1117d93925d4304fd9b0729b04f70bdfda8a3027baef7c8e9"
	I1102 13:17:33.180511  304656 cri.go:89] found id: "7c311915f4fbc67516c8e9c0534f2b294964f9597b308ef3f1372ad8d0e1b2d5"
	I1102 13:17:33.180518  304656 cri.go:89] found id: "2d7e91ed3fc10a735909e92c3d70b5422345ba649e0f465bf27dbb923af7877c"
	I1102 13:17:33.180524  304656 cri.go:89] found id: "b8f72f36b8b681e6188a6ae20fbb9399b5a1bba3a9e3fa05f0101b5f7bd14aac"
	I1102 13:17:33.180527  304656 cri.go:89] found id: "7c3129e8902e2ba546ec94fea95b907a80a88b9f19819ccc547d8e7cd7ddae43"
	I1102 13:17:33.180530  304656 cri.go:89] found id: "ba2b8cd401ace9132335713de0f6619fc89d02ed1a60281902f918001c3a9bc6"
	I1102 13:17:33.180534  304656 cri.go:89] found id: "ae6a81713fca42870850a9a5e0a86e40858cbf49ccdf8f4b701bb7c58d5b250d"
	I1102 13:17:33.180541  304656 cri.go:89] found id: "e520da42d44eee8e7e351ea85bd1e8a1fec19b3c33ded4f2a1188baef7b927e3"
	I1102 13:17:33.180544  304656 cri.go:89] found id: "47bfba99e6f299e3b3448bc8864faaedc77b8f94e548ef086dc4f5981ae0360a"
	I1102 13:17:33.180547  304656 cri.go:89] found id: ""
	I1102 13:17:33.180599  304656 ssh_runner.go:195] Run: sudo runc list -f json
	I1102 13:17:33.195649  304656 out.go:203] 
	W1102 13:17:33.198599  304656 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T13:17:33Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T13:17:33Z" level=error msg="open /run/runc: no such file or directory"
	
	W1102 13:17:33.198732  304656 out.go:285] * 
	* 
	W1102 13:17:33.205225  304656 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1102 13:17:33.208246  304656 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-arm64 -p addons-230560 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (6.28s)

                                                
                                    
TestAddons/parallel/LocalPath (8.4s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-230560 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-230560 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-230560 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-230560 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-230560 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-230560 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-230560 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [314ba22a-0f74-4b5f-b747-bf2732731896] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [314ba22a-0f74-4b5f-b747-bf2732731896] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [314ba22a-0f74-4b5f-b747-bf2732731896] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.00330297s
addons_test.go:967: (dbg) Run:  kubectl --context addons-230560 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-arm64 -p addons-230560 ssh "cat /opt/local-path-provisioner/pvc-1b5bd828-581e-49b1-bd61-f61335a71fd0_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-230560 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-230560 delete pvc test-pvc
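The claim applied at the top of this test is not reproduced in the report; a minimal sketch of a PVC that the rancher local-path provisioner would bind the same way (storage class name and requested size are assumptions, not the actual contents of testdata/storage-provisioner-rancher/pvc.yaml):

	kubectl --context addons-230560 apply -f - <<EOF
	apiVersion: v1
	kind: PersistentVolumeClaim
	metadata:
	  name: test-pvc
	spec:
	  storageClassName: local-path   # assumed; the provisioner's default class name
	  accessModes: ["ReadWriteOnce"]
	  resources:
	    requests:
	      storage: 64Mi              # assumed size
	EOF
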
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-230560 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-230560 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (269.780935ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1102 13:17:26.716510  304544 out.go:360] Setting OutFile to fd 1 ...
	I1102 13:17:26.717315  304544 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:17:26.717350  304544 out.go:374] Setting ErrFile to fd 2...
	I1102 13:17:26.717374  304544 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:17:26.717698  304544 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-293314/.minikube/bin
	I1102 13:17:26.718063  304544 mustload.go:66] Loading cluster: addons-230560
	I1102 13:17:26.718505  304544 config.go:182] Loaded profile config "addons-230560": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:17:26.718546  304544 addons.go:607] checking whether the cluster is paused
	I1102 13:17:26.718728  304544 config.go:182] Loaded profile config "addons-230560": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:17:26.718764  304544 host.go:66] Checking if "addons-230560" exists ...
	I1102 13:17:26.719254  304544 cli_runner.go:164] Run: docker container inspect addons-230560 --format={{.State.Status}}
	I1102 13:17:26.740589  304544 ssh_runner.go:195] Run: systemctl --version
	I1102 13:17:26.740642  304544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-230560
	I1102 13:17:26.757199  304544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/addons-230560/id_rsa Username:docker}
	I1102 13:17:26.865236  304544 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1102 13:17:26.865407  304544 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1102 13:17:26.895420  304544 cri.go:89] found id: "59d3d49e880a7e09fb4d9be850df44733d6d5116185f5d62db1b5c126b574e0b"
	I1102 13:17:26.895444  304544 cri.go:89] found id: "495997c964878856129fa01c98380dbe27e0d3e3399d552d965363043c0ed285"
	I1102 13:17:26.895449  304544 cri.go:89] found id: "92ee410347c1fecfc99fa6c734d7ea23c7a537dc02964ee119f8cc717fcef3e2"
	I1102 13:17:26.895454  304544 cri.go:89] found id: "14fe9bc0e4e3fca54005005e2faed708854fa4e45837404cf9bd640d6b5e2de6"
	I1102 13:17:26.895457  304544 cri.go:89] found id: "849382b87b03aa7df7b3bd0d7677466f19027eeb542e35e25286f1e8249c940e"
	I1102 13:17:26.895461  304544 cri.go:89] found id: "de7641522a90557a5bf20f6e7fc608045762d4951eef39028dd344fa1ec0e246"
	I1102 13:17:26.895464  304544 cri.go:89] found id: "ce226e80e176fd107a1fd4e99d0423900d376d659984557fa242d51fe29175f6"
	I1102 13:17:26.895467  304544 cri.go:89] found id: "43495555e2c69ab9b146d21dd528f268dcc6b5277bef46a2cdd8aac98ed01981"
	I1102 13:17:26.895470  304544 cri.go:89] found id: "23d26c5efd413a919fa01dc11c652b236e497eb2943a1a1cfaf21109a227fdf8"
	I1102 13:17:26.895479  304544 cri.go:89] found id: "f4000d22ba555b95620554ea649b6b0e65ff2c8de55597628a09a4936558b721"
	I1102 13:17:26.895483  304544 cri.go:89] found id: "571d698a41a0bf933525b4655374feb95afed1edb2640617ab7511cce65f0776"
	I1102 13:17:26.895487  304544 cri.go:89] found id: "b05b32f995002607af838c0a5ffed270958eaf8c7f841b88122803f35d8d2015"
	I1102 13:17:26.895490  304544 cri.go:89] found id: "ece119ee391be38c1a4f223d48708f601e4910a7734c54cbe59f4c38812974b5"
	I1102 13:17:26.895494  304544 cri.go:89] found id: "7d130b18d8ef12edee3e0d7b593a71e0c4b5690b982edfbbf83860e1b5d40c73"
	I1102 13:17:26.895531  304544 cri.go:89] found id: "01cc86f91cc933f1117d93925d4304fd9b0729b04f70bdfda8a3027baef7c8e9"
	I1102 13:17:26.895546  304544 cri.go:89] found id: "7c311915f4fbc67516c8e9c0534f2b294964f9597b308ef3f1372ad8d0e1b2d5"
	I1102 13:17:26.895550  304544 cri.go:89] found id: "2d7e91ed3fc10a735909e92c3d70b5422345ba649e0f465bf27dbb923af7877c"
	I1102 13:17:26.895555  304544 cri.go:89] found id: "b8f72f36b8b681e6188a6ae20fbb9399b5a1bba3a9e3fa05f0101b5f7bd14aac"
	I1102 13:17:26.895559  304544 cri.go:89] found id: "7c3129e8902e2ba546ec94fea95b907a80a88b9f19819ccc547d8e7cd7ddae43"
	I1102 13:17:26.895563  304544 cri.go:89] found id: "ba2b8cd401ace9132335713de0f6619fc89d02ed1a60281902f918001c3a9bc6"
	I1102 13:17:26.895567  304544 cri.go:89] found id: "ae6a81713fca42870850a9a5e0a86e40858cbf49ccdf8f4b701bb7c58d5b250d"
	I1102 13:17:26.895579  304544 cri.go:89] found id: "e520da42d44eee8e7e351ea85bd1e8a1fec19b3c33ded4f2a1188baef7b927e3"
	I1102 13:17:26.895583  304544 cri.go:89] found id: "47bfba99e6f299e3b3448bc8864faaedc77b8f94e548ef086dc4f5981ae0360a"
	I1102 13:17:26.895586  304544 cri.go:89] found id: ""
	I1102 13:17:26.895635  304544 ssh_runner.go:195] Run: sudo runc list -f json
	I1102 13:17:26.912516  304544 out.go:203] 
	W1102 13:17:26.916268  304544 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T13:17:26Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T13:17:26Z" level=error msg="open /run/runc: no such file or directory"
	
	W1102 13:17:26.916300  304544 out.go:285] * 
	* 
	W1102 13:17:26.922825  304544 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1102 13:17:26.926365  304544 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-arm64 -p addons-230560 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (8.40s)
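Analysis note: every MK_ADDON_DISABLE_PAUSED failure in this run has the same proximate cause. Before disabling an addon, minikube checks whether the cluster is paused by running `sudo runc list -f json` on the node, and on this CRI-O node /run/runc does not exist, so the check itself exits non-zero. A plausible explanation (an assumption, not confirmed by these logs) is that CRI-O is using crun rather than runc as its default OCI runtime, so runc never creates its state directory. A minimal sketch for verifying this by hand, reusing the addons-230560 profile and the crictl invocation from the logs above:

	# Which OCI runtime is CRI-O configured with? (default_runtime in its TOML config)
	minikube -p addons-230560 ssh -- sudo crio config | grep -A 2 default_runtime
	# The runc state directory that the paused check depends on:
	minikube -p addons-230560 ssh -- ls /run/runc
	# The kube-system containers are still visible through the CRI itself:
	minikube -p addons-230560 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system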

TestAddons/parallel/NvidiaDevicePlugin (6.29s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-qkqx4" [8db4aae3-2657-4773-b5b6-62fb681edaa0] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.005244941s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-230560 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-230560 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (283.084931ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1102 13:17:13.008175  304181 out.go:360] Setting OutFile to fd 1 ...
	I1102 13:17:13.009450  304181 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:17:13.009476  304181 out.go:374] Setting ErrFile to fd 2...
	I1102 13:17:13.009483  304181 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:17:13.009916  304181 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-293314/.minikube/bin
	I1102 13:17:13.010320  304181 mustload.go:66] Loading cluster: addons-230560
	I1102 13:17:13.011357  304181 config.go:182] Loaded profile config "addons-230560": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:17:13.011378  304181 addons.go:607] checking whether the cluster is paused
	I1102 13:17:13.011566  304181 config.go:182] Loaded profile config "addons-230560": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:17:13.011588  304181 host.go:66] Checking if "addons-230560" exists ...
	I1102 13:17:13.012380  304181 cli_runner.go:164] Run: docker container inspect addons-230560 --format={{.State.Status}}
	I1102 13:17:13.038105  304181 ssh_runner.go:195] Run: systemctl --version
	I1102 13:17:13.038173  304181 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-230560
	I1102 13:17:13.058759  304181 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/addons-230560/id_rsa Username:docker}
	I1102 13:17:13.173941  304181 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1102 13:17:13.174100  304181 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1102 13:17:13.204337  304181 cri.go:89] found id: "59d3d49e880a7e09fb4d9be850df44733d6d5116185f5d62db1b5c126b574e0b"
	I1102 13:17:13.204361  304181 cri.go:89] found id: "495997c964878856129fa01c98380dbe27e0d3e3399d552d965363043c0ed285"
	I1102 13:17:13.204366  304181 cri.go:89] found id: "92ee410347c1fecfc99fa6c734d7ea23c7a537dc02964ee119f8cc717fcef3e2"
	I1102 13:17:13.204371  304181 cri.go:89] found id: "14fe9bc0e4e3fca54005005e2faed708854fa4e45837404cf9bd640d6b5e2de6"
	I1102 13:17:13.204375  304181 cri.go:89] found id: "849382b87b03aa7df7b3bd0d7677466f19027eeb542e35e25286f1e8249c940e"
	I1102 13:17:13.204378  304181 cri.go:89] found id: "de7641522a90557a5bf20f6e7fc608045762d4951eef39028dd344fa1ec0e246"
	I1102 13:17:13.204381  304181 cri.go:89] found id: "ce226e80e176fd107a1fd4e99d0423900d376d659984557fa242d51fe29175f6"
	I1102 13:17:13.204384  304181 cri.go:89] found id: "43495555e2c69ab9b146d21dd528f268dcc6b5277bef46a2cdd8aac98ed01981"
	I1102 13:17:13.204387  304181 cri.go:89] found id: "23d26c5efd413a919fa01dc11c652b236e497eb2943a1a1cfaf21109a227fdf8"
	I1102 13:17:13.204394  304181 cri.go:89] found id: "f4000d22ba555b95620554ea649b6b0e65ff2c8de55597628a09a4936558b721"
	I1102 13:17:13.204398  304181 cri.go:89] found id: "571d698a41a0bf933525b4655374feb95afed1edb2640617ab7511cce65f0776"
	I1102 13:17:13.204401  304181 cri.go:89] found id: "b05b32f995002607af838c0a5ffed270958eaf8c7f841b88122803f35d8d2015"
	I1102 13:17:13.204410  304181 cri.go:89] found id: "ece119ee391be38c1a4f223d48708f601e4910a7734c54cbe59f4c38812974b5"
	I1102 13:17:13.204414  304181 cri.go:89] found id: "7d130b18d8ef12edee3e0d7b593a71e0c4b5690b982edfbbf83860e1b5d40c73"
	I1102 13:17:13.204417  304181 cri.go:89] found id: "01cc86f91cc933f1117d93925d4304fd9b0729b04f70bdfda8a3027baef7c8e9"
	I1102 13:17:13.204425  304181 cri.go:89] found id: "7c311915f4fbc67516c8e9c0534f2b294964f9597b308ef3f1372ad8d0e1b2d5"
	I1102 13:17:13.204436  304181 cri.go:89] found id: "2d7e91ed3fc10a735909e92c3d70b5422345ba649e0f465bf27dbb923af7877c"
	I1102 13:17:13.204442  304181 cri.go:89] found id: "b8f72f36b8b681e6188a6ae20fbb9399b5a1bba3a9e3fa05f0101b5f7bd14aac"
	I1102 13:17:13.204445  304181 cri.go:89] found id: "7c3129e8902e2ba546ec94fea95b907a80a88b9f19819ccc547d8e7cd7ddae43"
	I1102 13:17:13.204448  304181 cri.go:89] found id: "ba2b8cd401ace9132335713de0f6619fc89d02ed1a60281902f918001c3a9bc6"
	I1102 13:17:13.204454  304181 cri.go:89] found id: "ae6a81713fca42870850a9a5e0a86e40858cbf49ccdf8f4b701bb7c58d5b250d"
	I1102 13:17:13.204456  304181 cri.go:89] found id: "e520da42d44eee8e7e351ea85bd1e8a1fec19b3c33ded4f2a1188baef7b927e3"
	I1102 13:17:13.204459  304181 cri.go:89] found id: "47bfba99e6f299e3b3448bc8864faaedc77b8f94e548ef086dc4f5981ae0360a"
	I1102 13:17:13.204462  304181 cri.go:89] found id: ""
	I1102 13:17:13.204512  304181 ssh_runner.go:195] Run: sudo runc list -f json
	I1102 13:17:13.220182  304181 out.go:203] 
	W1102 13:17:13.223322  304181 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T13:17:13Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T13:17:13Z" level=error msg="open /run/runc: no such file or directory"
	
	W1102 13:17:13.223351  304181 out.go:285] * 
	* 
	W1102 13:17:13.229673  304181 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1102 13:17:13.232601  304181 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-arm64 -p addons-230560 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (6.29s)

TestAddons/parallel/Yakd (5.29s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-j4lzk" [33c0f56f-943b-4624-96d8-bee5d1268363] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004316841s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-230560 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-230560 addons disable yakd --alsologtostderr -v=1: exit status 11 (280.747024ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1102 13:17:18.299685  304251 out.go:360] Setting OutFile to fd 1 ...
	I1102 13:17:18.300526  304251 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:17:18.300586  304251 out.go:374] Setting ErrFile to fd 2...
	I1102 13:17:18.300609  304251 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:17:18.300972  304251 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-293314/.minikube/bin
	I1102 13:17:18.301328  304251 mustload.go:66] Loading cluster: addons-230560
	I1102 13:17:18.301757  304251 config.go:182] Loaded profile config "addons-230560": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:17:18.301804  304251 addons.go:607] checking whether the cluster is paused
	I1102 13:17:18.301934  304251 config.go:182] Loaded profile config "addons-230560": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:17:18.301969  304251 host.go:66] Checking if "addons-230560" exists ...
	I1102 13:17:18.302466  304251 cli_runner.go:164] Run: docker container inspect addons-230560 --format={{.State.Status}}
	I1102 13:17:18.319617  304251 ssh_runner.go:195] Run: systemctl --version
	I1102 13:17:18.319674  304251 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-230560
	I1102 13:17:18.340393  304251 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/addons-230560/id_rsa Username:docker}
	I1102 13:17:18.453137  304251 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1102 13:17:18.453243  304251 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1102 13:17:18.486532  304251 cri.go:89] found id: "59d3d49e880a7e09fb4d9be850df44733d6d5116185f5d62db1b5c126b574e0b"
	I1102 13:17:18.486655  304251 cri.go:89] found id: "495997c964878856129fa01c98380dbe27e0d3e3399d552d965363043c0ed285"
	I1102 13:17:18.486682  304251 cri.go:89] found id: "92ee410347c1fecfc99fa6c734d7ea23c7a537dc02964ee119f8cc717fcef3e2"
	I1102 13:17:18.486695  304251 cri.go:89] found id: "14fe9bc0e4e3fca54005005e2faed708854fa4e45837404cf9bd640d6b5e2de6"
	I1102 13:17:18.486700  304251 cri.go:89] found id: "849382b87b03aa7df7b3bd0d7677466f19027eeb542e35e25286f1e8249c940e"
	I1102 13:17:18.486704  304251 cri.go:89] found id: "de7641522a90557a5bf20f6e7fc608045762d4951eef39028dd344fa1ec0e246"
	I1102 13:17:18.486708  304251 cri.go:89] found id: "ce226e80e176fd107a1fd4e99d0423900d376d659984557fa242d51fe29175f6"
	I1102 13:17:18.486712  304251 cri.go:89] found id: "43495555e2c69ab9b146d21dd528f268dcc6b5277bef46a2cdd8aac98ed01981"
	I1102 13:17:18.486715  304251 cri.go:89] found id: "23d26c5efd413a919fa01dc11c652b236e497eb2943a1a1cfaf21109a227fdf8"
	I1102 13:17:18.486722  304251 cri.go:89] found id: "f4000d22ba555b95620554ea649b6b0e65ff2c8de55597628a09a4936558b721"
	I1102 13:17:18.486726  304251 cri.go:89] found id: "571d698a41a0bf933525b4655374feb95afed1edb2640617ab7511cce65f0776"
	I1102 13:17:18.486729  304251 cri.go:89] found id: "b05b32f995002607af838c0a5ffed270958eaf8c7f841b88122803f35d8d2015"
	I1102 13:17:18.486732  304251 cri.go:89] found id: "ece119ee391be38c1a4f223d48708f601e4910a7734c54cbe59f4c38812974b5"
	I1102 13:17:18.486735  304251 cri.go:89] found id: "7d130b18d8ef12edee3e0d7b593a71e0c4b5690b982edfbbf83860e1b5d40c73"
	I1102 13:17:18.486739  304251 cri.go:89] found id: "01cc86f91cc933f1117d93925d4304fd9b0729b04f70bdfda8a3027baef7c8e9"
	I1102 13:17:18.486765  304251 cri.go:89] found id: "7c311915f4fbc67516c8e9c0534f2b294964f9597b308ef3f1372ad8d0e1b2d5"
	I1102 13:17:18.486775  304251 cri.go:89] found id: "2d7e91ed3fc10a735909e92c3d70b5422345ba649e0f465bf27dbb923af7877c"
	I1102 13:17:18.486781  304251 cri.go:89] found id: "b8f72f36b8b681e6188a6ae20fbb9399b5a1bba3a9e3fa05f0101b5f7bd14aac"
	I1102 13:17:18.486784  304251 cri.go:89] found id: "7c3129e8902e2ba546ec94fea95b907a80a88b9f19819ccc547d8e7cd7ddae43"
	I1102 13:17:18.486787  304251 cri.go:89] found id: "ba2b8cd401ace9132335713de0f6619fc89d02ed1a60281902f918001c3a9bc6"
	I1102 13:17:18.486793  304251 cri.go:89] found id: "ae6a81713fca42870850a9a5e0a86e40858cbf49ccdf8f4b701bb7c58d5b250d"
	I1102 13:17:18.486796  304251 cri.go:89] found id: "e520da42d44eee8e7e351ea85bd1e8a1fec19b3c33ded4f2a1188baef7b927e3"
	I1102 13:17:18.486799  304251 cri.go:89] found id: "47bfba99e6f299e3b3448bc8864faaedc77b8f94e548ef086dc4f5981ae0360a"
	I1102 13:17:18.486801  304251 cri.go:89] found id: ""
	I1102 13:17:18.486854  304251 ssh_runner.go:195] Run: sudo runc list -f json
	I1102 13:17:18.508479  304251 out.go:203] 
	W1102 13:17:18.511211  304251 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T13:17:18Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T13:17:18Z" level=error msg="open /run/runc: no such file or directory"
	
	W1102 13:17:18.511229  304251 out.go:285] * 
	* 
	W1102 13:17:18.517668  304251 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1102 13:17:18.520699  304251 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-arm64 -p addons-230560 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (5.29s)

TestFunctional/parallel/ServiceCmdConnect (603.8s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-082350 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-082350 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-skmb2" [33cb6995-bc92-4922-b9f4-f4ca9f69abca] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-082350 -n functional-082350
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-11-02 13:32:59.906406093 +0000 UTC m=+1213.175024109
functional_test.go:1645: (dbg) Run:  kubectl --context functional-082350 describe po hello-node-connect-7d85dfc575-skmb2 -n default
functional_test.go:1645: (dbg) kubectl --context functional-082350 describe po hello-node-connect-7d85dfc575-skmb2 -n default:
Name:             hello-node-connect-7d85dfc575-skmb2
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-082350/192.168.49.2
Start Time:       Sun, 02 Nov 2025 13:22:59 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
  IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:
    Image:          kicbase/echo-server
    Image ID:
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hd7jr (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-hd7jr:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-skmb2 to functional-082350
  Normal   Pulling    7m6s (x5 over 9m59s)    kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     7m6s (x5 over 9m59s)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     7m6s (x5 over 9m59s)    kubelet            Error: ErrImagePull
  Normal   BackOff    4m51s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
  Warning  Failed     4m51s (x21 over 9m59s)  kubelet            Error: ImagePullBackOff
functional_test.go:1645: (dbg) Run:  kubectl --context functional-082350 logs hello-node-connect-7d85dfc575-skmb2 -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-082350 logs hello-node-connect-7d85dfc575-skmb2 -n default: exit status 1 (291.090197ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-skmb2" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1645: kubectl --context functional-082350 logs hello-node-connect-7d85dfc575-skmb2 -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
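Analysis note: the ErrImagePull events above are CRI-O's short-name policy at work. With short-name-mode = "enforcing" in /etc/containers/registries.conf, an unqualified reference such as kicbase/echo-server:latest is rejected whenever it could resolve to more than one configured registry. Fully qualifying the image sidesteps the ambiguity; a sketch, assuming the image is published on Docker Hub:

	kubectl --context functional-082350 create deployment hello-node-connect --image docker.io/kicbase/echo-server:latest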
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-082350 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-skmb2
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-082350/192.168.49.2
Start Time:       Sun, 02 Nov 2025 13:22:59 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
  IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:
    Image:          kicbase/echo-server
    Image ID:
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hd7jr (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-hd7jr:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-skmb2 to functional-082350
  Normal   Pulling    7m6s (x5 over 9m59s)    kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     7m6s (x5 over 9m59s)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     7m6s (x5 over 9m59s)    kubelet            Error: ErrImagePull
  Normal   BackOff    4m51s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
  Warning  Failed     4m51s (x21 over 9m59s)  kubelet            Error: ImagePullBackOff

functional_test.go:1618: (dbg) Run:  kubectl --context functional-082350 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-082350 logs -l app=hello-node-connect: exit status 1 (115.583152ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-skmb2" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1620: "kubectl --context functional-082350 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-082350 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.101.33.149
IPs:                      10.101.33.149
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30701/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
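Analysis note: Endpoints is empty because the Service's selector matches only the hello-node-connect pod, and a pod that never becomes Ready is excluded from the endpoint list, so NodePort 30701 has nothing to route to. Hypothetical follow-up commands (not part of the test) to confirm:

	kubectl --context functional-082350 get endpoints hello-node-connect
	kubectl --context functional-082350 get pods -l app=hello-node-connect -o wide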
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-082350
helpers_test.go:243: (dbg) docker inspect functional-082350:

-- stdout --
	[
	    {
	        "Id": "41f6f8f4cd1ec6452188be1fbcabbdb305dcfd050ce50d8d33de7825ab3dd916",
	        "Created": "2025-11-02T13:20:15.813210932Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 311164,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-02T13:20:15.889008513Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/41f6f8f4cd1ec6452188be1fbcabbdb305dcfd050ce50d8d33de7825ab3dd916/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/41f6f8f4cd1ec6452188be1fbcabbdb305dcfd050ce50d8d33de7825ab3dd916/hostname",
	        "HostsPath": "/var/lib/docker/containers/41f6f8f4cd1ec6452188be1fbcabbdb305dcfd050ce50d8d33de7825ab3dd916/hosts",
	        "LogPath": "/var/lib/docker/containers/41f6f8f4cd1ec6452188be1fbcabbdb305dcfd050ce50d8d33de7825ab3dd916/41f6f8f4cd1ec6452188be1fbcabbdb305dcfd050ce50d8d33de7825ab3dd916-json.log",
	        "Name": "/functional-082350",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-082350:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-082350",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "41f6f8f4cd1ec6452188be1fbcabbdb305dcfd050ce50d8d33de7825ab3dd916",
	                "LowerDir": "/var/lib/docker/overlay2/20f9e8d4bbca3b89240c1c099504baf53bdebe50d6b1d234e9748be437f74e80-init/diff:/var/lib/docker/overlay2/53058afe6acd7391f639f551e07f446f6a39a991b699aea18507cb21a1652e97/diff",
	                "MergedDir": "/var/lib/docker/overlay2/20f9e8d4bbca3b89240c1c099504baf53bdebe50d6b1d234e9748be437f74e80/merged",
	                "UpperDir": "/var/lib/docker/overlay2/20f9e8d4bbca3b89240c1c099504baf53bdebe50d6b1d234e9748be437f74e80/diff",
	                "WorkDir": "/var/lib/docker/overlay2/20f9e8d4bbca3b89240c1c099504baf53bdebe50d6b1d234e9748be437f74e80/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-082350",
	                "Source": "/var/lib/docker/volumes/functional-082350/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-082350",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-082350",
	                "name.minikube.sigs.k8s.io": "functional-082350",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cc88c22cc4c04b1f8075cee1260bc30a29121ad7ac22dd222e66783b002f574a",
	            "SandboxKey": "/var/run/docker/netns/cc88c22cc4c0",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33148"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33149"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33152"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33150"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33151"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-082350": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "d2:4b:77:d3:07:41",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "77bb48f4c24b549bdaee9322882f9232883504309b1a6474e0755a827ba3addc",
	                    "EndpointID": "db5c40d496c5ed0fb30a5e256cfae05e805e111bb110d47595724d0fe0749ce7",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-082350",
	                        "41f6f8f4cd1e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-082350 -n functional-082350
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-082350 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-082350 logs -n 25: (1.440307268s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                   ARGS                                                   │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cache   │ delete registry.k8s.io/pause:3.3                                                                         │ minikube          │ jenkins │ v1.37.0 │ 02 Nov 25 13:22 UTC │ 02 Nov 25 13:22 UTC │
	│ cache   │ list                                                                                                     │ minikube          │ jenkins │ v1.37.0 │ 02 Nov 25 13:22 UTC │ 02 Nov 25 13:22 UTC │
	│ ssh     │ functional-082350 ssh sudo crictl images                                                                 │ functional-082350 │ jenkins │ v1.37.0 │ 02 Nov 25 13:22 UTC │ 02 Nov 25 13:22 UTC │
	│ ssh     │ functional-082350 ssh sudo crictl rmi registry.k8s.io/pause:latest                                       │ functional-082350 │ jenkins │ v1.37.0 │ 02 Nov 25 13:22 UTC │ 02 Nov 25 13:22 UTC │
	│ ssh     │ functional-082350 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                  │ functional-082350 │ jenkins │ v1.37.0 │ 02 Nov 25 13:22 UTC │                     │
	│ cache   │ functional-082350 cache reload                                                                           │ functional-082350 │ jenkins │ v1.37.0 │ 02 Nov 25 13:22 UTC │ 02 Nov 25 13:22 UTC │
	│ ssh     │ functional-082350 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                  │ functional-082350 │ jenkins │ v1.37.0 │ 02 Nov 25 13:22 UTC │ 02 Nov 25 13:22 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                         │ minikube          │ jenkins │ v1.37.0 │ 02 Nov 25 13:22 UTC │ 02 Nov 25 13:22 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                      │ minikube          │ jenkins │ v1.37.0 │ 02 Nov 25 13:22 UTC │ 02 Nov 25 13:22 UTC │
	│ kubectl │ functional-082350 kubectl -- --context functional-082350 get pods                                        │ functional-082350 │ jenkins │ v1.37.0 │ 02 Nov 25 13:22 UTC │ 02 Nov 25 13:22 UTC │
	│ start   │ -p functional-082350 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all │ functional-082350 │ jenkins │ v1.37.0 │ 02 Nov 25 13:22 UTC │ 02 Nov 25 13:22 UTC │
	│ service │ invalid-svc -p functional-082350                                                                         │ functional-082350 │ jenkins │ v1.37.0 │ 02 Nov 25 13:22 UTC │                     │
	│ ssh     │ functional-082350 ssh echo hello                                                                         │ functional-082350 │ jenkins │ v1.37.0 │ 02 Nov 25 13:22 UTC │ 02 Nov 25 13:22 UTC │
	│ config  │ functional-082350 config unset cpus                                                                      │ functional-082350 │ jenkins │ v1.37.0 │ 02 Nov 25 13:22 UTC │ 02 Nov 25 13:22 UTC │
	│ config  │ functional-082350 config get cpus                                                                        │ functional-082350 │ jenkins │ v1.37.0 │ 02 Nov 25 13:22 UTC │                     │
	│ config  │ functional-082350 config set cpus 2                                                                      │ functional-082350 │ jenkins │ v1.37.0 │ 02 Nov 25 13:22 UTC │ 02 Nov 25 13:22 UTC │
	│ config  │ functional-082350 config get cpus                                                                        │ functional-082350 │ jenkins │ v1.37.0 │ 02 Nov 25 13:22 UTC │ 02 Nov 25 13:22 UTC │
	│ ssh     │ functional-082350 ssh cat /etc/hostname                                                                  │ functional-082350 │ jenkins │ v1.37.0 │ 02 Nov 25 13:22 UTC │ 02 Nov 25 13:22 UTC │
	│ config  │ functional-082350 config unset cpus                                                                      │ functional-082350 │ jenkins │ v1.37.0 │ 02 Nov 25 13:22 UTC │ 02 Nov 25 13:22 UTC │
	│ config  │ functional-082350 config get cpus                                                                        │ functional-082350 │ jenkins │ v1.37.0 │ 02 Nov 25 13:22 UTC │                     │
	│ tunnel  │ functional-082350 tunnel --alsologtostderr                                                               │ functional-082350 │ jenkins │ v1.37.0 │ 02 Nov 25 13:22 UTC │                     │
	│ tunnel  │ functional-082350 tunnel --alsologtostderr                                                               │ functional-082350 │ jenkins │ v1.37.0 │ 02 Nov 25 13:22 UTC │                     │
	│ tunnel  │ functional-082350 tunnel --alsologtostderr                                                               │ functional-082350 │ jenkins │ v1.37.0 │ 02 Nov 25 13:22 UTC │                     │
	│ addons  │ functional-082350 addons list                                                                            │ functional-082350 │ jenkins │ v1.37.0 │ 02 Nov 25 13:22 UTC │ 02 Nov 25 13:22 UTC │
	│ addons  │ functional-082350 addons list -o json                                                                    │ functional-082350 │ jenkins │ v1.37.0 │ 02 Nov 25 13:22 UTC │ 02 Nov 25 13:22 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/02 13:22:08
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1102 13:22:08.039979  315345 out.go:360] Setting OutFile to fd 1 ...
	I1102 13:22:08.040128  315345 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:22:08.040132  315345 out.go:374] Setting ErrFile to fd 2...
	I1102 13:22:08.040136  315345 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:22:08.040430  315345 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-293314/.minikube/bin
	I1102 13:22:08.040887  315345 out.go:368] Setting JSON to false
	I1102 13:22:08.041940  315345 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":7480,"bootTime":1762082248,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1102 13:22:08.042013  315345 start.go:143] virtualization:  
	I1102 13:22:08.045480  315345 out.go:179] * [functional-082350] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1102 13:22:08.049317  315345 notify.go:221] Checking for updates...
	I1102 13:22:08.052492  315345 out.go:179]   - MINIKUBE_LOCATION=21808
	I1102 13:22:08.055648  315345 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1102 13:22:08.058706  315345 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-293314/kubeconfig
	I1102 13:22:08.061625  315345 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-293314/.minikube
	I1102 13:22:08.064683  315345 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1102 13:22:08.067685  315345 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1102 13:22:08.071230  315345 config.go:182] Loaded profile config "functional-082350": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:22:08.071324  315345 driver.go:422] Setting default libvirt URI to qemu:///system
	I1102 13:22:08.104767  315345 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1102 13:22:08.104858  315345 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 13:22:08.163540  315345 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-11-02 13:22:08.153364602 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1102 13:22:08.163629  315345 docker.go:319] overlay module found
	I1102 13:22:08.166835  315345 out.go:179] * Using the docker driver based on existing profile
	I1102 13:22:08.169655  315345 start.go:309] selected driver: docker
	I1102 13:22:08.169666  315345 start.go:930] validating driver "docker" against &{Name:functional-082350 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-082350 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1102 13:22:08.169749  315345 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1102 13:22:08.169846  315345 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 13:22:08.224675  315345 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:true NGoroutines:65 SystemTime:2025-11-02 13:22:08.214743747 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1102 13:22:08.225108  315345 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1102 13:22:08.225134  315345 cni.go:84] Creating CNI manager for ""
	I1102 13:22:08.225183  315345 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1102 13:22:08.225227  315345 start.go:353] cluster config:
	{Name:functional-082350 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-082350 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1102 13:22:08.230374  315345 out.go:179] * Starting "functional-082350" primary control-plane node in "functional-082350" cluster
	I1102 13:22:08.233201  315345 cache.go:124] Beginning downloading kic base image for docker with crio
	I1102 13:22:08.236101  315345 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1102 13:22:08.239325  315345 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1102 13:22:08.239376  315345 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21808-293314/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1102 13:22:08.239384  315345 cache.go:59] Caching tarball of preloaded images
	I1102 13:22:08.239412  315345 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1102 13:22:08.239486  315345 preload.go:233] Found /home/jenkins/minikube-integration/21808-293314/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1102 13:22:08.239495  315345 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1102 13:22:08.239611  315345 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/functional-082350/config.json ...
	I1102 13:22:08.258637  315345 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1102 13:22:08.258648  315345 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1102 13:22:08.258665  315345 cache.go:233] Successfully downloaded all kic artifacts
	I1102 13:22:08.258686  315345 start.go:360] acquireMachinesLock for functional-082350: {Name:mkb3adc1281bd62d31202b0a43ecb0ef8c907ec7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1102 13:22:08.258749  315345 start.go:364] duration metric: took 47.196µs to acquireMachinesLock for "functional-082350"
	I1102 13:22:08.258768  315345 start.go:96] Skipping create...Using existing machine configuration
	I1102 13:22:08.258772  315345 fix.go:54] fixHost starting: 
	I1102 13:22:08.259034  315345 cli_runner.go:164] Run: docker container inspect functional-082350 --format={{.State.Status}}
	I1102 13:22:08.280443  315345 fix.go:112] recreateIfNeeded on functional-082350: state=Running err=<nil>
	W1102 13:22:08.280463  315345 fix.go:138] unexpected machine state, will restart: <nil>
	I1102 13:22:08.283628  315345 out.go:252] * Updating the running docker "functional-082350" container ...
	I1102 13:22:08.283656  315345 machine.go:94] provisionDockerMachine start ...
	I1102 13:22:08.283750  315345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-082350
	I1102 13:22:08.302969  315345 main.go:143] libmachine: Using SSH client type: native
	I1102 13:22:08.303296  315345 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1102 13:22:08.303303  315345 main.go:143] libmachine: About to run SSH command:
	hostname
	I1102 13:22:08.454507  315345 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-082350
	
	I1102 13:22:08.454521  315345 ubuntu.go:182] provisioning hostname "functional-082350"
	I1102 13:22:08.454581  315345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-082350
	I1102 13:22:08.472389  315345 main.go:143] libmachine: Using SSH client type: native
	I1102 13:22:08.472703  315345 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1102 13:22:08.472743  315345 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-082350 && echo "functional-082350" | sudo tee /etc/hostname
	I1102 13:22:08.636629  315345 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-082350
	
	I1102 13:22:08.636695  315345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-082350
	I1102 13:22:08.655962  315345 main.go:143] libmachine: Using SSH client type: native
	I1102 13:22:08.656272  315345 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1102 13:22:08.656287  315345 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-082350' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-082350/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-082350' | sudo tee -a /etc/hosts; 
				fi
			fi
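	The guard script above rewrites the 127.0.1.1 entry only when one already exists and appends one otherwise, so re-running it is harmless. A quick spot check from the host (illustrative, using the container name from this run):
		# confirm the node resolves its own hostname locally
		docker exec functional-082350 grep '^127.0.1.1' /etc/hosts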
	I1102 13:22:08.806956  315345 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1102 13:22:08.806973  315345 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21808-293314/.minikube CaCertPath:/home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21808-293314/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21808-293314/.minikube}
	I1102 13:22:08.807000  315345 ubuntu.go:190] setting up certificates
	I1102 13:22:08.807009  315345 provision.go:84] configureAuth start
	I1102 13:22:08.807074  315345 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-082350
	I1102 13:22:08.825764  315345 provision.go:143] copyHostCerts
	I1102 13:22:08.825823  315345 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-293314/.minikube/key.pem, removing ...
	I1102 13:22:08.825840  315345 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-293314/.minikube/key.pem
	I1102 13:22:08.825913  315345 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21808-293314/.minikube/key.pem (1675 bytes)
	I1102 13:22:08.826008  315345 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-293314/.minikube/ca.pem, removing ...
	I1102 13:22:08.826011  315345 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-293314/.minikube/ca.pem
	I1102 13:22:08.826040  315345 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21808-293314/.minikube/ca.pem (1082 bytes)
	I1102 13:22:08.826089  315345 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-293314/.minikube/cert.pem, removing ...
	I1102 13:22:08.826092  315345 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-293314/.minikube/cert.pem
	I1102 13:22:08.826113  315345 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21808-293314/.minikube/cert.pem (1123 bytes)
	I1102 13:22:08.826175  315345 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21808-293314/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca-key.pem org=jenkins.functional-082350 san=[127.0.0.1 192.168.49.2 functional-082350 localhost minikube]
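	The san=[...] list passed above is baked into the certificate's subjectAltName extension and can be read back with openssl (illustrative command against the path from this run):
		# list the DNS names and IPs the generated server cert is valid for
		openssl x509 -noout -text -in /home/jenkins/minikube-integration/21808-293314/.minikube/machines/server.pem | grep -A1 'Subject Alternative Name'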
	I1102 13:22:09.189068  315345 provision.go:177] copyRemoteCerts
	I1102 13:22:09.189120  315345 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1102 13:22:09.189170  315345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-082350
	I1102 13:22:09.212649  315345 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/functional-082350/id_rsa Username:docker}
	I1102 13:22:09.318393  315345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1102 13:22:09.338921  315345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1102 13:22:09.358419  315345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1102 13:22:09.378784  315345 provision.go:87] duration metric: took 571.761724ms to configureAuth
	I1102 13:22:09.378816  315345 ubuntu.go:206] setting minikube options for container-runtime
	I1102 13:22:09.379022  315345 config.go:182] Loaded profile config "functional-082350": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:22:09.379132  315345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-082350
	I1102 13:22:09.397047  315345 main.go:143] libmachine: Using SSH client type: native
	I1102 13:22:09.397352  315345 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33148 <nil> <nil>}
	I1102 13:22:09.397364  315345 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1102 13:22:14.784027  315345 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1102 13:22:14.784050  315345 machine.go:97] duration metric: took 6.500385s to provisionDockerMachine
	I1102 13:22:14.784063  315345 start.go:293] postStartSetup for "functional-082350" (driver="docker")
	I1102 13:22:14.784074  315345 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1102 13:22:14.784178  315345 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1102 13:22:14.784217  315345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-082350
	I1102 13:22:14.803286  315345 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/functional-082350/id_rsa Username:docker}
	I1102 13:22:14.911489  315345 ssh_runner.go:195] Run: cat /etc/os-release
	I1102 13:22:14.915312  315345 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1102 13:22:14.915330  315345 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1102 13:22:14.915341  315345 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-293314/.minikube/addons for local assets ...
	I1102 13:22:14.915409  315345 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-293314/.minikube/files for local assets ...
	I1102 13:22:14.915502  315345 filesync.go:149] local asset: /home/jenkins/minikube-integration/21808-293314/.minikube/files/etc/ssl/certs/2951742.pem -> 2951742.pem in /etc/ssl/certs
	I1102 13:22:14.915581  315345 filesync.go:149] local asset: /home/jenkins/minikube-integration/21808-293314/.minikube/files/etc/test/nested/copy/295174/hosts -> hosts in /etc/test/nested/copy/295174
	I1102 13:22:14.915622  315345 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/295174
	I1102 13:22:14.924613  315345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/files/etc/ssl/certs/2951742.pem --> /etc/ssl/certs/2951742.pem (1708 bytes)
	I1102 13:22:14.945192  315345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/files/etc/test/nested/copy/295174/hosts --> /etc/test/nested/copy/295174/hosts (40 bytes)
	I1102 13:22:14.965452  315345 start.go:296] duration metric: took 181.373672ms for postStartSetup
	I1102 13:22:14.965566  315345 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1102 13:22:14.965620  315345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-082350
	I1102 13:22:14.984251  315345 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/functional-082350/id_rsa Username:docker}
	I1102 13:22:15.105262  315345 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1102 13:22:15.111475  315345 fix.go:56] duration metric: took 6.852692494s for fixHost
	I1102 13:22:15.111490  315345 start.go:83] releasing machines lock for "functional-082350", held for 6.852733717s
	I1102 13:22:15.111576  315345 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-082350
	I1102 13:22:15.131549  315345 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/295174.pem (1338 bytes)
	W1102 13:22:15.131596  315345 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-293314/.minikube/certs/295174_empty.pem, impossibly tiny 0 bytes
	I1102 13:22:15.131605  315345 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca-key.pem (1679 bytes)
	I1102 13:22:15.131636  315345 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem (1082 bytes)
	I1102 13:22:15.131661  315345 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/cert.pem (1123 bytes)
	I1102 13:22:15.131687  315345 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/key.pem (1675 bytes)
	I1102 13:22:15.131727  315345 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/files/etc/ssl/certs/2951742.pem (1708 bytes)
	I1102 13:22:15.131813  315345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/files/etc/ssl/certs/2951742.pem --> /usr/share/ca-certificates/2951742.pem (1708 bytes)
	I1102 13:22:15.131888  315345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-082350
	I1102 13:22:15.150381  315345 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/functional-082350/id_rsa Username:docker}
	I1102 13:22:15.270734  315345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1102 13:22:15.291983  315345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/certs/295174.pem --> /usr/share/ca-certificates/295174.pem (1338 bytes)
	I1102 13:22:15.311384  315345 ssh_runner.go:195] Run: openssl version
	I1102 13:22:15.318853  315345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2951742.pem && ln -fs /usr/share/ca-certificates/2951742.pem /etc/ssl/certs/2951742.pem"
	I1102 13:22:15.328178  315345 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2951742.pem
	I1102 13:22:15.332262  315345 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  2 13:20 /usr/share/ca-certificates/2951742.pem
	I1102 13:22:15.332319  315345 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2951742.pem
	I1102 13:22:15.375241  315345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2951742.pem /etc/ssl/certs/3ec20f2e.0"
	I1102 13:22:15.384953  315345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1102 13:22:15.393895  315345 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:22:15.397888  315345 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  2 13:13 /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:22:15.397947  315345 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:22:15.440617  315345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1102 13:22:15.449475  315345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/295174.pem && ln -fs /usr/share/ca-certificates/295174.pem /etc/ssl/certs/295174.pem"
	I1102 13:22:15.458376  315345 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/295174.pem
	I1102 13:22:15.462261  315345 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  2 13:20 /usr/share/ca-certificates/295174.pem
	I1102 13:22:15.462322  315345 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/295174.pem
	I1102 13:22:15.505028  315345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/295174.pem /etc/ssl/certs/51391683.0"
	I1102 13:22:15.513335  315345 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-certificates >/dev/null 2>&1 && sudo update-ca-certificates || true"
	I1102 13:22:15.516798  315345 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-trust >/dev/null 2>&1 && sudo update-ca-trust extract || true"
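	The hex names of the /etc/ssl/certs symlinks above (3ec20f2e.0, b5213941.0, 51391683.0) are OpenSSL subject hashes, which is exactly what the openssl x509 -hash calls in the preceding steps compute; OpenSSL resolves <hash>.0 in the certs directory at verification time. For example:
		# prints b5213941, matching the symlink created for minikubeCA above
		openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem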
	I1102 13:22:15.520953  315345 ssh_runner.go:195] Run: cat /version.json
	I1102 13:22:15.521041  315345 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1102 13:22:15.525318  315345 ssh_runner.go:195] Run: systemctl --version
	I1102 13:22:15.612681  315345 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1102 13:22:15.649975  315345 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1102 13:22:15.654468  315345 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1102 13:22:15.654537  315345 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1102 13:22:15.662653  315345 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1102 13:22:15.662668  315345 start.go:496] detecting cgroup driver to use...
	I1102 13:22:15.662700  315345 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1102 13:22:15.662745  315345 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1102 13:22:15.678550  315345 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1102 13:22:15.691805  315345 docker.go:218] disabling cri-docker service (if available) ...
	I1102 13:22:15.691856  315345 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1102 13:22:15.707754  315345 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1102 13:22:15.721198  315345 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1102 13:22:15.861125  315345 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1102 13:22:16.007599  315345 docker.go:234] disabling docker service ...
	I1102 13:22:16.007660  315345 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1102 13:22:16.024557  315345 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1102 13:22:16.039492  315345 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1102 13:22:16.181407  315345 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1102 13:22:16.327353  315345 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1102 13:22:16.340766  315345 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1102 13:22:16.354990  315345 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1102 13:22:16.355051  315345 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:22:16.363849  315345 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1102 13:22:16.363914  315345 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:22:16.373014  315345 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:22:16.381912  315345 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:22:16.390722  315345 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1102 13:22:16.398858  315345 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:22:16.407777  315345 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 13:22:16.416148  315345 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
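	After this sequence of sed edits, /etc/crio/crio.conf.d/02-crio.conf contains roughly the following keys (reconstructed from the commands above; the file itself is not captured in the log, and TOML section headers and unrelated settings are omitted):
		pause_image = "registry.k8s.io/pause:3.10.1"
		cgroup_manager = "cgroupfs"
		conmon_cgroup = "pod"
		default_sysctls = [
		  "net.ipv4.ip_unprivileged_port_start=0",
		]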
	I1102 13:22:16.424798  315345 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1102 13:22:16.432276  315345 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1102 13:22:16.439622  315345 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 13:22:16.572915  315345 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1102 13:22:16.785722  315345 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1102 13:22:16.785781  315345 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1102 13:22:16.789890  315345 start.go:564] Will wait 60s for crictl version
	I1102 13:22:16.789946  315345 ssh_runner.go:195] Run: which crictl
	I1102 13:22:16.794185  315345 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1102 13:22:16.828172  315345 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
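	Beyond the version probe, crictl can also report the runtime's full status and configuration, which is a handy manual check on a node like this one (illustrative):
		sudo crictl version     # RuntimeName/RuntimeVersion as shown above
		sudo crictl info        # runtime conditions and config as JSON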
	I1102 13:22:16.828253  315345 ssh_runner.go:195] Run: crio --version
	I1102 13:22:16.869332  315345 ssh_runner.go:195] Run: crio --version
	I1102 13:22:16.915568  315345 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1102 13:22:16.919357  315345 cli_runner.go:164] Run: docker network inspect functional-082350 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1102 13:22:16.942775  315345 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1102 13:22:16.951143  315345 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1102 13:22:16.954271  315345 kubeadm.go:884] updating cluster {Name:functional-082350 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-082350 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1102 13:22:16.954423  315345 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1102 13:22:16.954498  315345 ssh_runner.go:195] Run: sudo crictl images --output json
	I1102 13:22:16.995555  315345 crio.go:514] all images are preloaded for cri-o runtime.
	I1102 13:22:16.995565  315345 crio.go:433] Images already preloaded, skipping extraction
	I1102 13:22:16.995623  315345 ssh_runner.go:195] Run: sudo crictl images --output json
	I1102 13:22:17.027887  315345 crio.go:514] all images are preloaded for cri-o runtime.
	I1102 13:22:17.027905  315345 cache_images.go:86] Images are preloaded, skipping loading
	I1102 13:22:17.027911  315345 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.34.1 crio true true} ...
	I1102 13:22:17.028007  315345 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-082350 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:functional-082350 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
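	The kubelet.service unit and the 10-kubeadm.conf drop-in carrying this ExecStart are scp'd onto the node a few lines below; systemd can render the merged result for inspection (illustrative):
		# show kubelet.service plus all drop-ins as systemd sees them
		docker exec functional-082350 systemctl cat kubelet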
	I1102 13:22:17.028115  315345 ssh_runner.go:195] Run: crio config
	I1102 13:22:17.090718  315345 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1102 13:22:17.090736  315345 cni.go:84] Creating CNI manager for ""
	I1102 13:22:17.090745  315345 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1102 13:22:17.090753  315345 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1102 13:22:17.090775  315345 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-082350 NodeName:functional-082350 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1102 13:22:17.090897  315345 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-082350"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
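	One way to sanity-check the assembled file before the init phases below run is kubeadm's own validator (available in recent releases, including the v1.34.1 binaries staged here):
		sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml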
	
	I1102 13:22:17.090960  315345 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1102 13:22:17.099130  315345 binaries.go:44] Found k8s binaries, skipping transfer
	I1102 13:22:17.099200  315345 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1102 13:22:17.106825  315345 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1102 13:22:17.119607  315345 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1102 13:22:17.133040  315345 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2064 bytes)
	I1102 13:22:17.146202  315345 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1102 13:22:17.149950  315345 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 13:22:17.280532  315345 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1102 13:22:17.293980  315345 certs.go:69] Setting up /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/functional-082350 for IP: 192.168.49.2
	I1102 13:22:17.293991  315345 certs.go:195] generating shared ca certs ...
	I1102 13:22:17.294006  315345 certs.go:227] acquiring lock for ca certs: {Name:mkead50075949a3cdc798f9c0149a2bc2638cbbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:22:17.294142  315345 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21808-293314/.minikube/ca.key
	I1102 13:22:17.294188  315345 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21808-293314/.minikube/proxy-client-ca.key
	I1102 13:22:17.294194  315345 certs.go:257] generating profile certs ...
	I1102 13:22:17.294279  315345 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/functional-082350/client.key
	I1102 13:22:17.294333  315345 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/functional-082350/apiserver.key.182c7ede
	I1102 13:22:17.294373  315345 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/functional-082350/proxy-client.key
	I1102 13:22:17.294489  315345 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/295174.pem (1338 bytes)
	W1102 13:22:17.294514  315345 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-293314/.minikube/certs/295174_empty.pem, impossibly tiny 0 bytes
	I1102 13:22:17.294521  315345 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca-key.pem (1679 bytes)
	I1102 13:22:17.294544  315345 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem (1082 bytes)
	I1102 13:22:17.294568  315345 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/cert.pem (1123 bytes)
	I1102 13:22:17.294587  315345 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/key.pem (1675 bytes)
	I1102 13:22:17.294659  315345 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/files/etc/ssl/certs/2951742.pem (1708 bytes)
	I1102 13:22:17.295263  315345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1102 13:22:17.314838  315345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1102 13:22:17.333622  315345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1102 13:22:17.352769  315345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1102 13:22:17.370427  315345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/functional-082350/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1102 13:22:17.388428  315345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/functional-082350/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1102 13:22:17.406270  315345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/functional-082350/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1102 13:22:17.424455  315345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/functional-082350/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1102 13:22:17.442687  315345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1102 13:22:17.460970  315345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/certs/295174.pem --> /usr/share/ca-certificates/295174.pem (1338 bytes)
	I1102 13:22:17.478256  315345 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/files/etc/ssl/certs/2951742.pem --> /usr/share/ca-certificates/2951742.pem (1708 bytes)
	I1102 13:22:17.496347  315345 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1102 13:22:17.509215  315345 ssh_runner.go:195] Run: openssl version
	I1102 13:22:17.515742  315345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/295174.pem && ln -fs /usr/share/ca-certificates/295174.pem /etc/ssl/certs/295174.pem"
	I1102 13:22:17.524444  315345 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/295174.pem
	I1102 13:22:17.528483  315345 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  2 13:20 /usr/share/ca-certificates/295174.pem
	I1102 13:22:17.528545  315345 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/295174.pem
	I1102 13:22:17.570057  315345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/295174.pem /etc/ssl/certs/51391683.0"
	I1102 13:22:17.578097  315345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2951742.pem && ln -fs /usr/share/ca-certificates/2951742.pem /etc/ssl/certs/2951742.pem"
	I1102 13:22:17.586406  315345 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2951742.pem
	I1102 13:22:17.590077  315345 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  2 13:20 /usr/share/ca-certificates/2951742.pem
	I1102 13:22:17.590129  315345 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2951742.pem
	I1102 13:22:17.631261  315345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2951742.pem /etc/ssl/certs/3ec20f2e.0"
	I1102 13:22:17.639482  315345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1102 13:22:17.648165  315345 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:22:17.651937  315345 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  2 13:13 /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:22:17.651991  315345 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1102 13:22:17.693705  315345 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1102 13:22:17.701581  315345 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1102 13:22:17.705312  315345 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1102 13:22:17.746286  315345 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1102 13:22:17.787279  315345 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1102 13:22:17.829516  315345 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1102 13:22:17.871197  315345 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1102 13:22:17.912366  315345 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
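	The -checkend 86400 flag used throughout this block asks whether a certificate expires within the next 24 hours: openssl exits 0 if it does not and 1 if it does, which is the signal used to decide whether a cert must be regenerated. The same check by hand (illustrative):
		openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400 \
		  && echo 'valid for more than 24h' || echo 'expires within 24h'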
	I1102 13:22:17.953313  315345 kubeadm.go:401] StartCluster: {Name:functional-082350 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-082350 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1102 13:22:17.953397  315345 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1102 13:22:17.953459  315345 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1102 13:22:17.981651  315345 cri.go:89] found id: "115c0f321af18ce0d35db834fb5b580cba30d2b91cf3d857ba712ec72a169916"
	I1102 13:22:17.981663  315345 cri.go:89] found id: "195e8698107490572c208137af4965d48466a8e342844fd1d325f32114685c25"
	I1102 13:22:17.981667  315345 cri.go:89] found id: "e103ed229eb2be9f3df0c654e76dbf7f77ce98393a08ad90a6c3289555fc77c2"
	I1102 13:22:17.981669  315345 cri.go:89] found id: "1eaf3db5e858bb0f86d46fee3e4b12e98211c92be19252630516a1b3731f4340"
	I1102 13:22:17.981672  315345 cri.go:89] found id: "9e71c2142671b19306286ca00c273f3ffdd3d98e51dbf195f66492f22e8c95be"
	I1102 13:22:17.981675  315345 cri.go:89] found id: "d41c8358350da1eecc40a9da4d9206a6722d77b06bdc439f8a3f441740495d99"
	I1102 13:22:17.981677  315345 cri.go:89] found id: "035a8a81715a43c1c3615b6915b53e57d03cf826f008f7607bcf38aadecb4718"
	I1102 13:22:17.981679  315345 cri.go:89] found id: "8764d7989097ec326eecbf79457e481c2692ecbff0f3cff6bbdfd56f016c11af"
	I1102 13:22:17.981681  315345 cri.go:89] found id: ""
	I1102 13:22:17.981727  315345 ssh_runner.go:195] Run: sudo runc list -f json
	W1102 13:22:17.993756  315345 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T13:22:17Z" level=error msg="open /run/runc: no such file or directory"
	I1102 13:22:17.993834  315345 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1102 13:22:18.003033  315345 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1102 13:22:18.003045  315345 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1102 13:22:18.003114  315345 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1102 13:22:18.012278  315345 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1102 13:22:18.012785  315345 kubeconfig.go:125] found "functional-082350" server: "https://192.168.49.2:8441"
	I1102 13:22:18.014245  315345 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1102 13:22:18.022820  315345 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-11-02 13:20:25.143216411 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-11-02 13:22:17.140599452 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
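	The drift check rides on diff's exit status, which ssh_runner surfaces: 0 means the freshly rendered config matches what is on disk, 1 triggers the reconfiguration path taken here. The equivalent manual check (illustrative):
		sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new \
		  && echo 'no drift' || echo 'drift detected'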
	I1102 13:22:18.022829  315345 kubeadm.go:1161] stopping kube-system containers ...
	I1102 13:22:18.022841  315345 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1102 13:22:18.022911  315345 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1102 13:22:18.052831  315345 cri.go:89] found id: "115c0f321af18ce0d35db834fb5b580cba30d2b91cf3d857ba712ec72a169916"
	I1102 13:22:18.052843  315345 cri.go:89] found id: "195e8698107490572c208137af4965d48466a8e342844fd1d325f32114685c25"
	I1102 13:22:18.052847  315345 cri.go:89] found id: "e103ed229eb2be9f3df0c654e76dbf7f77ce98393a08ad90a6c3289555fc77c2"
	I1102 13:22:18.052870  315345 cri.go:89] found id: "1eaf3db5e858bb0f86d46fee3e4b12e98211c92be19252630516a1b3731f4340"
	I1102 13:22:18.052873  315345 cri.go:89] found id: "9e71c2142671b19306286ca00c273f3ffdd3d98e51dbf195f66492f22e8c95be"
	I1102 13:22:18.052876  315345 cri.go:89] found id: "d41c8358350da1eecc40a9da4d9206a6722d77b06bdc439f8a3f441740495d99"
	I1102 13:22:18.052878  315345 cri.go:89] found id: "035a8a81715a43c1c3615b6915b53e57d03cf826f008f7607bcf38aadecb4718"
	I1102 13:22:18.052881  315345 cri.go:89] found id: "8764d7989097ec326eecbf79457e481c2692ecbff0f3cff6bbdfd56f016c11af"
	I1102 13:22:18.052883  315345 cri.go:89] found id: ""
	I1102 13:22:18.052888  315345 cri.go:252] Stopping containers: [115c0f321af18ce0d35db834fb5b580cba30d2b91cf3d857ba712ec72a169916 195e8698107490572c208137af4965d48466a8e342844fd1d325f32114685c25 e103ed229eb2be9f3df0c654e76dbf7f77ce98393a08ad90a6c3289555fc77c2 1eaf3db5e858bb0f86d46fee3e4b12e98211c92be19252630516a1b3731f4340 9e71c2142671b19306286ca00c273f3ffdd3d98e51dbf195f66492f22e8c95be d41c8358350da1eecc40a9da4d9206a6722d77b06bdc439f8a3f441740495d99 035a8a81715a43c1c3615b6915b53e57d03cf826f008f7607bcf38aadecb4718 8764d7989097ec326eecbf79457e481c2692ecbff0f3cff6bbdfd56f016c11af]
	I1102 13:22:18.052945  315345 ssh_runner.go:195] Run: which crictl
	I1102 13:22:18.056817  315345 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl stop --timeout=10 115c0f321af18ce0d35db834fb5b580cba30d2b91cf3d857ba712ec72a169916 195e8698107490572c208137af4965d48466a8e342844fd1d325f32114685c25 e103ed229eb2be9f3df0c654e76dbf7f77ce98393a08ad90a6c3289555fc77c2 1eaf3db5e858bb0f86d46fee3e4b12e98211c92be19252630516a1b3731f4340 9e71c2142671b19306286ca00c273f3ffdd3d98e51dbf195f66492f22e8c95be d41c8358350da1eecc40a9da4d9206a6722d77b06bdc439f8a3f441740495d99 035a8a81715a43c1c3615b6915b53e57d03cf826f008f7607bcf38aadecb4718 8764d7989097ec326eecbf79457e481c2692ecbff0f3cff6bbdfd56f016c11af
	I1102 13:22:18.119621  315345 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1102 13:22:18.233898  315345 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1102 13:22:18.242067  315345 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5635 Nov  2 13:20 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Nov  2 13:20 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 1972 Nov  2 13:20 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5588 Nov  2 13:20 /etc/kubernetes/scheduler.conf
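	Each of these files is then grepped for the expected control-plane endpoint; any file missing it (here kubelet.conf, controller-manager.conf, and scheduler.conf) is deleted so the kubeadm kubeconfig phase below regenerates it. The same filter in one command (illustrative):
		sudo grep -l 'https://control-plane.minikube.internal:8441' /etc/kubernetes/*.conf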
	
	I1102 13:22:18.242129  315345 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1102 13:22:18.250702  315345 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1102 13:22:18.258609  315345 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1102 13:22:18.258759  315345 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1102 13:22:18.266379  315345 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1102 13:22:18.274431  315345 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1102 13:22:18.274484  315345 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1102 13:22:18.282155  315345 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1102 13:22:18.290250  315345 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1102 13:22:18.290309  315345 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
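The grep/rm sequence above tests each kubeconfig for the expected control-plane endpoint and deletes any file that does not reference it, leaving kubeadm to regenerate it in the phases that follow. A sketch of the per-file logic (endpoint and path taken from the log):

    ENDPOINT=https://control-plane.minikube.internal:8441
    CONF=/etc/kubernetes/kubelet.conf

    # grep exits non-zero when the endpoint is absent, which is exactly
    # the "may not be in ... - will remove" case logged above.
    if ! sudo grep -q "$ENDPOINT" "$CONF"; then
        sudo rm -f "$CONF"
    fi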
	I1102 13:22:18.297949  315345 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1102 13:22:18.306123  315345 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1102 13:22:18.353387  315345 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1102 13:22:21.810779  315345 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (3.457367002s)
	I1102 13:22:21.810854  315345 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1102 13:22:22.031869  315345 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1102 13:22:22.098224  315345 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
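Rather than a full kubeadm init, minikube regenerates the control plane piecemeal with individual init phases. The five invocations above collapse to the following sketch (binary path, config path, and phase order copied from the log):

    KBIN=/var/lib/minikube/binaries/v1.34.1
    CFG=/var/tmp/minikube/kubeadm.yaml

    for phase in "certs all" "kubeconfig all" "kubelet-start" \
                 "control-plane all" "etcd local"; do
        # env PATH=... makes kubeadm resolve the bundled binaries first.
        sudo /bin/bash -c "env PATH=$KBIN:\$PATH kubeadm init phase $phase --config $CFG"
    done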
	I1102 13:22:22.166978  315345 api_server.go:52] waiting for apiserver process to appear ...
	I1102 13:22:22.167042  315345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1102 13:22:22.667243  315345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1102 13:22:23.167841  315345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1102 13:22:23.180558  315345 api_server.go:72] duration metric: took 1.013588799s to wait for apiserver process to appear ...
	I1102 13:22:23.180572  315345 api_server.go:88] waiting for apiserver healthz status ...
	I1102 13:22:23.180591  315345 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1102 13:22:27.317274  315345 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1102 13:22:27.317290  315345 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1102 13:22:27.317303  315345 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1102 13:22:27.369127  315345 api_server.go:279] https://192.168.49.2:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1102 13:22:27.369153  315345 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1102 13:22:27.681628  315345 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1102 13:22:27.689832  315345 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1102 13:22:27.689851  315345 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1102 13:22:28.180943  315345 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1102 13:22:28.198994  315345 api_server.go:279] https://192.168.49.2:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1102 13:22:28.199014  315345 api_server.go:103] status: https://192.168.49.2:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
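In the 500 responses every check passes except the two poststarthooks still marked [-]; the endpoint keeps returning 500 until rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes complete. Once credentials work, the same per-check breakdown can be fetched through the API (a sketch, assuming a reachable cluster and a valid kubeconfig):

    # Prints the verbose [+]/[-] check list shown in the log.
    kubectl get --raw '/healthz?verbose'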
	I1102 13:22:28.681301  315345 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1102 13:22:28.689562  315345 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1102 13:22:28.704051  315345 api_server.go:141] control plane version: v1.34.1
	I1102 13:22:28.704070  315345 api_server.go:131] duration metric: took 5.523491938s to wait for apiserver health ...
	I1102 13:22:28.704078  315345 cni.go:84] Creating CNI manager for ""
	I1102 13:22:28.704084  315345 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1102 13:22:28.707609  315345 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1102 13:22:28.710722  315345 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1102 13:22:28.715461  315345 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1102 13:22:28.715473  315345 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1102 13:22:28.736034  315345 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
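The CNI step copies the kindnet manifest onto the node and applies it with the cluster's own kubectl binary and kubeconfig, equivalent to the sketch below (paths from the log):

    sudo /var/lib/minikube/binaries/v1.34.1/kubectl \
        --kubeconfig=/var/lib/minikube/kubeconfig \
        apply -f /var/tmp/minikube/cni.yaml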
	I1102 13:22:29.226063  315345 system_pods.go:43] waiting for kube-system pods to appear ...
	I1102 13:22:29.230387  315345 system_pods.go:59] 8 kube-system pods found
	I1102 13:22:29.230410  315345 system_pods.go:61] "coredns-66bc5c9577-zpdww" [7bae1ef4-f3c1-43da-a324-fdb6a9b5aa6e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1102 13:22:29.230419  315345 system_pods.go:61] "etcd-functional-082350" [a4f46905-958d-40ad-9a3e-6d8acabfe3a2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1102 13:22:29.230448  315345 system_pods.go:61] "kindnet-8k9vq" [3a4f976a-95af-41b0-a111-4179ad225c7a] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1102 13:22:29.230456  315345 system_pods.go:61] "kube-apiserver-functional-082350" [540f7a73-530a-44a2-8fa5-3e10ee205ebd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1102 13:22:29.230473  315345 system_pods.go:61] "kube-controller-manager-functional-082350" [136a0658-1ac2-4611-bd62-5e223a7537f3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1102 13:22:29.230479  315345 system_pods.go:61] "kube-proxy-nvhn8" [f3e1b2f8-6cef-4aa5-acaf-714a192d77e2] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1102 13:22:29.230486  315345 system_pods.go:61] "kube-scheduler-functional-082350" [72056460-3423-4411-810e-84d71a706145] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1102 13:22:29.230495  315345 system_pods.go:61] "storage-provisioner" [f7fa5507-74d1-40ec-b17e-a609a9ace2a8] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1102 13:22:29.230500  315345 system_pods.go:74] duration metric: took 4.426698ms to wait for pod list to return data ...
	I1102 13:22:29.230506  315345 node_conditions.go:102] verifying NodePressure condition ...
	I1102 13:22:29.236637  315345 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1102 13:22:29.236660  315345 node_conditions.go:123] node cpu capacity is 2
	I1102 13:22:29.236671  315345 node_conditions.go:105] duration metric: took 6.160996ms to run NodePressure ...
	I1102 13:22:29.236730  315345 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1102 13:22:29.599701  315345 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1102 13:22:29.603834  315345 kubeadm.go:744] kubelet initialised
	I1102 13:22:29.603847  315345 kubeadm.go:745] duration metric: took 4.132679ms waiting for restarted kubelet to initialise ...
	I1102 13:22:29.603862  315345 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1102 13:22:29.613441  315345 ops.go:34] apiserver oom_adj: -16
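The oom_adj read above verifies that the apiserver is shielded from the OOM killer (-16 here; more negative means less likely to be killed first). The check from the log, as a standalone one-liner (assumes a single kube-apiserver process, hence the bare pgrep):

    # Read the kube-apiserver's OOM score adjustment via /proc.
    cat /proc/$(pgrep kube-apiserver)/oom_adj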
	I1102 13:22:29.613454  315345 kubeadm.go:602] duration metric: took 11.610403302s to restartPrimaryControlPlane
	I1102 13:22:29.613462  315345 kubeadm.go:403] duration metric: took 11.660159691s to StartCluster
	I1102 13:22:29.613476  315345 settings.go:142] acquiring lock: {Name:mk95f66b3b15e63f58f8c9085c1ffe67cc396dc1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:22:29.613551  315345 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21808-293314/kubeconfig
	I1102 13:22:29.614166  315345 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-293314/kubeconfig: {Name:mke5a65554da8fc0fd6a2ea60bed899d5b38ce09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:22:29.614422  315345 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1102 13:22:29.614689  315345 config.go:182] Loaded profile config "functional-082350": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:22:29.614739  315345 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1102 13:22:29.614813  315345 addons.go:70] Setting storage-provisioner=true in profile "functional-082350"
	I1102 13:22:29.614826  315345 addons.go:239] Setting addon storage-provisioner=true in "functional-082350"
	W1102 13:22:29.614831  315345 addons.go:248] addon storage-provisioner should already be in state true
	I1102 13:22:29.614908  315345 host.go:66] Checking if "functional-082350" exists ...
	I1102 13:22:29.614923  315345 addons.go:70] Setting default-storageclass=true in profile "functional-082350"
	I1102 13:22:29.614937  315345 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-082350"
	I1102 13:22:29.615235  315345 cli_runner.go:164] Run: docker container inspect functional-082350 --format={{.State.Status}}
	I1102 13:22:29.615397  315345 cli_runner.go:164] Run: docker container inspect functional-082350 --format={{.State.Status}}
	I1102 13:22:29.617748  315345 out.go:179] * Verifying Kubernetes components...
	I1102 13:22:29.620759  315345 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 13:22:29.655271  315345 addons.go:239] Setting addon default-storageclass=true in "functional-082350"
	W1102 13:22:29.655284  315345 addons.go:248] addon default-storageclass should already be in state true
	I1102 13:22:29.655308  315345 host.go:66] Checking if "functional-082350" exists ...
	I1102 13:22:29.655719  315345 cli_runner.go:164] Run: docker container inspect functional-082350 --format={{.State.Status}}
	I1102 13:22:29.658056  315345 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1102 13:22:29.660982  315345 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1102 13:22:29.660993  315345 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1102 13:22:29.661062  315345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-082350
	I1102 13:22:29.686369  315345 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1102 13:22:29.686382  315345 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1102 13:22:29.686448  315345 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-082350
	I1102 13:22:29.709313  315345 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/functional-082350/id_rsa Username:docker}
	I1102 13:22:29.730087  315345 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/functional-082350/id_rsa Username:docker}
	I1102 13:22:29.850927  315345 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1102 13:22:29.866772  315345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1102 13:22:29.867520  315345 node_ready.go:35] waiting up to 6m0s for node "functional-082350" to be "Ready" ...
	I1102 13:22:29.870924  315345 node_ready.go:49] node "functional-082350" is "Ready"
	I1102 13:22:29.870941  315345 node_ready.go:38] duration metric: took 3.404719ms for node "functional-082350" to be "Ready" ...
	I1102 13:22:29.870953  315345 api_server.go:52] waiting for apiserver process to appear ...
	I1102 13:22:29.871014  315345 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1102 13:22:29.906578  315345 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1102 13:22:30.930688  315345 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.063893474s)
	I1102 13:22:30.930725  315345 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.059702849s)
	I1102 13:22:30.930736  315345 api_server.go:72] duration metric: took 1.316294768s to wait for apiserver process to appear ...
	I1102 13:22:30.930740  315345 api_server.go:88] waiting for apiserver healthz status ...
	I1102 13:22:30.930756  315345 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
	I1102 13:22:30.931071  315345 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.024478481s)
	I1102 13:22:30.940969  315345 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
	ok
	I1102 13:22:30.942108  315345 api_server.go:141] control plane version: v1.34.1
	I1102 13:22:30.942121  315345 api_server.go:131] duration metric: took 11.377005ms to wait for apiserver health ...
	I1102 13:22:30.942129  315345 system_pods.go:43] waiting for kube-system pods to appear ...
	I1102 13:22:30.945604  315345 system_pods.go:59] 8 kube-system pods found
	I1102 13:22:30.945620  315345 system_pods.go:61] "coredns-66bc5c9577-zpdww" [7bae1ef4-f3c1-43da-a324-fdb6a9b5aa6e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1102 13:22:30.945629  315345 system_pods.go:61] "etcd-functional-082350" [a4f46905-958d-40ad-9a3e-6d8acabfe3a2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1102 13:22:30.945634  315345 system_pods.go:61] "kindnet-8k9vq" [3a4f976a-95af-41b0-a111-4179ad225c7a] Running
	I1102 13:22:30.945640  315345 system_pods.go:61] "kube-apiserver-functional-082350" [540f7a73-530a-44a2-8fa5-3e10ee205ebd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1102 13:22:30.945645  315345 system_pods.go:61] "kube-controller-manager-functional-082350" [136a0658-1ac2-4611-bd62-5e223a7537f3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1102 13:22:30.945655  315345 system_pods.go:61] "kube-proxy-nvhn8" [f3e1b2f8-6cef-4aa5-acaf-714a192d77e2] Running
	I1102 13:22:30.945660  315345 system_pods.go:61] "kube-scheduler-functional-082350" [72056460-3423-4411-810e-84d71a706145] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1102 13:22:30.945663  315345 system_pods.go:61] "storage-provisioner" [f7fa5507-74d1-40ec-b17e-a609a9ace2a8] Running
	I1102 13:22:30.945669  315345 system_pods.go:74] duration metric: took 3.534715ms to wait for pod list to return data ...
	I1102 13:22:30.945675  315345 default_sa.go:34] waiting for default service account to be created ...
	I1102 13:22:30.946559  315345 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1102 13:22:30.948611  315345 default_sa.go:45] found service account: "default"
	I1102 13:22:30.948627  315345 default_sa.go:55] duration metric: took 2.943996ms for default service account to be created ...
	I1102 13:22:30.948635  315345 system_pods.go:116] waiting for k8s-apps to be running ...
	I1102 13:22:30.949423  315345 addons.go:515] duration metric: took 1.334678827s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1102 13:22:30.951383  315345 system_pods.go:86] 8 kube-system pods found
	I1102 13:22:30.951399  315345 system_pods.go:89] "coredns-66bc5c9577-zpdww" [7bae1ef4-f3c1-43da-a324-fdb6a9b5aa6e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1102 13:22:30.951409  315345 system_pods.go:89] "etcd-functional-082350" [a4f46905-958d-40ad-9a3e-6d8acabfe3a2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1102 13:22:30.951413  315345 system_pods.go:89] "kindnet-8k9vq" [3a4f976a-95af-41b0-a111-4179ad225c7a] Running
	I1102 13:22:30.951419  315345 system_pods.go:89] "kube-apiserver-functional-082350" [540f7a73-530a-44a2-8fa5-3e10ee205ebd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1102 13:22:30.951424  315345 system_pods.go:89] "kube-controller-manager-functional-082350" [136a0658-1ac2-4611-bd62-5e223a7537f3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1102 13:22:30.951428  315345 system_pods.go:89] "kube-proxy-nvhn8" [f3e1b2f8-6cef-4aa5-acaf-714a192d77e2] Running
	I1102 13:22:30.951433  315345 system_pods.go:89] "kube-scheduler-functional-082350" [72056460-3423-4411-810e-84d71a706145] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1102 13:22:30.951436  315345 system_pods.go:89] "storage-provisioner" [f7fa5507-74d1-40ec-b17e-a609a9ace2a8] Running
	I1102 13:22:30.951441  315345 system_pods.go:126] duration metric: took 2.801808ms to wait for k8s-apps to be running ...
	I1102 13:22:30.951446  315345 system_svc.go:44] waiting for kubelet service to be running ....
	I1102 13:22:30.951501  315345 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1102 13:22:30.964671  315345 system_svc.go:56] duration metric: took 13.212186ms WaitForService to wait for kubelet
	I1102 13:22:30.964690  315345 kubeadm.go:587] duration metric: took 1.35024698s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1102 13:22:30.964708  315345 node_conditions.go:102] verifying NodePressure condition ...
	I1102 13:22:30.967939  315345 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1102 13:22:30.967954  315345 node_conditions.go:123] node cpu capacity is 2
	I1102 13:22:30.967964  315345 node_conditions.go:105] duration metric: took 3.252242ms to run NodePressure ...
	I1102 13:22:30.967975  315345 start.go:242] waiting for startup goroutines ...
	I1102 13:22:30.967982  315345 start.go:247] waiting for cluster config update ...
	I1102 13:22:30.967991  315345 start.go:256] writing updated cluster config ...
	I1102 13:22:30.968310  315345 ssh_runner.go:195] Run: rm -f paused
	I1102 13:22:30.972405  315345 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1102 13:22:30.978020  315345 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-zpdww" in "kube-system" namespace to be "Ready" or be gone ...
	W1102 13:22:32.983539  315345 pod_ready.go:104] pod "coredns-66bc5c9577-zpdww" is not "Ready", error: <nil>
	W1102 13:22:35.483268  315345 pod_ready.go:104] pod "coredns-66bc5c9577-zpdww" is not "Ready", error: <nil>
	W1102 13:22:37.484274  315345 pod_ready.go:104] pod "coredns-66bc5c9577-zpdww" is not "Ready", error: <nil>
	I1102 13:22:38.483520  315345 pod_ready.go:94] pod "coredns-66bc5c9577-zpdww" is "Ready"
	I1102 13:22:38.483536  315345 pod_ready.go:86] duration metric: took 7.505502304s for pod "coredns-66bc5c9577-zpdww" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:22:38.486324  315345 pod_ready.go:83] waiting for pod "etcd-functional-082350" in "kube-system" namespace to be "Ready" or be gone ...
	W1102 13:22:40.492424  315345 pod_ready.go:104] pod "etcd-functional-082350" is not "Ready", error: <nil>
	I1102 13:22:40.991949  315345 pod_ready.go:94] pod "etcd-functional-082350" is "Ready"
	I1102 13:22:40.991963  315345 pod_ready.go:86] duration metric: took 2.505627807s for pod "etcd-functional-082350" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:22:40.994664  315345 pod_ready.go:83] waiting for pod "kube-apiserver-functional-082350" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:22:40.999944  315345 pod_ready.go:94] pod "kube-apiserver-functional-082350" is "Ready"
	I1102 13:22:40.999959  315345 pod_ready.go:86] duration metric: took 5.281929ms for pod "kube-apiserver-functional-082350" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:22:41.003028  315345 pod_ready.go:83] waiting for pod "kube-controller-manager-functional-082350" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:22:41.009169  315345 pod_ready.go:94] pod "kube-controller-manager-functional-082350" is "Ready"
	I1102 13:22:41.009184  315345 pod_ready.go:86] duration metric: took 6.142354ms for pod "kube-controller-manager-functional-082350" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:22:41.011561  315345 pod_ready.go:83] waiting for pod "kube-proxy-nvhn8" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:22:41.281097  315345 pod_ready.go:94] pod "kube-proxy-nvhn8" is "Ready"
	I1102 13:22:41.281110  315345 pod_ready.go:86] duration metric: took 269.537715ms for pod "kube-proxy-nvhn8" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:22:41.480916  315345 pod_ready.go:83] waiting for pod "kube-scheduler-functional-082350" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:22:41.881641  315345 pod_ready.go:94] pod "kube-scheduler-functional-082350" is "Ready"
	I1102 13:22:41.881654  315345 pod_ready.go:86] duration metric: took 400.725701ms for pod "kube-scheduler-functional-082350" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 13:22:41.881664  315345 pod_ready.go:40] duration metric: took 10.909238631s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
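The per-pod waits above poll each control-plane pod's Ready condition in sequence. Roughly the same end state can be checked in one shot with kubectl wait (a sketch of an equivalent, not the command minikube itself runs):

    # Block up to 4 minutes until every kube-system pod reports Ready.
    kubectl wait --for=condition=Ready pods --all -n kube-system --timeout=4m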
	I1102 13:22:41.939783  315345 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1102 13:22:41.943144  315345 out.go:179] * Done! kubectl is now configured to use "functional-082350" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 02 13:23:15 functional-082350 crio[3586]: time="2025-11-02T13:23:15.203574692Z" level=info msg="Got pod network &{Name:hello-node-75c85bcc94-tn9zm Namespace:default ID:0dba72e722326bc4077910cd24bf5962d5fc2c8946b6c36cd124a38500af31b3 UID:ac9a69d9-7322-42db-8ba8-05fb4ae7a680 NetNS:/var/run/netns/68bbbe68-aa28-4aae-a3d2-6969fd5fda7a Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4001fef690}] Aliases:map[]}"
	Nov 02 13:23:15 functional-082350 crio[3586]: time="2025-11-02T13:23:15.203724553Z" level=info msg="Checking pod default_hello-node-75c85bcc94-tn9zm for CNI network kindnet (type=ptp)"
	Nov 02 13:23:15 functional-082350 crio[3586]: time="2025-11-02T13:23:15.207848593Z" level=info msg="Ran pod sandbox 0dba72e722326bc4077910cd24bf5962d5fc2c8946b6c36cd124a38500af31b3 with infra container: default/hello-node-75c85bcc94-tn9zm/POD" id=4a6a7cb2-fc9b-4394-bd50-ac4e84d3ebc7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 02 13:23:15 functional-082350 crio[3586]: time="2025-11-02T13:23:15.20939962Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=208eea88-1b68-4956-9888-7f3235919c0f name=/runtime.v1.ImageService/PullImage
	Nov 02 13:23:22 functional-082350 crio[3586]: time="2025-11-02T13:23:22.153132306Z" level=info msg="Stopping pod sandbox: 0b8058c1c7203420fdbdb4c79491e2a2663b2bc7e8cdf59fc870b32069738379" id=fd69af76-1d53-412c-842e-b4e1d530493f name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 02 13:23:22 functional-082350 crio[3586]: time="2025-11-02T13:23:22.153192877Z" level=info msg="Stopped pod sandbox (already stopped): 0b8058c1c7203420fdbdb4c79491e2a2663b2bc7e8cdf59fc870b32069738379" id=fd69af76-1d53-412c-842e-b4e1d530493f name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 02 13:23:22 functional-082350 crio[3586]: time="2025-11-02T13:23:22.153786567Z" level=info msg="Removing pod sandbox: 0b8058c1c7203420fdbdb4c79491e2a2663b2bc7e8cdf59fc870b32069738379" id=3d29d544-f1d1-41e7-9dd8-856f3bb9dce4 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 02 13:23:22 functional-082350 crio[3586]: time="2025-11-02T13:23:22.157323629Z" level=info msg="Removed pod sandbox: 0b8058c1c7203420fdbdb4c79491e2a2663b2bc7e8cdf59fc870b32069738379" id=3d29d544-f1d1-41e7-9dd8-856f3bb9dce4 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 02 13:23:22 functional-082350 crio[3586]: time="2025-11-02T13:23:22.157853982Z" level=info msg="Stopping pod sandbox: 42b67f7c21f1ccffd2f852a08da189060cbc4abaa906425f2314c21a3f23a49f" id=fc7140d9-8b21-4ab0-8139-1631e266b4d9 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 02 13:23:22 functional-082350 crio[3586]: time="2025-11-02T13:23:22.157896428Z" level=info msg="Stopped pod sandbox (already stopped): 42b67f7c21f1ccffd2f852a08da189060cbc4abaa906425f2314c21a3f23a49f" id=fc7140d9-8b21-4ab0-8139-1631e266b4d9 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 02 13:23:22 functional-082350 crio[3586]: time="2025-11-02T13:23:22.158201851Z" level=info msg="Removing pod sandbox: 42b67f7c21f1ccffd2f852a08da189060cbc4abaa906425f2314c21a3f23a49f" id=f17a1079-dd3b-498c-b442-426c4f08be98 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 02 13:23:22 functional-082350 crio[3586]: time="2025-11-02T13:23:22.161642338Z" level=info msg="Removed pod sandbox: 42b67f7c21f1ccffd2f852a08da189060cbc4abaa906425f2314c21a3f23a49f" id=f17a1079-dd3b-498c-b442-426c4f08be98 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 02 13:23:22 functional-082350 crio[3586]: time="2025-11-02T13:23:22.162278949Z" level=info msg="Stopping pod sandbox: e8ce6c49b6f051a64e71efb47592e04889bd8806dfa0d935a3cd9f0404d92412" id=2fa3c0d9-f587-4887-a7f0-16f87b658d55 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 02 13:23:22 functional-082350 crio[3586]: time="2025-11-02T13:23:22.16232215Z" level=info msg="Stopped pod sandbox (already stopped): e8ce6c49b6f051a64e71efb47592e04889bd8806dfa0d935a3cd9f0404d92412" id=2fa3c0d9-f587-4887-a7f0-16f87b658d55 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 02 13:23:22 functional-082350 crio[3586]: time="2025-11-02T13:23:22.162680005Z" level=info msg="Removing pod sandbox: e8ce6c49b6f051a64e71efb47592e04889bd8806dfa0d935a3cd9f0404d92412" id=a85d31a4-f75d-4cbf-9391-9bb6f7a888ba name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 02 13:23:22 functional-082350 crio[3586]: time="2025-11-02T13:23:22.166024761Z" level=info msg="Removed pod sandbox: e8ce6c49b6f051a64e71efb47592e04889bd8806dfa0d935a3cd9f0404d92412" id=a85d31a4-f75d-4cbf-9391-9bb6f7a888ba name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 02 13:23:27 functional-082350 crio[3586]: time="2025-11-02T13:23:27.179943089Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=1ec0ae28-1462-4e09-9d94-defdfc29f018 name=/runtime.v1.ImageService/PullImage
	Nov 02 13:23:43 functional-082350 crio[3586]: time="2025-11-02T13:23:43.179306793Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=6a84bdc7-6554-4dde-a6e0-966dfd2fdc2b name=/runtime.v1.ImageService/PullImage
	Nov 02 13:23:50 functional-082350 crio[3586]: time="2025-11-02T13:23:50.179653111Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=34585dfe-f76c-4f75-a5d0-e8e650e7d11f name=/runtime.v1.ImageService/PullImage
	Nov 02 13:24:24 functional-082350 crio[3586]: time="2025-11-02T13:24:24.179304287Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=317ba75e-b043-4027-9393-c55af753f71c name=/runtime.v1.ImageService/PullImage
	Nov 02 13:24:31 functional-082350 crio[3586]: time="2025-11-02T13:24:31.179535479Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=73d994f9-7649-4eaf-8cd7-1ae267d222c5 name=/runtime.v1.ImageService/PullImage
	Nov 02 13:25:54 functional-082350 crio[3586]: time="2025-11-02T13:25:54.178893856Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=c746f5a8-ab6c-4f84-8b08-3593da302ea0 name=/runtime.v1.ImageService/PullImage
	Nov 02 13:26:02 functional-082350 crio[3586]: time="2025-11-02T13:26:02.180666063Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=83391436-514b-41ac-ac51-2d5811dc0048 name=/runtime.v1.ImageService/PullImage
	Nov 02 13:28:37 functional-082350 crio[3586]: time="2025-11-02T13:28:37.180026672Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=a53e2d76-59fb-478b-b301-2a9effcba9db name=/runtime.v1.ImageService/PullImage
	Nov 02 13:28:56 functional-082350 crio[3586]: time="2025-11-02T13:28:56.179565216Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=4bd3b1d8-4cc6-4db0-86e4-6cb898b8bcab name=/runtime.v1.ImageService/PullImage
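Note that the CRI-O log records "Pulling image: kicbase/echo-server:latest" repeatedly from 13:23 through 13:28 with no corresponding pulled or failed line, consistent with the hello-node deployments never becoming ready in the failed ServiceCmd tests. One way to investigate from the node, sketched with standard crictl subcommands (the log alone does not show whether the registry was unreachable or the pull was throttled):

    # Has the image landed in the local store yet?
    sudo crictl images | grep echo-server

    # Re-run the pull in the foreground to surface the underlying error.
    sudo crictl pull kicbase/echo-server:latest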
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                             CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	4961511f3b425       docker.io/library/nginx@sha256:89a1bafe028b2980994d974115ee7268ef851a6eb7c9cb9626d8035b08ba4424   9 minutes ago       Running             myfrontend                0                   1992136df0dd4       sp-pod                                      default
	034df20beba53       docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90   10 minutes ago      Running             nginx                     0                   b35906815c6ff       nginx-svc                                   default
	546c3c5062e2d       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                  10 minutes ago      Running             kube-proxy                2                   dd2be59c3358b       kube-proxy-nvhn8                            kube-system
	1cb292b9f03cc       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                  10 minutes ago      Running             storage-provisioner       2                   f0883540aa230       storage-provisioner                         kube-system
	1c6fdd8f339c2       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                  10 minutes ago      Running             coredns                   2                   33600f28e32e6       coredns-66bc5c9577-zpdww                    kube-system
	fe586de9dd792       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                  10 minutes ago      Running             kindnet-cni               2                   ba567e03594b1       kindnet-8k9vq                               kube-system
	38756bffca169       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                  10 minutes ago      Running             kube-apiserver            0                   b6de636a2ebd6       kube-apiserver-functional-082350            kube-system
	bbce63e1d6c2c       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                  10 minutes ago      Running             kube-controller-manager   2                   7cc9a3f40af67       kube-controller-manager-functional-082350   kube-system
	571f480a3ab8e       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                  10 minutes ago      Running             etcd                      2                   50dc29095913a       etcd-functional-082350                      kube-system
	0ad9e5fb31d77       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                  10 minutes ago      Running             kube-scheduler            2                   2385dd4669491       kube-scheduler-functional-082350            kube-system
	115c0f321af18       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                  11 minutes ago      Exited              etcd                      1                   50dc29095913a       etcd-functional-082350                      kube-system
	195e869810749       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                  11 minutes ago      Exited              coredns                   1                   33600f28e32e6       coredns-66bc5c9577-zpdww                    kube-system
	e103ed229eb2b       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                  11 minutes ago      Exited              storage-provisioner       1                   f0883540aa230       storage-provisioner                         kube-system
	9e71c2142671b       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                  11 minutes ago      Exited              kindnet-cni               1                   ba567e03594b1       kindnet-8k9vq                               kube-system
	d41c8358350da       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                  11 minutes ago      Exited              kube-proxy                1                   dd2be59c3358b       kube-proxy-nvhn8                            kube-system
	035a8a81715a4       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                  11 minutes ago      Exited              kube-scheduler            1                   2385dd4669491       kube-scheduler-functional-082350            kube-system
	8764d7989097e       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                  11 minutes ago      Exited              kube-controller-manager   1                   7cc9a3f40af67       kube-controller-manager-functional-082350   kube-system
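In this table the Exited rows with ATTEMPT 1 carry the same container IDs that were stopped at 13:22:18 earlier in the log, and each has a running ATTEMPT 2 replacement in the same pod sandbox; only kube-apiserver starts over at attempt 0. The table itself comes from the CRI; a sketch of narrowing it to a single workload:

    # Show all attempts (running and exited) of the apiserver container.
    sudo crictl ps -a --name kube-apiserver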
	
	
	==> coredns [195e8698107490572c208137af4965d48466a8e342844fd1d325f32114685c25] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:43242 - 47433 "HINFO IN 1606594269226387486.4565929132393826198. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.019821793s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [1c6fdd8f339c2d76ee9d90ba8b315ea21e20a714c28f65acafa79b8f6f66f783] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35874 - 61629 "HINFO IN 4949042655534365393.6561072679061078281. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021131793s
	
	
	==> describe nodes <==
	Name:               functional-082350
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-082350
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=170a9221ec214abbddb4c7cdac340516a92b239a
	                    minikube.k8s.io/name=functional-082350
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_02T13_20_41_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 02 Nov 2025 13:20:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-082350
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 02 Nov 2025 13:32:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 02 Nov 2025 13:31:58 +0000   Sun, 02 Nov 2025 13:20:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 02 Nov 2025 13:31:58 +0000   Sun, 02 Nov 2025 13:20:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 02 Nov 2025 13:31:58 +0000   Sun, 02 Nov 2025 13:20:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 02 Nov 2025 13:31:58 +0000   Sun, 02 Nov 2025 13:21:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-082350
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                861b6685-0bf2-440c-8c20-3525e99be779
	  Boot ID:                    4c303cf4-c0b6-4c0d-aa1a-d7509feb15e7
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-tn9zm                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m47s
	  default                     hello-node-connect-7d85dfc575-skmb2          0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m53s
	  kube-system                 coredns-66bc5c9577-zpdww                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 etcd-functional-082350                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kindnet-8k9vq                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-functional-082350             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-082350    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-nvhn8                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-082350             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 11m                kube-proxy       
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node functional-082350 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node functional-082350 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x8 over 12m)  kubelet          Node functional-082350 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientPID     12m                kubelet          Node functional-082350 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 12m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node functional-082350 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node functional-082350 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           12m                node-controller  Node functional-082350 event: Registered Node functional-082350 in Controller
	  Normal   NodeReady                11m                kubelet          Node functional-082350 status is now: NodeReady
	  Normal   RegisteredNode           11m                node-controller  Node functional-082350 event: Registered Node functional-082350 in Controller
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-082350 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-082350 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node functional-082350 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                node-controller  Node functional-082350 event: Registered Node functional-082350 in Controller
	
	
	==> dmesg <==
	[Nov 2 11:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.015966] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.510742] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.034359] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.787410] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.238409] kauditd_printk_skb: 36 callbacks suppressed
	[Nov 2 13:12] kauditd_printk_skb: 8 callbacks suppressed
	[Nov 2 13:13] overlayfs: idmapped layers are currently not supported
	[  +0.073328] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Nov 2 13:19] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:20] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [115c0f321af18ce0d35db834fb5b580cba30d2b91cf3d857ba712ec72a169916] <==
	{"level":"warn","ts":"2025-11-02T13:21:44.231373Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:21:44.251719Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:21:44.275438Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:21:44.301616Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:21:44.318454Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:21:44.338346Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:21:44.430853Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38520","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-02T13:22:09.571215Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-02T13:22:09.571260Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-082350","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-11-02T13:22:09.571341Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-02T13:22:09.720051Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-02T13:22:09.721515Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-02T13:22:09.721573Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-11-02T13:22:09.721627Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-11-02T13:22:09.721644Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-11-02T13:22:09.721678Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-02T13:22:09.721744Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-02T13:22:09.721781Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-02T13:22:09.721871Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-02T13:22:09.721894Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-02T13:22:09.721902Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-02T13:22:09.725550Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-11-02T13:22:09.725634Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-02T13:22:09.725701Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-11-02T13:22:09.725727Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-082350","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [571f480a3ab8ea7c755229abcd54df5355aea58cb1f9705b976d12b068f049af] <==
	{"level":"warn","ts":"2025-11-02T13:22:25.782488Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:22:25.832346Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39768","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:22:25.838704Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:22:25.895340Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:22:25.929535Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:22:25.949864Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:22:26.004780Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:22:26.032826Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:22:26.059235Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39884","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:22:26.091894Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:22:26.119242Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39942","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:22:26.138281Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:22:26.171822Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39970","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:22:26.199967Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:22:26.230538Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:22:26.254026Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:22:26.282832Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40032","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:22:26.320630Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:22:26.365374Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:22:26.408706Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:22:26.435079Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T13:22:26.529060Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40134","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-02T13:32:24.667775Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1129}
	{"level":"info","ts":"2025-11-02T13:32:24.691985Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1129,"took":"23.815081ms","hash":1904926348,"current-db-size-bytes":3317760,"current-db-size":"3.3 MB","current-db-size-in-use-bytes":1458176,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2025-11-02T13:32:24.692042Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1904926348,"revision":1129,"compact-revision":-1}
	
	
	==> kernel <==
	 13:33:02 up  2:15,  0 user,  load average: 0.19, 0.31, 1.45
	Linux functional-082350 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [9e71c2142671b19306286ca00c273f3ffdd3d98e51dbf195f66492f22e8c95be] <==
	I1102 13:21:40.855675       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1102 13:21:40.856771       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1102 13:21:40.910760       1 main.go:148] setting mtu 1500 for CNI 
	I1102 13:21:40.910795       1 main.go:178] kindnetd IP family: "ipv4"
	I1102 13:21:40.910809       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-02T13:21:41Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1102 13:21:41.142294       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1102 13:21:41.142312       1 controller.go:381] "Waiting for informer caches to sync"
	I1102 13:21:41.142320       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1102 13:21:41.143057       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1102 13:21:45.653428       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1102 13:21:45.653462       1 metrics.go:72] Registering metrics
	I1102 13:21:45.653520       1 controller.go:711] "Syncing nftables rules"
	I1102 13:21:51.142698       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1102 13:21:51.142826       1 main.go:301] handling current node
	I1102 13:22:01.142830       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1102 13:22:01.142871       1 main.go:301] handling current node
	
	
	==> kindnet [fe586de9dd792b77f8524ca8bd81dfe7140756ad8a14284282de9d795f693bd9] <==
	I1102 13:30:58.825659       1 main.go:301] handling current node
	I1102 13:31:08.819685       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1102 13:31:08.819728       1 main.go:301] handling current node
	I1102 13:31:18.819011       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1102 13:31:18.819049       1 main.go:301] handling current node
	I1102 13:31:28.819514       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1102 13:31:28.819550       1 main.go:301] handling current node
	I1102 13:31:38.819648       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1102 13:31:38.819767       1 main.go:301] handling current node
	I1102 13:31:48.819314       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1102 13:31:48.819346       1 main.go:301] handling current node
	I1102 13:31:58.819728       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1102 13:31:58.819762       1 main.go:301] handling current node
	I1102 13:32:08.826499       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1102 13:32:08.826535       1 main.go:301] handling current node
	I1102 13:32:18.820100       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1102 13:32:18.820201       1 main.go:301] handling current node
	I1102 13:32:28.819680       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1102 13:32:28.819786       1 main.go:301] handling current node
	I1102 13:32:38.820657       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1102 13:32:38.820831       1 main.go:301] handling current node
	I1102 13:32:48.827413       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1102 13:32:48.827450       1 main.go:301] handling current node
	I1102 13:32:58.820198       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1102 13:32:58.820307       1 main.go:301] handling current node
	
	
	==> kube-apiserver [38756bffca16971c50a7d5b239c7a03542a9fba3d4a2dfb4287f26f2c2c137da] <==
	I1102 13:22:27.442194       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1102 13:22:27.442705       1 aggregator.go:171] initial CRD sync complete...
	I1102 13:22:27.442772       1 autoregister_controller.go:144] Starting autoregister controller
	I1102 13:22:27.442802       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1102 13:22:27.442831       1 cache.go:39] Caches are synced for autoregister controller
	I1102 13:22:27.486200       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1102 13:22:27.508052       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1102 13:22:27.510001       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1102 13:22:28.194180       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1102 13:22:28.223961       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1102 13:22:29.218875       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1102 13:22:29.476296       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1102 13:22:29.572119       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1102 13:22:29.587132       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1102 13:22:30.749376       1 controller.go:667] quota admission added evaluator for: endpoints
	I1102 13:22:30.886356       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1102 13:22:31.037488       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1102 13:22:45.448908       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.99.244.145"}
	I1102 13:22:50.879321       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.101.69.142"}
	I1102 13:22:59.549893       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.101.33.149"}
	E1102 13:23:07.787032       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:40182: use of closed network connection
	E1102 13:23:08.460730       1 watch.go:272] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	E1102 13:23:14.760158       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:51594: use of closed network connection
	I1102 13:23:14.963818       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.98.97.78"}
	I1102 13:32:27.380531       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [8764d7989097ec326eecbf79457e481c2692ecbff0f3cff6bbdfd56f016c11af] <==
	I1102 13:21:48.784443       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1102 13:21:48.784507       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1102 13:21:48.784528       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1102 13:21:48.784542       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1102 13:21:48.784549       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1102 13:21:48.784641       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1102 13:21:48.787190       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1102 13:21:48.787191       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1102 13:21:48.787286       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1102 13:21:48.791638       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1102 13:21:48.791749       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1102 13:21:48.793332       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1102 13:21:48.796645       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1102 13:21:48.798858       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1102 13:21:48.802113       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1102 13:21:48.805346       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1102 13:21:48.819781       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1102 13:21:48.826693       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1102 13:21:48.826701       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1102 13:21:48.826717       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1102 13:21:48.826727       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1102 13:21:48.826749       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1102 13:21:48.826760       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1102 13:21:48.827869       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1102 13:21:48.832392       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-controller-manager [bbce63e1d6c2c7bbca08e27af372be65e3f5a44ba95b8991777af123977ab6c9] <==
	I1102 13:22:30.758719       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1102 13:22:30.763461       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1102 13:22:30.763601       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1102 13:22:30.768833       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1102 13:22:30.775194       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1102 13:22:30.778844       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1102 13:22:30.781359       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1102 13:22:30.781405       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1102 13:22:30.781543       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1102 13:22:30.781557       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1102 13:22:30.781564       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1102 13:22:30.788750       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1102 13:22:30.788896       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1102 13:22:30.788980       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-082350"
	I1102 13:22:30.789032       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1102 13:22:30.789086       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1102 13:22:30.789110       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1102 13:22:30.789650       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1102 13:22:30.791925       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1102 13:22:30.795753       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1102 13:22:30.796506       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1102 13:22:30.798474       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1102 13:22:30.798488       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1102 13:22:30.798497       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1102 13:22:30.800994       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [546c3c5062e2d10b397dc326bf158597da5d8ecfe85c68a89041e471e31c23cd] <==
	I1102 13:22:28.745888       1 server_linux.go:53] "Using iptables proxy"
	I1102 13:22:28.862748       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1102 13:22:28.963082       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1102 13:22:28.963128       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1102 13:22:28.963200       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1102 13:22:28.997565       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1102 13:22:28.997682       1 server_linux.go:132] "Using iptables Proxier"
	I1102 13:22:29.003822       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1102 13:22:29.004201       1 server.go:527] "Version info" version="v1.34.1"
	I1102 13:22:29.004689       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1102 13:22:29.005970       1 config.go:200] "Starting service config controller"
	I1102 13:22:29.005993       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1102 13:22:29.006008       1 config.go:106] "Starting endpoint slice config controller"
	I1102 13:22:29.006012       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1102 13:22:29.006027       1 config.go:403] "Starting serviceCIDR config controller"
	I1102 13:22:29.006031       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1102 13:22:29.008039       1 config.go:309] "Starting node config controller"
	I1102 13:22:29.008061       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1102 13:22:29.008069       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1102 13:22:29.106444       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1102 13:22:29.106483       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1102 13:22:29.106495       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [d41c8358350da1eecc40a9da4d9206a6722d77b06bdc439f8a3f441740495d99] <==
	I1102 13:21:42.554580       1 server_linux.go:53] "Using iptables proxy"
	I1102 13:21:43.981253       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1102 13:21:45.648116       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1102 13:21:45.691279       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1102 13:21:45.695556       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1102 13:21:45.794399       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1102 13:21:45.794458       1 server_linux.go:132] "Using iptables Proxier"
	I1102 13:21:45.839906       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1102 13:21:45.840239       1 server.go:527] "Version info" version="v1.34.1"
	I1102 13:21:45.840261       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1102 13:21:45.850778       1 config.go:106] "Starting endpoint slice config controller"
	I1102 13:21:45.850803       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1102 13:21:45.851129       1 config.go:200] "Starting service config controller"
	I1102 13:21:45.851136       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1102 13:21:45.851458       1 config.go:403] "Starting serviceCIDR config controller"
	I1102 13:21:45.851466       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1102 13:21:45.851892       1 config.go:309] "Starting node config controller"
	I1102 13:21:45.851900       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1102 13:21:45.851906       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1102 13:21:45.951544       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1102 13:21:45.951584       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1102 13:21:45.951630       1 shared_informer.go:356] "Caches are synced" controller="service config"
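	
	Both kube-proxy instances emit the same "nodePortAddresses is unset" warning and continue, so NodePort services accept connections on every local IP. A minimal sketch of the narrowing the warning itself suggests, via KubeProxyConfiguration, assuming the kubeproxy.config.k8s.io/v1alpha1 API that ships with v1.34:
	
	  apiVersion: kubeproxy.config.k8s.io/v1alpha1
	  kind: KubeProxyConfiguration
	  # accept NodePort connections only on the node's primary IP, per the warning above
	  nodePortAddresses: ["primary"]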
	
	
	==> kube-scheduler [035a8a81715a43c1c3615b6915b53e57d03cf826f008f7607bcf38aadecb4718] <==
	I1102 13:21:43.011910       1 serving.go:386] Generated self-signed cert in-memory
	W1102 13:21:45.390930       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1102 13:21:45.390958       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1102 13:21:45.390968       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1102 13:21:45.390976       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1102 13:21:45.584351       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1102 13:21:45.584384       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1102 13:21:45.601627       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1102 13:21:45.601670       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1102 13:21:45.602521       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1102 13:21:45.618238       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1102 13:21:45.703180       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1102 13:22:09.566697       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1102 13:22:09.566800       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1102 13:22:09.566811       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1102 13:22:09.566831       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1102 13:22:09.567051       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1102 13:22:09.567087       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [0ad9e5fb31d778fc7566b9fa05abb6d592bd7672ec2fd91c1cf31c6b31c69950] <==
	I1102 13:22:24.031806       1 serving.go:386] Generated self-signed cert in-memory
	W1102 13:22:27.322993       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1102 13:22:27.323037       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1102 13:22:27.323047       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1102 13:22:27.323054       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1102 13:22:27.413215       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1102 13:22:27.413510       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1102 13:22:27.416038       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1102 13:22:27.421229       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1102 13:22:27.421298       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1102 13:22:27.427800       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1102 13:22:27.528284       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
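	
	Both scheduler instances log the same startup warning: the system:kube-scheduler user cannot read the extension-apiserver-authentication ConfigMap, and scheduling proceeds without that lookup. Following the hint printed in the log, a minimal sketch of the suggested binding (the rolebinding name here is hypothetical):
	
	  kubectl -n kube-system create rolebinding scheduler-auth-reader \
	    --role=extension-apiserver-authentication-reader \
	    --user=system:kube-scheduler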
	
	
	==> kubelet <==
	Nov 02 13:30:24 functional-082350 kubelet[3907]: E1102 13:30:24.179238    3907 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-tn9zm" podUID="ac9a69d9-7322-42db-8ba8-05fb4ae7a680"
	Nov 02 13:30:26 functional-082350 kubelet[3907]: E1102 13:30:26.178998    3907 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-skmb2" podUID="33cb6995-bc92-4922-b9f4-f4ca9f69abca"
	Nov 02 13:30:35 functional-082350 kubelet[3907]: E1102 13:30:35.178936    3907 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-tn9zm" podUID="ac9a69d9-7322-42db-8ba8-05fb4ae7a680"
	Nov 02 13:30:40 functional-082350 kubelet[3907]: E1102 13:30:40.179037    3907 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-skmb2" podUID="33cb6995-bc92-4922-b9f4-f4ca9f69abca"
	Nov 02 13:30:46 functional-082350 kubelet[3907]: E1102 13:30:46.179482    3907 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-tn9zm" podUID="ac9a69d9-7322-42db-8ba8-05fb4ae7a680"
	Nov 02 13:30:53 functional-082350 kubelet[3907]: E1102 13:30:53.178997    3907 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-skmb2" podUID="33cb6995-bc92-4922-b9f4-f4ca9f69abca"
	Nov 02 13:30:58 functional-082350 kubelet[3907]: E1102 13:30:58.179529    3907 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-tn9zm" podUID="ac9a69d9-7322-42db-8ba8-05fb4ae7a680"
	Nov 02 13:31:08 functional-082350 kubelet[3907]: E1102 13:31:08.179094    3907 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-skmb2" podUID="33cb6995-bc92-4922-b9f4-f4ca9f69abca"
	Nov 02 13:31:12 functional-082350 kubelet[3907]: E1102 13:31:12.179469    3907 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-tn9zm" podUID="ac9a69d9-7322-42db-8ba8-05fb4ae7a680"
	Nov 02 13:31:22 functional-082350 kubelet[3907]: E1102 13:31:22.180006    3907 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-skmb2" podUID="33cb6995-bc92-4922-b9f4-f4ca9f69abca"
	Nov 02 13:31:27 functional-082350 kubelet[3907]: E1102 13:31:27.179268    3907 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-tn9zm" podUID="ac9a69d9-7322-42db-8ba8-05fb4ae7a680"
	Nov 02 13:31:36 functional-082350 kubelet[3907]: E1102 13:31:36.178893    3907 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-skmb2" podUID="33cb6995-bc92-4922-b9f4-f4ca9f69abca"
	Nov 02 13:31:38 functional-082350 kubelet[3907]: E1102 13:31:38.179505    3907 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-tn9zm" podUID="ac9a69d9-7322-42db-8ba8-05fb4ae7a680"
	Nov 02 13:31:49 functional-082350 kubelet[3907]: E1102 13:31:49.179220    3907 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-skmb2" podUID="33cb6995-bc92-4922-b9f4-f4ca9f69abca"
	Nov 02 13:31:49 functional-082350 kubelet[3907]: E1102 13:31:49.179351    3907 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-tn9zm" podUID="ac9a69d9-7322-42db-8ba8-05fb4ae7a680"
	Nov 02 13:32:00 functional-082350 kubelet[3907]: E1102 13:32:00.178990    3907 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-skmb2" podUID="33cb6995-bc92-4922-b9f4-f4ca9f69abca"
	Nov 02 13:32:03 functional-082350 kubelet[3907]: E1102 13:32:03.178833    3907 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-tn9zm" podUID="ac9a69d9-7322-42db-8ba8-05fb4ae7a680"
	Nov 02 13:32:15 functional-082350 kubelet[3907]: E1102 13:32:15.179243    3907 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-skmb2" podUID="33cb6995-bc92-4922-b9f4-f4ca9f69abca"
	Nov 02 13:32:18 functional-082350 kubelet[3907]: E1102 13:32:18.179402    3907 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-tn9zm" podUID="ac9a69d9-7322-42db-8ba8-05fb4ae7a680"
	Nov 02 13:32:26 functional-082350 kubelet[3907]: E1102 13:32:26.179662    3907 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-skmb2" podUID="33cb6995-bc92-4922-b9f4-f4ca9f69abca"
	Nov 02 13:32:30 functional-082350 kubelet[3907]: E1102 13:32:30.179302    3907 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-tn9zm" podUID="ac9a69d9-7322-42db-8ba8-05fb4ae7a680"
	Nov 02 13:32:40 functional-082350 kubelet[3907]: E1102 13:32:40.179381    3907 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-skmb2" podUID="33cb6995-bc92-4922-b9f4-f4ca9f69abca"
	Nov 02 13:32:43 functional-082350 kubelet[3907]: E1102 13:32:43.179287    3907 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-tn9zm" podUID="ac9a69d9-7322-42db-8ba8-05fb4ae7a680"
	Nov 02 13:32:51 functional-082350 kubelet[3907]: E1102 13:32:51.178935    3907 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-skmb2" podUID="33cb6995-bc92-4922-b9f4-f4ca9f69abca"
	Nov 02 13:32:58 functional-082350 kubelet[3907]: E1102 13:32:58.178851    3907 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-tn9zm" podUID="ac9a69d9-7322-42db-8ba8-05fb4ae7a680"
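	
	Every kubelet error above is one failure repeated: with CRI-O's short-name-mode set to "enforcing", the unqualified reference kicbase/echo-server matches more than one configured search registry and the pull is refused as ambiguous. A minimal sketch of the node-side alternative to fully qualifying the image name, assuming CRI-O reads the standard containers registries config (drop-in path and registry choice are assumptions):
	
	  # /etc/containers/registries.conf.d/99-short-names.conf -- restart crio after editing
	  unqualified-search-registries = ["docker.io"]
	  # or relax enforcement instead of narrowing the search list:
	  short-name-mode = "permissive"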
	
	
	==> storage-provisioner [1cb292b9f03ccb2f7992c5eeed7c35c15630a61b98ec0439bd9334a17126826a] <==
	W1102 13:32:36.902141       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:32:38.906017       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:32:38.913823       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:32:40.917170       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:32:40.921963       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:32:42.925244       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:32:42.932059       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:32:44.934864       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:32:44.939378       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:32:46.942381       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:32:46.946736       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:32:48.949706       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:32:48.954334       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:32:50.957044       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:32:50.964187       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:32:52.968087       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:32:52.972958       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:32:54.975591       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:32:54.980018       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:32:56.982511       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:32:56.989322       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:32:58.992412       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:32:58.996916       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:33:01.001479       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:33:01.007819       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [e103ed229eb2be9f3df0c654e76dbf7f77ce98393a08ad90a6c3289555fc77c2] <==
	I1102 13:21:41.813587       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1102 13:21:45.701975       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1102 13:21:45.702099       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1102 13:21:45.723104       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:21:49.190656       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:21:53.450890       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:21:57.049822       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:22:00.135566       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:22:03.158459       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:22:03.166240       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1102 13:22:03.166385       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1102 13:22:03.166572       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-082350_37718a02-cfd4-4176-96ed-c6b6084857c4!
	I1102 13:22:03.166827       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"21474f69-4d4a-4898-9818-be16d991c255", APIVersion:"v1", ResourceVersion:"559", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-082350_37718a02-cfd4-4176-96ed-c6b6084857c4 became leader
	W1102 13:22:03.178323       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:22:03.187101       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1102 13:22:03.267626       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-082350_37718a02-cfd4-4176-96ed-c6b6084857c4!
	W1102 13:22:05.190334       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:22:05.195853       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:22:07.199368       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:22:07.203904       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:22:09.207318       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 13:22:09.218939       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
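	
	Both provisioner instances still take their leader-election lock on a v1 Endpoints object (kube-system/k8s.io-minikube-hostpath), which is what triggers the deprecation warning on every renewal; current client-go leader election uses a coordination.k8s.io/v1 Lease instead. A minimal sketch for inspecting both objects, assuming kubectl access to this cluster:
	
	  kubectl -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml
	  kubectl -n kube-system get leases   # Lease-based locks would appear here instead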
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-082350 -n functional-082350
helpers_test.go:269: (dbg) Run:  kubectl --context functional-082350 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-node-75c85bcc94-tn9zm hello-node-connect-7d85dfc575-skmb2
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-082350 describe pod hello-node-75c85bcc94-tn9zm hello-node-connect-7d85dfc575-skmb2
helpers_test.go:290: (dbg) kubectl --context functional-082350 describe pod hello-node-75c85bcc94-tn9zm hello-node-connect-7d85dfc575-skmb2:

                                                
                                                
-- stdout --
	Name:             hello-node-75c85bcc94-tn9zm
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-082350/192.168.49.2
	Start Time:       Sun, 02 Nov 2025 13:23:14 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tcndt (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-tcndt:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m48s                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-tn9zm to functional-082350
	  Normal   Pulling    7m1s (x5 over 9m48s)    kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m1s (x5 over 9m48s)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m1s (x5 over 9m48s)    kubelet            Error: ErrImagePull
	  Normal   BackOff    4m35s (x21 over 9m48s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4m35s (x21 over 9m48s)  kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-skmb2
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-082350/192.168.49.2
	Start Time:       Sun, 02 Nov 2025 13:22:59 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hd7jr (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-hd7jr:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-skmb2 to functional-082350
	  Normal   Pulling    7m9s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m9s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m9s (x5 over 10m)    kubelet            Error: ErrImagePull
	  Normal   BackOff    4m54s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4m54s (x21 over 10m)  kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (603.80s)
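Analysis: both hello-node pods above are stuck in ImagePullBackOff for the same reason. CRI-O resolves image references through containers-registries.conf, and with short-name mode set to "enforcing" an unqualified name such as kicbase/echo-server that matches more than one unqualified-search registry is rejected ("returns ambiguous list") rather than pulled. A minimal sketch of a workaround, assuming the image is published on Docker Hub (not verified against this job's registry configuration): a fully qualified reference bypasses short-name resolution entirely.

    # hypothetical repro, not part of the recorded run
    kubectl --context functional-082350 set image deployment/hello-node-connect \
      echo-server=docker.io/kicbase/echo-server:latest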

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (600.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-082350 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-082350 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-tn9zm" [ac9a69d9-7322-42db-8ba8-05fb4ae7a680] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
E1102 13:23:43.375869  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/addons-230560/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 13:25:59.510673  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/addons-230560/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 13:26:27.218004  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/addons-230560/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 13:30:59.510675  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/addons-230560/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-082350 -n functional-082350
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-11-02 13:33:15.531456639 +0000 UTC m=+1228.800074655
functional_test.go:1460: (dbg) Run:  kubectl --context functional-082350 describe po hello-node-75c85bcc94-tn9zm -n default
functional_test.go:1460: (dbg) kubectl --context functional-082350 describe po hello-node-75c85bcc94-tn9zm -n default:
Name:             hello-node-75c85bcc94-tn9zm
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-082350/192.168.49.2
Start Time:       Sun, 02 Nov 2025 13:23:14 +0000
Labels:           app=hello-node
                  pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
  IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tcndt (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-tcndt:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-tn9zm to functional-082350
  Normal   Pulling    7m13s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     7m13s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     7m13s (x5 over 10m)   kubelet            Error: ErrImagePull
  Normal   BackOff    4m47s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
  Warning  Failed     4m47s (x21 over 10m)  kubelet            Error: ImagePullBackOff
functional_test.go:1460: (dbg) Run:  kubectl --context functional-082350 logs hello-node-75c85bcc94-tn9zm -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-082350 logs hello-node-75c85bcc94-tn9zm -n default: exit status 1 (126.080018ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-tn9zm" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-082350 logs hello-node-75c85bcc94-tn9zm -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.99s)
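Analysis: DeployApp fails on the identical short-name pull error; the 10m0s wait at functional_test.go:1460 simply polls the pod list until the deadline. What the harness is effectively waiting for can be expressed as a one-liner (illustrative sketch only, not part of the recorded run):

    kubectl --context functional-082350 wait pod -l app=hello-node \
      --for=condition=Ready --timeout=10m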

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-082350 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-082350 service --namespace=default --https --url hello-node: exit status 115 (526.590081ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:30209
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-082350 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.53s)
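Analysis: SVC_UNREACHABLE is a downstream symptom of the DeployApp failure above: the NodePort (30209) was allocated, but minikube service declines to report a URL for a service with no running backing pod. The Format and URL subtests below fail the same check. Confirming the empty endpoint set directly (sketch, not from the recorded run):

    kubectl --context functional-082350 get endpoints hello-node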

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-082350 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-082350 service hello-node --url --format={{.IP}}: exit status 115 (474.196702ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-082350 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.47s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-082350 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-082350 service hello-node --url: exit status 115 (533.239722ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:30209
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-082350 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30209
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.53s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-082350 image load --daemon kicbase/echo-server:functional-082350 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-arm64 -p functional-082350 image load --daemon kicbase/echo-server:functional-082350 --alsologtostderr: (2.057786873s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-082350 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-082350" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.33s)
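Analysis: image load --daemon copies the tag from the host Docker daemon into the cluster node's CRI-O image store, which is what the follow-up image ls inspects. A hedged way to double-check the store from inside the node, assuming crictl ships in the kicbase image as it normally does:

    out/minikube-linux-arm64 -p functional-082350 ssh -- sudo crictl images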

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-082350 image load --daemon kicbase/echo-server:functional-082350 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-082350 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-082350" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.55s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-082350
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-082350 image load --daemon kicbase/echo-server:functional-082350 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-082350 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-082350" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.19s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-082350 image save kicbase/echo-server:functional-082350 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.33s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-082350 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I1102 13:33:29.855044  322786 out.go:360] Setting OutFile to fd 1 ...
	I1102 13:33:29.855197  322786 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:33:29.855208  322786 out.go:374] Setting ErrFile to fd 2...
	I1102 13:33:29.855213  322786 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:33:29.855474  322786 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-293314/.minikube/bin
	I1102 13:33:29.856114  322786 config.go:182] Loaded profile config "functional-082350": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:33:29.856231  322786 config.go:182] Loaded profile config "functional-082350": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:33:29.856676  322786 cli_runner.go:164] Run: docker container inspect functional-082350 --format={{.State.Status}}
	I1102 13:33:29.875415  322786 ssh_runner.go:195] Run: systemctl --version
	I1102 13:33:29.875478  322786 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-082350
	I1102 13:33:29.894244  322786 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/functional-082350/id_rsa Username:docker}
	I1102 13:33:30.005624  322786 cache_images.go:291] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar
	W1102 13:33:30.005708  322786 cache_images.go:255] Failed to load cached images for "functional-082350": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar: no such file or directory
	I1102 13:33:30.005730  322786 cache_images.go:267] failed pushing to: functional-082350

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.24s)
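Analysis: this failure is purely downstream of ImageSaveToFile above: the stat error in the stderr shows the tarball was never written, so load has nothing to push. An explicit guard would make the dependency visible (illustrative only):

    test -f /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar \
      || echo "echo-server-save.tar missing: the earlier image save step failed"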

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-082350
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-082350 image save --daemon kicbase/echo-server:functional-082350 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-082350
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-082350: exit status 1 (16.079585ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-082350

                                                
                                                
** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-082350

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.39s)
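Analysis: image save --daemon is expected to export the image from the cluster back into the host Docker daemon, and the test then looks for it as localhost/kicbase/echo-server:functional-082350; since the image never made it into the cluster runtime, there was nothing to export and the inspect returns an empty list. Listing any matching references on the host (sketch):

    docker image ls --filter reference='localhost/kicbase/echo-server*'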

                                                
                                    
TestJSONOutput/pause/Command (2.38s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-430911 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p json-output-430911 --output=json --user=testUser: exit status 80 (2.379235075s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"c6706c45-9248-4ce1-b1a9-fd882182fc87","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-430911 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"943bcde4-9a73-45d3-a5e2-98e770691cce","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-11-02T13:46:14Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"b0f87c8f-b3f1-4b92-a6ad-3187427d8d1d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 pause -p json-output-430911 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (2.38s)
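Analysis: GUEST_PAUSE surfaces a container-runtime state problem rather than a Kubernetes one: minikube pause enumerates containers via sudo runc list -f json, and "open /run/runc: no such file or directory" means there is no runc state directory at that root inside this node. The unpause subtest below trips over the same probe. Reproducing it by hand (sketch, not from the recorded run):

    out/minikube-linux-arm64 -p json-output-430911 ssh -- sudo runc list -f json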

                                                
                                    
TestJSONOutput/unpause/Command (1.69s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-430911 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 unpause -p json-output-430911 --output=json --user=testUser: exit status 80 (1.689037721s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"1a0296fc-2753-4d81-93fe-842089dd78bc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-430911 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"2a44093d-12da-496c-9e1e-4f11f4e2486e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-11-02T13:46:15Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"9b1f4951-8f44-4e03-b46a-a70f773048c9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 unpause -p json-output-430911 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.69s)

                                                
                                    
TestScheduledStopUnix (40.99s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-398568 --memory=3072 --driver=docker  --container-runtime=crio
E1102 14:00:59.514841  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/addons-230560/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-398568 --memory=3072 --driver=docker  --container-runtime=crio: (35.881300334s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-398568 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-398568 -n scheduled-stop-398568
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-398568 --schedule 15s
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:98: process 424068 running but should have been killed on reschedule of stop
panic.go:636: *** TestScheduledStopUnix FAILED at 2025-11-02 14:01:15.586336421 +0000 UTC m=+2908.854954494
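Analysis: minikube stop --schedule appears to daemonize a background minikube process that sleeps until the deadline, and rescheduling is supposed to kill the previous daemon before recording a new one; the assertion at scheduled_stop_test.go:98 found PID 424068 still alive after the reschedule. Inspecting the leftover process (illustrative, Linux host assumed):

    ps -o pid,etime,args -p 424068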
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestScheduledStopUnix]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestScheduledStopUnix]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect scheduled-stop-398568
helpers_test.go:243: (dbg) docker inspect scheduled-stop-398568:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8099820240f43e7319553e57e21721a62b25fe0aeb907985033dc0330a4cc856",
	        "Created": "2025-11-02T14:00:44.643984062Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 422227,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-02T14:00:44.711507577Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/8099820240f43e7319553e57e21721a62b25fe0aeb907985033dc0330a4cc856/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8099820240f43e7319553e57e21721a62b25fe0aeb907985033dc0330a4cc856/hostname",
	        "HostsPath": "/var/lib/docker/containers/8099820240f43e7319553e57e21721a62b25fe0aeb907985033dc0330a4cc856/hosts",
	        "LogPath": "/var/lib/docker/containers/8099820240f43e7319553e57e21721a62b25fe0aeb907985033dc0330a4cc856/8099820240f43e7319553e57e21721a62b25fe0aeb907985033dc0330a4cc856-json.log",
	        "Name": "/scheduled-stop-398568",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "scheduled-stop-398568:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "scheduled-stop-398568",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8099820240f43e7319553e57e21721a62b25fe0aeb907985033dc0330a4cc856",
	                "LowerDir": "/var/lib/docker/overlay2/80f9779310946e6e570cf34f887411709aed910730b05f3618bf6ca15eac6d73-init/diff:/var/lib/docker/overlay2/53058afe6acd7391f639f551e07f446f6a39a991b699aea18507cb21a1652e97/diff",
	                "MergedDir": "/var/lib/docker/overlay2/80f9779310946e6e570cf34f887411709aed910730b05f3618bf6ca15eac6d73/merged",
	                "UpperDir": "/var/lib/docker/overlay2/80f9779310946e6e570cf34f887411709aed910730b05f3618bf6ca15eac6d73/diff",
	                "WorkDir": "/var/lib/docker/overlay2/80f9779310946e6e570cf34f887411709aed910730b05f3618bf6ca15eac6d73/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "scheduled-stop-398568",
	                "Source": "/var/lib/docker/volumes/scheduled-stop-398568/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "scheduled-stop-398568",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "scheduled-stop-398568",
	                "name.minikube.sigs.k8s.io": "scheduled-stop-398568",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2875b0a0bff92a5be92beb0c3a67932b7dd50806755afeb9d7bd088124216b89",
	            "SandboxKey": "/var/run/docker/netns/2875b0a0bff9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33333"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33334"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33337"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33335"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33336"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "scheduled-stop-398568": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9a:91:d3:96:af:0f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cc9a74036298038e0f5817aef77dbe129afc7787ea6429c1ab07b05f79065faa",
	                    "EndpointID": "35b416f55cc8c44ed912b29a5e26ae97ebd7b90b572f9f032a2f6c005a6fbcfd",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "scheduled-stop-398568",
	                        "8099820240f4"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-398568 -n scheduled-stop-398568
helpers_test.go:252: <<< TestScheduledStopUnix FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestScheduledStopUnix]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p scheduled-stop-398568 logs -n 25
helpers_test.go:260: TestScheduledStopUnix logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                           ARGS                                                                            │        PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p multinode-731545                                                                                                                                       │ multinode-731545      │ jenkins │ v1.37.0 │ 02 Nov 25 13:55 UTC │ 02 Nov 25 13:55 UTC │
	│ start   │ -p multinode-731545 --wait=true -v=5 --alsologtostderr                                                                                                    │ multinode-731545      │ jenkins │ v1.37.0 │ 02 Nov 25 13:55 UTC │ 02 Nov 25 13:56 UTC │
	│ node    │ list -p multinode-731545                                                                                                                                  │ multinode-731545      │ jenkins │ v1.37.0 │ 02 Nov 25 13:56 UTC │                     │
	│ node    │ multinode-731545 node delete m03                                                                                                                          │ multinode-731545      │ jenkins │ v1.37.0 │ 02 Nov 25 13:56 UTC │ 02 Nov 25 13:56 UTC │
	│ stop    │ multinode-731545 stop                                                                                                                                     │ multinode-731545      │ jenkins │ v1.37.0 │ 02 Nov 25 13:56 UTC │ 02 Nov 25 13:56 UTC │
	│ start   │ -p multinode-731545 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio                                                          │ multinode-731545      │ jenkins │ v1.37.0 │ 02 Nov 25 13:56 UTC │ 02 Nov 25 13:57 UTC │
	│ node    │ list -p multinode-731545                                                                                                                                  │ multinode-731545      │ jenkins │ v1.37.0 │ 02 Nov 25 13:57 UTC │                     │
	│ start   │ -p multinode-731545-m02 --driver=docker  --container-runtime=crio                                                                                         │ multinode-731545-m02  │ jenkins │ v1.37.0 │ 02 Nov 25 13:57 UTC │                     │
	│ start   │ -p multinode-731545-m03 --driver=docker  --container-runtime=crio                                                                                         │ multinode-731545-m03  │ jenkins │ v1.37.0 │ 02 Nov 25 13:57 UTC │ 02 Nov 25 13:58 UTC │
	│ node    │ add -p multinode-731545                                                                                                                                   │ multinode-731545      │ jenkins │ v1.37.0 │ 02 Nov 25 13:58 UTC │                     │
	│ delete  │ -p multinode-731545-m03                                                                                                                                   │ multinode-731545-m03  │ jenkins │ v1.37.0 │ 02 Nov 25 13:58 UTC │ 02 Nov 25 13:58 UTC │
	│ delete  │ -p multinode-731545                                                                                                                                       │ multinode-731545      │ jenkins │ v1.37.0 │ 02 Nov 25 13:58 UTC │ 02 Nov 25 13:58 UTC │
	│ start   │ -p test-preload-997485 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0 │ test-preload-997485   │ jenkins │ v1.37.0 │ 02 Nov 25 13:58 UTC │ 02 Nov 25 13:59 UTC │
	│ image   │ test-preload-997485 image pull gcr.io/k8s-minikube/busybox                                                                                                │ test-preload-997485   │ jenkins │ v1.37.0 │ 02 Nov 25 13:59 UTC │ 02 Nov 25 13:59 UTC │
	│ stop    │ -p test-preload-997485                                                                                                                                    │ test-preload-997485   │ jenkins │ v1.37.0 │ 02 Nov 25 13:59 UTC │ 02 Nov 25 13:59 UTC │
	│ start   │ -p test-preload-997485 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio                                         │ test-preload-997485   │ jenkins │ v1.37.0 │ 02 Nov 25 13:59 UTC │ 02 Nov 25 14:00 UTC │
	│ image   │ test-preload-997485 image list                                                                                                                            │ test-preload-997485   │ jenkins │ v1.37.0 │ 02 Nov 25 14:00 UTC │ 02 Nov 25 14:00 UTC │
	│ delete  │ -p test-preload-997485                                                                                                                                    │ test-preload-997485   │ jenkins │ v1.37.0 │ 02 Nov 25 14:00 UTC │ 02 Nov 25 14:00 UTC │
	│ start   │ -p scheduled-stop-398568 --memory=3072 --driver=docker  --container-runtime=crio                                                                          │ scheduled-stop-398568 │ jenkins │ v1.37.0 │ 02 Nov 25 14:00 UTC │ 02 Nov 25 14:01 UTC │
	│ stop    │ -p scheduled-stop-398568 --schedule 5m                                                                                                                    │ scheduled-stop-398568 │ jenkins │ v1.37.0 │ 02 Nov 25 14:01 UTC │                     │
	│ stop    │ -p scheduled-stop-398568 --schedule 5m                                                                                                                    │ scheduled-stop-398568 │ jenkins │ v1.37.0 │ 02 Nov 25 14:01 UTC │                     │
	│ stop    │ -p scheduled-stop-398568 --schedule 5m                                                                                                                    │ scheduled-stop-398568 │ jenkins │ v1.37.0 │ 02 Nov 25 14:01 UTC │                     │
	│ stop    │ -p scheduled-stop-398568 --schedule 15s                                                                                                                   │ scheduled-stop-398568 │ jenkins │ v1.37.0 │ 02 Nov 25 14:01 UTC │                     │
	│ stop    │ -p scheduled-stop-398568 --schedule 15s                                                                                                                   │ scheduled-stop-398568 │ jenkins │ v1.37.0 │ 02 Nov 25 14:01 UTC │                     │
	│ stop    │ -p scheduled-stop-398568 --schedule 15s                                                                                                                   │ scheduled-stop-398568 │ jenkins │ v1.37.0 │ 02 Nov 25 14:01 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/02 14:00:39
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1102 14:00:39.223096  421834 out.go:360] Setting OutFile to fd 1 ...
	I1102 14:00:39.223202  421834 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 14:00:39.223206  421834 out.go:374] Setting ErrFile to fd 2...
	I1102 14:00:39.223210  421834 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 14:00:39.223512  421834 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-293314/.minikube/bin
	I1102 14:00:39.223945  421834 out.go:368] Setting JSON to false
	I1102 14:00:39.224783  421834 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":9792,"bootTime":1762082248,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1102 14:00:39.224841  421834 start.go:143] virtualization:  
	I1102 14:00:39.229012  421834 out.go:179] * [scheduled-stop-398568] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1102 14:00:39.233832  421834 out.go:179]   - MINIKUBE_LOCATION=21808
	I1102 14:00:39.233975  421834 notify.go:221] Checking for updates...
	I1102 14:00:39.240831  421834 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1102 14:00:39.244087  421834 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-293314/kubeconfig
	I1102 14:00:39.247262  421834 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-293314/.minikube
	I1102 14:00:39.250402  421834 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1102 14:00:39.253579  421834 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1102 14:00:39.256938  421834 driver.go:422] Setting default libvirt URI to qemu:///system
	I1102 14:00:39.283797  421834 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1102 14:00:39.283898  421834 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 14:00:39.340335  421834 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-11-02 14:00:39.331258753 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1102 14:00:39.340439  421834 docker.go:319] overlay module found
	I1102 14:00:39.343615  421834 out.go:179] * Using the docker driver based on user configuration
	I1102 14:00:39.346640  421834 start.go:309] selected driver: docker
	I1102 14:00:39.346651  421834 start.go:930] validating driver "docker" against <nil>
	I1102 14:00:39.346663  421834 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1102 14:00:39.347385  421834 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 14:00:39.406128  421834 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-11-02 14:00:39.396684578 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
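
Note: the `docker system info --format "{{json .}}"` call above is how minikube snapshots daemon capabilities (CPU count, memory, cgroup driver, architecture) before validating the docker driver. A minimal way to eyeball the same fields from a shell — the jq filter here is illustrative, not part of minikube:

    docker system info --format '{{json .}}' \
      | jq '{ncpu: .NCPU, mem: .MemTotal, cgroup: .CgroupDriver, arch: .Architecture}'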
	I1102 14:00:39.406268  421834 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1102 14:00:39.406489  421834 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1102 14:00:39.409688  421834 out.go:179] * Using Docker driver with root privileges
	I1102 14:00:39.412654  421834 cni.go:84] Creating CNI manager for ""
	I1102 14:00:39.412718  421834 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1102 14:00:39.412726  421834 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1102 14:00:39.412825  421834 start.go:353] cluster config:
	{Name:scheduled-stop-398568 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:scheduled-stop-398568 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1102 14:00:39.416046  421834 out.go:179] * Starting "scheduled-stop-398568" primary control-plane node in "scheduled-stop-398568" cluster
	I1102 14:00:39.418854  421834 cache.go:124] Beginning downloading kic base image for docker with crio
	I1102 14:00:39.421840  421834 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1102 14:00:39.424790  421834 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1102 14:00:39.424849  421834 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1102 14:00:39.424855  421834 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21808-293314/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1102 14:00:39.424872  421834 cache.go:59] Caching tarball of preloaded images
	I1102 14:00:39.424969  421834 preload.go:233] Found /home/jenkins/minikube-integration/21808-293314/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1102 14:00:39.424978  421834 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1102 14:00:39.425367  421834 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/scheduled-stop-398568/config.json ...
	I1102 14:00:39.425398  421834 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/scheduled-stop-398568/config.json: {Name:mk10bba71b553f5eeb76f7f0039333e777d052e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
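
Note: the cluster config dumped above is persisted verbatim as the profile's config.json (the "Saving config" line just above), so the same fields can be inspected after the run. A hypothetical spot-check with jq, field names taken from the dump:

    jq '.KubernetesConfig.KubernetesVersion, .Nodes[0].ContainerRuntime' \
      /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/scheduled-stop-398568/config.json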
	I1102 14:00:39.444295  421834 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1102 14:00:39.444311  421834 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1102 14:00:39.444322  421834 cache.go:233] Successfully downloaded all kic artifacts
	I1102 14:00:39.444344  421834 start.go:360] acquireMachinesLock for scheduled-stop-398568: {Name:mk89c7b78518a874ce00bc6a59944f9067620e95 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1102 14:00:39.444456  421834 start.go:364] duration metric: took 97.724µs to acquireMachinesLock for "scheduled-stop-398568"
	I1102 14:00:39.444480  421834 start.go:93] Provisioning new machine with config: &{Name:scheduled-stop-398568 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:scheduled-stop-398568 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1102 14:00:39.444547  421834 start.go:125] createHost starting for "" (driver="docker")
	I1102 14:00:39.449800  421834 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1102 14:00:39.450045  421834 start.go:159] libmachine.API.Create for "scheduled-stop-398568" (driver="docker")
	I1102 14:00:39.450088  421834 client.go:173] LocalClient.Create starting
	I1102 14:00:39.450184  421834 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem
	I1102 14:00:39.450217  421834 main.go:143] libmachine: Decoding PEM data...
	I1102 14:00:39.450229  421834 main.go:143] libmachine: Parsing certificate...
	I1102 14:00:39.450280  421834 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21808-293314/.minikube/certs/cert.pem
	I1102 14:00:39.450295  421834 main.go:143] libmachine: Decoding PEM data...
	I1102 14:00:39.450304  421834 main.go:143] libmachine: Parsing certificate...
	I1102 14:00:39.450724  421834 cli_runner.go:164] Run: docker network inspect scheduled-stop-398568 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1102 14:00:39.466001  421834 cli_runner.go:211] docker network inspect scheduled-stop-398568 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1102 14:00:39.466066  421834 network_create.go:284] running [docker network inspect scheduled-stop-398568] to gather additional debugging logs...
	I1102 14:00:39.466081  421834 cli_runner.go:164] Run: docker network inspect scheduled-stop-398568
	W1102 14:00:39.481637  421834 cli_runner.go:211] docker network inspect scheduled-stop-398568 returned with exit code 1
	I1102 14:00:39.481656  421834 network_create.go:287] error running [docker network inspect scheduled-stop-398568]: docker network inspect scheduled-stop-398568: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network scheduled-stop-398568 not found
	I1102 14:00:39.481679  421834 network_create.go:289] output of [docker network inspect scheduled-stop-398568]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network scheduled-stop-398568 not found
	
	** /stderr **
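
Note: the exit status 1 here is the expected negative probe, not a failure: minikube inspects the per-profile network first and only creates it when the inspect comes back "not found". The equivalent check in shell form (illustrative):

    if ! docker network inspect scheduled-stop-398568 >/dev/null 2>&1; then
      echo "network absent; safe to create"
    fi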
	I1102 14:00:39.481768  421834 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1102 14:00:39.498796  421834 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-ddf319108ac9 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:66:f7:2d:49:67:ff} reservation:<nil>}
	I1102 14:00:39.499180  421834 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-30b945568040 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:b6:b2:b0:cb:49:d7} reservation:<nil>}
	I1102 14:00:39.499424  421834 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-d23a3a2e266d IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:fa:42:95:8e:ae:52} reservation:<nil>}
	I1102 14:00:39.499887  421834 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019c29e0}
	I1102 14:00:39.499915  421834 network_create.go:124] attempt to create docker network scheduled-stop-398568 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1102 14:00:39.499987  421834 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=scheduled-stop-398568 scheduled-stop-398568
	I1102 14:00:39.555921  421834 network_create.go:108] docker network scheduled-stop-398568 192.168.76.0/24 created
	I1102 14:00:39.555944  421834 kic.go:121] calculated static IP "192.168.76.2" for the "scheduled-stop-398568" container
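
Note: the three "skipping subnet" lines show the free-subnet scan: starting at 192.168.49.0/24 and stepping through candidate /24s (49 → 58 → 67 → 76 in this run) until one has no backing bridge interface. To reproduce the "taken" view by hand (illustrative):

    docker network inspect $(docker network ls -q) \
      --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}}{{end}}'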
	I1102 14:00:39.556030  421834 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1102 14:00:39.570671  421834 cli_runner.go:164] Run: docker volume create scheduled-stop-398568 --label name.minikube.sigs.k8s.io=scheduled-stop-398568 --label created_by.minikube.sigs.k8s.io=true
	I1102 14:00:39.588797  421834 oci.go:103] Successfully created a docker volume scheduled-stop-398568
	I1102 14:00:39.588897  421834 cli_runner.go:164] Run: docker run --rm --name scheduled-stop-398568-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=scheduled-stop-398568 --entrypoint /usr/bin/test -v scheduled-stop-398568:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1102 14:00:40.109791  421834 oci.go:107] Successfully prepared a docker volume scheduled-stop-398568
	I1102 14:00:40.109843  421834 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1102 14:00:40.109863  421834 kic.go:194] Starting extracting preloaded images to volume ...
	I1102 14:00:40.109933  421834 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21808-293314/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v scheduled-stop-398568:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1102 14:00:44.576045  421834 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21808-293314/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v scheduled-stop-398568:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.466076863s)
	I1102 14:00:44.576068  421834 kic.go:203] duration metric: took 4.466201377s to extract preloaded images to volume ...
	W1102 14:00:44.576210  421834 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1102 14:00:44.576319  421834 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1102 14:00:44.629571  421834 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname scheduled-stop-398568 --name scheduled-stop-398568 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=scheduled-stop-398568 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=scheduled-stop-398568 --network scheduled-stop-398568 --ip 192.168.76.2 --volume scheduled-stop-398568:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1102 14:00:44.933656  421834 cli_runner.go:164] Run: docker container inspect scheduled-stop-398568 --format={{.State.Running}}
	I1102 14:00:44.952479  421834 cli_runner.go:164] Run: docker container inspect scheduled-stop-398568 --format={{.State.Status}}
	I1102 14:00:44.979049  421834 cli_runner.go:164] Run: docker exec scheduled-stop-398568 stat /var/lib/dpkg/alternatives/iptables
	I1102 14:00:45.078871  421834 oci.go:144] the created container "scheduled-stop-398568" has a running status.
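
Note: every service port in the `docker run` above (22, 2376, 5000, 8443, 32443) is published to an ephemeral 127.0.0.1 port, which is why the SSH client below dials 127.0.0.1:33333 rather than 192.168.76.2:22. The mapping can be read back with (illustrative):

    docker port scheduled-stop-398568 22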
	I1102 14:00:45.078893  421834 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21808-293314/.minikube/machines/scheduled-stop-398568/id_rsa...
	I1102 14:00:45.381036  421834 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21808-293314/.minikube/machines/scheduled-stop-398568/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1102 14:00:45.424914  421834 cli_runner.go:164] Run: docker container inspect scheduled-stop-398568 --format={{.State.Status}}
	I1102 14:00:45.456660  421834 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1102 14:00:45.456672  421834 kic_runner.go:114] Args: [docker exec --privileged scheduled-stop-398568 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1102 14:00:45.527300  421834 cli_runner.go:164] Run: docker container inspect scheduled-stop-398568 --format={{.State.Status}}
	I1102 14:00:45.557009  421834 machine.go:94] provisionDockerMachine start ...
	I1102 14:00:45.557115  421834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-398568
	I1102 14:00:45.584359  421834 main.go:143] libmachine: Using SSH client type: native
	I1102 14:00:45.584696  421834 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33333 <nil> <nil>}
	I1102 14:00:45.584704  421834 main.go:143] libmachine: About to run SSH command:
	hostname
	I1102 14:00:45.585349  421834 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:32986->127.0.0.1:33333: read: connection reset by peer
	I1102 14:00:48.738321  421834 main.go:143] libmachine: SSH cmd err, output: <nil>: scheduled-stop-398568
	
	I1102 14:00:48.738335  421834 ubuntu.go:182] provisioning hostname "scheduled-stop-398568"
	I1102 14:00:48.738400  421834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-398568
	I1102 14:00:48.755014  421834 main.go:143] libmachine: Using SSH client type: native
	I1102 14:00:48.755371  421834 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33333 <nil> <nil>}
	I1102 14:00:48.755381  421834 main.go:143] libmachine: About to run SSH command:
	sudo hostname scheduled-stop-398568 && echo "scheduled-stop-398568" | sudo tee /etc/hostname
	I1102 14:00:48.920813  421834 main.go:143] libmachine: SSH cmd err, output: <nil>: scheduled-stop-398568
	
	I1102 14:00:48.920893  421834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-398568
	I1102 14:00:48.938913  421834 main.go:143] libmachine: Using SSH client type: native
	I1102 14:00:48.939210  421834 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33333 <nil> <nil>}
	I1102 14:00:48.939224  421834 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sscheduled-stop-398568' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 scheduled-stop-398568/g' /etc/hosts;
				else 
					echo '127.0.1.1 scheduled-stop-398568' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1102 14:00:49.086817  421834 main.go:143] libmachine: SSH cmd err, output: <nil>: 
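
Note: the /etc/hosts script above is idempotent: it rewrites an existing 127.0.1.1 entry in place or appends one, so re-provisioning the same machine never duplicates the line. A quick check from the host (illustrative):

    docker exec scheduled-stop-398568 grep 127.0.1.1 /etc/hosts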
	I1102 14:00:49.086835  421834 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21808-293314/.minikube CaCertPath:/home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21808-293314/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21808-293314/.minikube}
	I1102 14:00:49.086862  421834 ubuntu.go:190] setting up certificates
	I1102 14:00:49.086870  421834 provision.go:84] configureAuth start
	I1102 14:00:49.086936  421834 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" scheduled-stop-398568
	I1102 14:00:49.103835  421834 provision.go:143] copyHostCerts
	I1102 14:00:49.103893  421834 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-293314/.minikube/ca.pem, removing ...
	I1102 14:00:49.103901  421834 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-293314/.minikube/ca.pem
	I1102 14:00:49.103974  421834 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21808-293314/.minikube/ca.pem (1082 bytes)
	I1102 14:00:49.104069  421834 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-293314/.minikube/cert.pem, removing ...
	I1102 14:00:49.104073  421834 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-293314/.minikube/cert.pem
	I1102 14:00:49.104097  421834 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21808-293314/.minikube/cert.pem (1123 bytes)
	I1102 14:00:49.104155  421834 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-293314/.minikube/key.pem, removing ...
	I1102 14:00:49.104162  421834 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-293314/.minikube/key.pem
	I1102 14:00:49.104185  421834 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21808-293314/.minikube/key.pem (1675 bytes)
	I1102 14:00:49.104241  421834 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21808-293314/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca-key.pem org=jenkins.scheduled-stop-398568 san=[127.0.0.1 192.168.76.2 localhost minikube scheduled-stop-398568]
	I1102 14:00:49.957691  421834 provision.go:177] copyRemoteCerts
	I1102 14:00:49.957749  421834 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1102 14:00:49.957790  421834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-398568
	I1102 14:00:49.974176  421834 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33333 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/scheduled-stop-398568/id_rsa Username:docker}
	I1102 14:00:50.078791  421834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1102 14:00:50.098571  421834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1102 14:00:50.118032  421834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1102 14:00:50.136695  421834 provision.go:87] duration metric: took 1.049803233s to configureAuth
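
Note: the san=[...] list in the "generating server cert" line above ends up in the certificate's Subject Alternative Name extension; that is what lets the TLS client verify the machine whether it connects by 127.0.0.1, the container IP, or the hostname. To confirm the SANs on the generated cert (illustrative):

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/21808-293314/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'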
	I1102 14:00:50.136714  421834 ubuntu.go:206] setting minikube options for container-runtime
	I1102 14:00:50.136898  421834 config.go:182] Loaded profile config "scheduled-stop-398568": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 14:00:50.137001  421834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-398568
	I1102 14:00:50.153978  421834 main.go:143] libmachine: Using SSH client type: native
	I1102 14:00:50.154286  421834 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33333 <nil> <nil>}
	I1102 14:00:50.154300  421834 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1102 14:00:50.408310  421834 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1102 14:00:50.408323  421834 machine.go:97] duration metric: took 4.851302972s to provisionDockerMachine
	I1102 14:00:50.408331  421834 client.go:176] duration metric: took 10.958238094s to LocalClient.Create
	I1102 14:00:50.408353  421834 start.go:167] duration metric: took 10.958304663s to libmachine.API.Create "scheduled-stop-398568"
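
Note: the printf/tee block above leaves a one-line environment file that CRI-O's systemd unit reads. Reconstructed from the command (the file itself is not echoed in this log), it should look like:

    # /etc/sysconfig/crio.minikube
    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '

Treating the whole service CIDR as an insecure registry lets in-cluster registries be pulled from without TLS.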
	I1102 14:00:50.408360  421834 start.go:293] postStartSetup for "scheduled-stop-398568" (driver="docker")
	I1102 14:00:50.408369  421834 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1102 14:00:50.408447  421834 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1102 14:00:50.408485  421834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-398568
	I1102 14:00:50.425617  421834 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33333 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/scheduled-stop-398568/id_rsa Username:docker}
	I1102 14:00:50.530669  421834 ssh_runner.go:195] Run: cat /etc/os-release
	I1102 14:00:50.533967  421834 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1102 14:00:50.533986  421834 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1102 14:00:50.533997  421834 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-293314/.minikube/addons for local assets ...
	I1102 14:00:50.534054  421834 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-293314/.minikube/files for local assets ...
	I1102 14:00:50.534131  421834 filesync.go:149] local asset: /home/jenkins/minikube-integration/21808-293314/.minikube/files/etc/ssl/certs/2951742.pem -> 2951742.pem in /etc/ssl/certs
	I1102 14:00:50.534231  421834 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1102 14:00:50.541813  421834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/files/etc/ssl/certs/2951742.pem --> /etc/ssl/certs/2951742.pem (1708 bytes)
	I1102 14:00:50.559469  421834 start.go:296] duration metric: took 151.095363ms for postStartSetup
	I1102 14:00:50.559835  421834 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" scheduled-stop-398568
	I1102 14:00:50.576220  421834 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/scheduled-stop-398568/config.json ...
	I1102 14:00:50.576517  421834 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1102 14:00:50.576563  421834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-398568
	I1102 14:00:50.592686  421834 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33333 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/scheduled-stop-398568/id_rsa Username:docker}
	I1102 14:00:50.691441  421834 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1102 14:00:50.696176  421834 start.go:128] duration metric: took 11.251615016s to createHost
	I1102 14:00:50.696191  421834 start.go:83] releasing machines lock for "scheduled-stop-398568", held for 11.25172783s
	I1102 14:00:50.696263  421834 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" scheduled-stop-398568
	I1102 14:00:50.713153  421834 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/295174.pem (1338 bytes)
	W1102 14:00:50.713197  421834 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-293314/.minikube/certs/295174_empty.pem, impossibly tiny 0 bytes
	I1102 14:00:50.713205  421834 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca-key.pem (1679 bytes)
	I1102 14:00:50.713227  421834 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem (1082 bytes)
	I1102 14:00:50.713251  421834 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/cert.pem (1123 bytes)
	I1102 14:00:50.713271  421834 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/key.pem (1675 bytes)
	I1102 14:00:50.713311  421834 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/files/etc/ssl/certs/2951742.pem (1708 bytes)
	I1102 14:00:50.713370  421834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/certs/295174.pem --> /usr/share/ca-certificates/295174.pem (1338 bytes)
	I1102 14:00:50.713424  421834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-398568
	I1102 14:00:50.730882  421834 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33333 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/scheduled-stop-398568/id_rsa Username:docker}
	I1102 14:00:50.844575  421834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/files/etc/ssl/certs/2951742.pem --> /usr/share/ca-certificates/2951742.pem (1708 bytes)
	I1102 14:00:50.862301  421834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1102 14:00:50.880177  421834 ssh_runner.go:195] Run: openssl version
	I1102 14:00:50.886266  421834 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/295174.pem && ln -fs /usr/share/ca-certificates/295174.pem /etc/ssl/certs/295174.pem"
	I1102 14:00:50.894600  421834 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/295174.pem
	I1102 14:00:50.898296  421834 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  2 13:20 /usr/share/ca-certificates/295174.pem
	I1102 14:00:50.898359  421834 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/295174.pem
	I1102 14:00:50.939643  421834 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/295174.pem /etc/ssl/certs/51391683.0"
	I1102 14:00:50.947853  421834 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2951742.pem && ln -fs /usr/share/ca-certificates/2951742.pem /etc/ssl/certs/2951742.pem"
	I1102 14:00:50.956148  421834 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2951742.pem
	I1102 14:00:50.959794  421834 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  2 13:20 /usr/share/ca-certificates/2951742.pem
	I1102 14:00:50.959849  421834 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2951742.pem
	I1102 14:00:51.000881  421834 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2951742.pem /etc/ssl/certs/3ec20f2e.0"
	I1102 14:00:51.010354  421834 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1102 14:00:51.020403  421834 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1102 14:00:51.024691  421834 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  2 13:13 /usr/share/ca-certificates/minikubeCA.pem
	I1102 14:00:51.024753  421834 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1102 14:00:51.066107  421834 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1102 14:00:51.074972  421834 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-certificates >/dev/null 2>&1 && sudo update-ca-certificates || true"
	I1102 14:00:51.078434  421834 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-trust >/dev/null 2>&1 && sudo update-ca-trust extract || true"
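
Note: the opaque link names above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject-hash names: `openssl x509 -hash` prints the hash of the cert's subject, and the ".0" suffix disambiguates collisions. The test -L/ln -fs commands above condense to this pattern (illustrative):

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/$h.0"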
	I1102 14:00:51.081913  421834 ssh_runner.go:195] Run: cat /version.json
	I1102 14:00:51.081985  421834 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1102 14:00:51.176843  421834 ssh_runner.go:195] Run: systemctl --version
	I1102 14:00:51.183511  421834 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1102 14:00:51.220277  421834 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1102 14:00:51.224678  421834 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1102 14:00:51.224743  421834 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1102 14:00:51.253438  421834 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1102 14:00:51.253459  421834 start.go:496] detecting cgroup driver to use...
	I1102 14:00:51.253490  421834 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1102 14:00:51.253553  421834 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1102 14:00:51.270170  421834 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1102 14:00:51.283043  421834 docker.go:218] disabling cri-docker service (if available) ...
	I1102 14:00:51.283097  421834 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1102 14:00:51.300677  421834 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1102 14:00:51.319377  421834 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1102 14:00:51.454119  421834 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1102 14:00:51.585376  421834 docker.go:234] disabling docker service ...
	I1102 14:00:51.585435  421834 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1102 14:00:51.609684  421834 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1102 14:00:51.622695  421834 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1102 14:00:51.735916  421834 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1102 14:00:51.859433  421834 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1102 14:00:51.873235  421834 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1102 14:00:51.887541  421834 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1102 14:00:51.887608  421834 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 14:00:51.896338  421834 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1102 14:00:51.896408  421834 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 14:00:51.905158  421834 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 14:00:51.913970  421834 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 14:00:51.923190  421834 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1102 14:00:51.931337  421834 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 14:00:51.940029  421834 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 14:00:51.952918  421834 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 14:00:51.962346  421834 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1102 14:00:51.969756  421834 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1102 14:00:51.977067  421834 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 14:00:52.095616  421834 ssh_runner.go:195] Run: sudo systemctl restart crio
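
Note: taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly these settings (reconstructed from the commands; the file is not dumped in this log):

    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]

The daemon-reload plus `systemctl restart crio` then picks the drop-in up before minikube starts waiting on the CRI socket.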
	I1102 14:00:52.213969  421834 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1102 14:00:52.214031  421834 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1102 14:00:52.218058  421834 start.go:564] Will wait 60s for crictl version
	I1102 14:00:52.218112  421834 ssh_runner.go:195] Run: which crictl
	I1102 14:00:52.221599  421834 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1102 14:00:52.248799  421834 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1102 14:00:52.248891  421834 ssh_runner.go:195] Run: crio --version
	I1102 14:00:52.276394  421834 ssh_runner.go:195] Run: crio --version
	I1102 14:00:52.308886  421834 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1102 14:00:52.311818  421834 cli_runner.go:164] Run: docker network inspect scheduled-stop-398568 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1102 14:00:52.327967  421834 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1102 14:00:52.332145  421834 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1102 14:00:52.342013  421834 kubeadm.go:884] updating cluster {Name:scheduled-stop-398568 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:scheduled-stop-398568 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1102 14:00:52.342120  421834 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1102 14:00:52.342176  421834 ssh_runner.go:195] Run: sudo crictl images --output json
	I1102 14:00:52.379069  421834 crio.go:514] all images are preloaded for cri-o runtime.
	I1102 14:00:52.379080  421834 crio.go:433] Images already preloaded, skipping extraction
	I1102 14:00:52.379138  421834 ssh_runner.go:195] Run: sudo crictl images --output json
	I1102 14:00:52.407897  421834 crio.go:514] all images are preloaded for cri-o runtime.
	I1102 14:00:52.407910  421834 cache_images.go:86] Images are preloaded, skipping loading
	I1102 14:00:52.407916  421834 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1102 14:00:52.407997  421834 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=scheduled-stop-398568 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:scheduled-stop-398568 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
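
Note: the [Unit]/[Service]/[Install] fragment above is written out as a systemd drop-in (the scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below). The empty `ExecStart=` followed by a full `ExecStart=/var/lib/minikube/binaries/...` line is the standard drop-in idiom for replacing, rather than appending to, the unit's command line. To view the merged unit on the node (illustrative):

    systemctl cat kubelet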
	I1102 14:00:52.408079  421834 ssh_runner.go:195] Run: crio config
	I1102 14:00:52.468812  421834 cni.go:84] Creating CNI manager for ""
	I1102 14:00:52.468824  421834 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1102 14:00:52.468833  421834 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1102 14:00:52.468855  421834 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:scheduled-stop-398568 NodeName:scheduled-stop-398568 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1102 14:00:52.468977  421834 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "scheduled-stop-398568"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1102 14:00:52.469046  421834 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1102 14:00:52.476909  421834 binaries.go:44] Found k8s binaries, skipping transfer
	I1102 14:00:52.476972  421834 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1102 14:00:52.484730  421834 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (371 bytes)
	I1102 14:00:52.498142  421834 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1102 14:00:52.511254  421834 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
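
Note: the generated kubeadm manifest shown earlier is staged as /var/tmp/minikube/kubeadm.yaml.new (the scp line above) before being reconciled with any existing copy. Assuming a recent kubeadm, the staged file can be sanity-checked in place — the `kubeadm config validate` subcommand exists in this kubeadm generation, but this exact invocation is an assumption, not something the log runs:

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new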
	I1102 14:00:52.524228  421834 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1102 14:00:52.527738  421834 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1102 14:00:52.537699  421834 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 14:00:52.652936  421834 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1102 14:00:52.669332  421834 certs.go:69] Setting up /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/scheduled-stop-398568 for IP: 192.168.76.2
	I1102 14:00:52.669342  421834 certs.go:195] generating shared ca certs ...
	I1102 14:00:52.669359  421834 certs.go:227] acquiring lock for ca certs: {Name:mkead50075949a3cdc798f9c0149a2bc2638cbbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 14:00:52.669538  421834 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21808-293314/.minikube/ca.key
	I1102 14:00:52.669586  421834 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21808-293314/.minikube/proxy-client-ca.key
	I1102 14:00:52.669593  421834 certs.go:257] generating profile certs ...
	I1102 14:00:52.669649  421834 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/scheduled-stop-398568/client.key
	I1102 14:00:52.669658  421834 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/scheduled-stop-398568/client.crt with IP's: []
	I1102 14:00:53.152262  421834 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/scheduled-stop-398568/client.crt ...
	I1102 14:00:53.152278  421834 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/scheduled-stop-398568/client.crt: {Name:mk0f9cd2e4ed83f27fd3dc00e469d5c1c718a692 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 14:00:53.152497  421834 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/scheduled-stop-398568/client.key ...
	I1102 14:00:53.152505  421834 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/scheduled-stop-398568/client.key: {Name:mkafa4293fffd6a9e8bcb178eea72c00cb4c6402 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 14:00:53.152603  421834 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/scheduled-stop-398568/apiserver.key.ef312222
	I1102 14:00:53.152615  421834 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/scheduled-stop-398568/apiserver.crt.ef312222 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1102 14:00:54.767866  421834 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/scheduled-stop-398568/apiserver.crt.ef312222 ...
	I1102 14:00:54.767884  421834 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/scheduled-stop-398568/apiserver.crt.ef312222: {Name:mk355ba95af3c90a232f7674d6297d48814fe100 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 14:00:54.768083  421834 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/scheduled-stop-398568/apiserver.key.ef312222 ...
	I1102 14:00:54.768090  421834 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/scheduled-stop-398568/apiserver.key.ef312222: {Name:mk2a233a678cfe1e70626a94731cbdc5f2c0ca14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 14:00:54.768190  421834 certs.go:382] copying /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/scheduled-stop-398568/apiserver.crt.ef312222 -> /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/scheduled-stop-398568/apiserver.crt
	I1102 14:00:54.768268  421834 certs.go:386] copying /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/scheduled-stop-398568/apiserver.key.ef312222 -> /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/scheduled-stop-398568/apiserver.key
	I1102 14:00:54.768320  421834 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/scheduled-stop-398568/proxy-client.key
	I1102 14:00:54.768332  421834 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/scheduled-stop-398568/proxy-client.crt with IP's: []
	I1102 14:00:55.338883  421834 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/scheduled-stop-398568/proxy-client.crt ...
	I1102 14:00:55.338898  421834 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/scheduled-stop-398568/proxy-client.crt: {Name:mk6334b70d0db139206b538296c11009175f2508 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 14:00:55.339092  421834 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/scheduled-stop-398568/proxy-client.key ...
	I1102 14:00:55.339100  421834 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/scheduled-stop-398568/proxy-client.key: {Name:mk127593e58df8f28fbc250aca204e5b6b9b771a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 14:00:55.339297  421834 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/295174.pem (1338 bytes)
	W1102 14:00:55.339330  421834 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-293314/.minikube/certs/295174_empty.pem, impossibly tiny 0 bytes
	I1102 14:00:55.339338  421834 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca-key.pem (1679 bytes)
	I1102 14:00:55.339366  421834 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem (1082 bytes)
	I1102 14:00:55.339391  421834 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/cert.pem (1123 bytes)
	I1102 14:00:55.339414  421834 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/key.pem (1675 bytes)
	I1102 14:00:55.339456  421834 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/files/etc/ssl/certs/2951742.pem (1708 bytes)
	I1102 14:00:55.340037  421834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1102 14:00:55.359365  421834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1102 14:00:55.379537  421834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1102 14:00:55.398752  421834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1102 14:00:55.417604  421834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/scheduled-stop-398568/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1102 14:00:55.436182  421834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/scheduled-stop-398568/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1102 14:00:55.454838  421834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/scheduled-stop-398568/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1102 14:00:55.473497  421834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/scheduled-stop-398568/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1102 14:00:55.491935  421834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/files/etc/ssl/certs/2951742.pem --> /usr/share/ca-certificates/2951742.pem (1708 bytes)
	I1102 14:00:55.509974  421834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1102 14:00:55.528890  421834 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/certs/295174.pem --> /usr/share/ca-certificates/295174.pem (1338 bytes)
	I1102 14:00:55.547234  421834 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1102 14:00:55.561200  421834 ssh_runner.go:195] Run: openssl version
	I1102 14:00:55.567750  421834 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2951742.pem && ln -fs /usr/share/ca-certificates/2951742.pem /etc/ssl/certs/2951742.pem"
	I1102 14:00:55.576912  421834 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2951742.pem
	I1102 14:00:55.581256  421834 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  2 13:20 /usr/share/ca-certificates/2951742.pem
	I1102 14:00:55.581315  421834 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2951742.pem
	I1102 14:00:55.622676  421834 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2951742.pem /etc/ssl/certs/3ec20f2e.0"
	I1102 14:00:55.630842  421834 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1102 14:00:55.639474  421834 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1102 14:00:55.643423  421834 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  2 13:13 /usr/share/ca-certificates/minikubeCA.pem
	I1102 14:00:55.643483  421834 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1102 14:00:55.685252  421834 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1102 14:00:55.693901  421834 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/295174.pem && ln -fs /usr/share/ca-certificates/295174.pem /etc/ssl/certs/295174.pem"
	I1102 14:00:55.702577  421834 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/295174.pem
	I1102 14:00:55.707104  421834 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  2 13:20 /usr/share/ca-certificates/295174.pem
	I1102 14:00:55.707160  421834 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/295174.pem
	I1102 14:00:55.748692  421834 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/295174.pem /etc/ssl/certs/51391683.0"
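
The 3ec20f2e.0- and b5213941.0-style link targets above are OpenSSL subject-hash filenames: "openssl x509 -hash" prints a hash of the certificate's subject name, and a symlink named <hash>.0 in /etc/ssl/certs is what OpenSSL's CApath lookup resolves. A minimal sketch of the same installation step (the certificate path is illustrative, not taken from this run):

	# compute the subject hash and install the CA under /etc/ssl/certs
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
	# CApath-based verification now finds the CA via the <hash>.0 link
	openssl verify -CApath /etc/ssl/certs /path/to/some-server-cert.pem
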
	I1102 14:00:55.757158  421834 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1102 14:00:55.760919  421834 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1102 14:00:55.760974  421834 kubeadm.go:401] StartCluster: {Name:scheduled-stop-398568 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:scheduled-stop-398568 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1102 14:00:55.761048  421834 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1102 14:00:55.761108  421834 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1102 14:00:55.789244  421834 cri.go:89] found id: ""
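
The empty found id: "" result above is how minikube distinguishes a first start from a restart; the same CRI query can be repeated on the node with the exact flags from the log:

	# list all kube-system containers (running or exited), IDs only
	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# no output, as in this run, means no prior control-plane containers exist
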
	I1102 14:00:55.789317  421834 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1102 14:00:55.797412  421834 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1102 14:00:55.805235  421834 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1102 14:00:55.805290  421834 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1102 14:00:55.813404  421834 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1102 14:00:55.813414  421834 kubeadm.go:158] found existing configuration files:
	
	I1102 14:00:55.813463  421834 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1102 14:00:55.821568  421834 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1102 14:00:55.821647  421834 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1102 14:00:55.829265  421834 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1102 14:00:55.836906  421834 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1102 14:00:55.836970  421834 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1102 14:00:55.844667  421834 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1102 14:00:55.852249  421834 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1102 14:00:55.852302  421834 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1102 14:00:55.859734  421834 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1102 14:00:55.867619  421834 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1102 14:00:55.867680  421834 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
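
The four grep-then-rm exchanges above all follow one pattern: any /etc/kubernetes/*.conf that does not already point at the minikube control-plane endpoint is treated as stale and removed before kubeadm init runs. The loop below is a generalized sketch of that pattern, not minikube source:

	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q "https://control-plane.minikube.internal:8443" \
	    "/etc/kubernetes/${f}.conf" || sudo rm -f "/etc/kubernetes/${f}.conf"
	done
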
	I1102 14:00:55.875324  421834 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1102 14:00:55.943355  421834 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1102 14:00:55.943587  421834 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1102 14:00:56.009490  421834 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1102 14:01:13.320215  421834 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1102 14:01:13.320276  421834 kubeadm.go:319] [preflight] Running pre-flight checks
	I1102 14:01:13.320369  421834 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1102 14:01:13.320448  421834 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1102 14:01:13.320495  421834 kubeadm.go:319] OS: Linux
	I1102 14:01:13.320545  421834 kubeadm.go:319] CGROUPS_CPU: enabled
	I1102 14:01:13.320597  421834 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1102 14:01:13.320646  421834 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1102 14:01:13.320695  421834 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1102 14:01:13.320759  421834 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1102 14:01:13.320812  421834 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1102 14:01:13.320859  421834 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1102 14:01:13.320909  421834 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1102 14:01:13.320956  421834 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1102 14:01:13.321031  421834 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1102 14:01:13.321129  421834 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1102 14:01:13.321223  421834 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1102 14:01:13.321287  421834 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1102 14:01:13.324211  421834 out.go:252]   - Generating certificates and keys ...
	I1102 14:01:13.324304  421834 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1102 14:01:13.324368  421834 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1102 14:01:13.324436  421834 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1102 14:01:13.324507  421834 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1102 14:01:13.324569  421834 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1102 14:01:13.324620  421834 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1102 14:01:13.324674  421834 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1102 14:01:13.324803  421834 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost scheduled-stop-398568] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1102 14:01:13.324855  421834 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1102 14:01:13.324981  421834 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost scheduled-stop-398568] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1102 14:01:13.325047  421834 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1102 14:01:13.325112  421834 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1102 14:01:13.325166  421834 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1102 14:01:13.325224  421834 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1102 14:01:13.325276  421834 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1102 14:01:13.325333  421834 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1102 14:01:13.325391  421834 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1102 14:01:13.325455  421834 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1102 14:01:13.325511  421834 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1102 14:01:13.325594  421834 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1102 14:01:13.325662  421834 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1102 14:01:13.328632  421834 out.go:252]   - Booting up control plane ...
	I1102 14:01:13.328741  421834 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1102 14:01:13.328825  421834 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1102 14:01:13.328894  421834 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1102 14:01:13.329002  421834 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1102 14:01:13.329102  421834 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1102 14:01:13.329243  421834 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1102 14:01:13.329336  421834 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1102 14:01:13.329420  421834 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1102 14:01:13.329571  421834 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1102 14:01:13.329687  421834 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1102 14:01:13.329761  421834 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001759616s
	I1102 14:01:13.329878  421834 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1102 14:01:13.329965  421834 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1102 14:01:13.330057  421834 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1102 14:01:13.330142  421834 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1102 14:01:13.330219  421834 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.027254722s
	I1102 14:01:13.330305  421834 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 4.267216734s
	I1102 14:01:13.330376  421834 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.001550336s
	I1102 14:01:13.330509  421834 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1102 14:01:13.330716  421834 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1102 14:01:13.330800  421834 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1102 14:01:13.331019  421834 kubeadm.go:319] [mark-control-plane] Marking the node scheduled-stop-398568 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1102 14:01:13.331105  421834 kubeadm.go:319] [bootstrap-token] Using token: xub0di.jsruo7urplr7indx
	I1102 14:01:13.334103  421834 out.go:252]   - Configuring RBAC rules ...
	I1102 14:01:13.334254  421834 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1102 14:01:13.334361  421834 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1102 14:01:13.334521  421834 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1102 14:01:13.334700  421834 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1102 14:01:13.334822  421834 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1102 14:01:13.334911  421834 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1102 14:01:13.335042  421834 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1102 14:01:13.335086  421834 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1102 14:01:13.335133  421834 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1102 14:01:13.335136  421834 kubeadm.go:319] 
	I1102 14:01:13.335198  421834 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1102 14:01:13.335201  421834 kubeadm.go:319] 
	I1102 14:01:13.335288  421834 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1102 14:01:13.335291  421834 kubeadm.go:319] 
	I1102 14:01:13.335317  421834 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1102 14:01:13.335382  421834 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1102 14:01:13.335434  421834 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1102 14:01:13.335437  421834 kubeadm.go:319] 
	I1102 14:01:13.335493  421834 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1102 14:01:13.335496  421834 kubeadm.go:319] 
	I1102 14:01:13.335545  421834 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1102 14:01:13.335548  421834 kubeadm.go:319] 
	I1102 14:01:13.335612  421834 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1102 14:01:13.335689  421834 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1102 14:01:13.335759  421834 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1102 14:01:13.335763  421834 kubeadm.go:319] 
	I1102 14:01:13.335849  421834 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1102 14:01:13.335928  421834 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1102 14:01:13.335937  421834 kubeadm.go:319] 
	I1102 14:01:13.336024  421834 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token xub0di.jsruo7urplr7indx \
	I1102 14:01:13.336137  421834 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:bd4a1f3bddc85f3fc83315ad33165a30aa1cba7ce55898ef9dad8dcc7e8d0eec \
	I1102 14:01:13.336157  421834 kubeadm.go:319] 	--control-plane 
	I1102 14:01:13.336160  421834 kubeadm.go:319] 
	I1102 14:01:13.336248  421834 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1102 14:01:13.336251  421834 kubeadm.go:319] 
	I1102 14:01:13.336336  421834 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token xub0di.jsruo7urplr7indx \
	I1102 14:01:13.336456  421834 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:bd4a1f3bddc85f3fc83315ad33165a30aa1cba7ce55898ef9dad8dcc7e8d0eec 
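
The --discovery-token-ca-cert-hash printed in the join commands above can be recomputed from the cluster CA, which this run keeps under /var/lib/minikube/certs (per the "[certs] Using certificateDir" line above), using the pipeline documented by kubeadm:

	# recompute the CA public-key hash that kubeadm embeds in the join command
	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'
	# the output should match the bd4a1f3b... hash shown above
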
	I1102 14:01:13.336463  421834 cni.go:84] Creating CNI manager for ""
	I1102 14:01:13.336468  421834 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1102 14:01:13.339620  421834 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1102 14:01:13.342679  421834 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1102 14:01:13.348105  421834 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1102 14:01:13.348117  421834 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1102 14:01:13.364099  421834 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1102 14:01:13.660035  421834 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1102 14:01:13.660177  421834 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 14:01:13.660245  421834 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes scheduled-stop-398568 minikube.k8s.io/updated_at=2025_11_02T14_01_13_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=170a9221ec214abbddb4c7cdac340516a92b239a minikube.k8s.io/name=scheduled-stop-398568 minikube.k8s.io/primary=true
	I1102 14:01:13.810517  421834 ops.go:34] apiserver oom_adj: -16
	I1102 14:01:13.810541  421834 kubeadm.go:1114] duration metric: took 150.417982ms to wait for elevateKubeSystemPrivileges
	I1102 14:01:13.810552  421834 kubeadm.go:403] duration metric: took 18.049583108s to StartCluster
	I1102 14:01:13.810566  421834 settings.go:142] acquiring lock: {Name:mk95f66b3b15e63f58f8c9085c1ffe67cc396dc1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 14:01:13.810650  421834 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21808-293314/kubeconfig
	I1102 14:01:13.811291  421834 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-293314/kubeconfig: {Name:mke5a65554da8fc0fd6a2ea60bed899d5b38ce09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 14:01:13.811499  421834 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1102 14:01:13.811606  421834 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1102 14:01:13.811842  421834 config.go:182] Loaded profile config "scheduled-stop-398568": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 14:01:13.811832  421834 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1102 14:01:13.811960  421834 addons.go:70] Setting storage-provisioner=true in profile "scheduled-stop-398568"
	I1102 14:01:13.811984  421834 addons.go:239] Setting addon storage-provisioner=true in "scheduled-stop-398568"
	I1102 14:01:13.812015  421834 host.go:66] Checking if "scheduled-stop-398568" exists ...
	I1102 14:01:13.812027  421834 addons.go:70] Setting default-storageclass=true in profile "scheduled-stop-398568"
	I1102 14:01:13.812041  421834 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "scheduled-stop-398568"
	I1102 14:01:13.812388  421834 cli_runner.go:164] Run: docker container inspect scheduled-stop-398568 --format={{.State.Status}}
	I1102 14:01:13.812599  421834 cli_runner.go:164] Run: docker container inspect scheduled-stop-398568 --format={{.State.Status}}
	I1102 14:01:13.816686  421834 out.go:179] * Verifying Kubernetes components...
	I1102 14:01:13.819679  421834 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 14:01:13.861546  421834 addons.go:239] Setting addon default-storageclass=true in "scheduled-stop-398568"
	I1102 14:01:13.861577  421834 host.go:66] Checking if "scheduled-stop-398568" exists ...
	I1102 14:01:13.862004  421834 cli_runner.go:164] Run: docker container inspect scheduled-stop-398568 --format={{.State.Status}}
	I1102 14:01:13.867831  421834 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1102 14:01:13.871234  421834 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1102 14:01:13.871245  421834 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1102 14:01:13.871312  421834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-398568
	I1102 14:01:13.907671  421834 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1102 14:01:13.907704  421834 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1102 14:01:13.907800  421834 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" scheduled-stop-398568
	I1102 14:01:13.928576  421834 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33333 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/scheduled-stop-398568/id_rsa Username:docker}
	I1102 14:01:13.956826  421834 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33333 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/scheduled-stop-398568/id_rsa Username:docker}
	I1102 14:01:14.096090  421834 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1102 14:01:14.151635  421834 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1102 14:01:14.239131  421834 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1102 14:01:14.316202  421834 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1102 14:01:14.458316  421834 start.go:1013] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
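
The sed pipeline at 14:01:14.096090 rewrites the coredns ConfigMap so that pods resolve host.minikube.internal to the docker gateway (192.168.76.1 here). The result can be inspected with the same kubectl the test drives over SSH; the expected fragment is sketched in the comment:

	kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
	# expected to contain, ahead of the forward plugin:
	#     hosts {
	#        192.168.76.1 host.minikube.internal
	#        fallthrough
	#     }
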
	I1102 14:01:14.460084  421834 api_server.go:52] waiting for apiserver process to appear ...
	I1102 14:01:14.460130  421834 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1102 14:01:14.772310  421834 api_server.go:72] duration metric: took 960.786839ms to wait for apiserver process to appear ...
	I1102 14:01:14.772321  421834 api_server.go:88] waiting for apiserver healthz status ...
	I1102 14:01:14.772337  421834 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1102 14:01:14.784189  421834 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1102 14:01:14.785716  421834 api_server.go:141] control plane version: v1.34.1
	I1102 14:01:14.785730  421834 api_server.go:131] duration metric: took 13.40335ms to wait for apiserver health ...
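
The healthz probe above is plain HTTPS and easy to repeat by hand; /healthz and /version are readable anonymously under the default system:public-info-viewer binding, so no kubeconfig is needed (assuming anonymous auth is left at its default, as in a stock minikube cluster):

	# -k because the apiserver certificate is not in the local trust store
	curl -k https://192.168.76.2:8443/healthz   # prints: ok
	curl -k https://192.168.76.2:8443/version   # reports gitVersion v1.34.1
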
	I1102 14:01:14.785737  421834 system_pods.go:43] waiting for kube-system pods to appear ...
	I1102 14:01:14.792064  421834 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1102 14:01:14.794802  421834 system_pods.go:59] 5 kube-system pods found
	I1102 14:01:14.794824  421834 system_pods.go:61] "etcd-scheduled-stop-398568" [28da39f9-f986-4511-b57e-087d4e71a927] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1102 14:01:14.794832  421834 system_pods.go:61] "kube-apiserver-scheduled-stop-398568" [23c0f090-57fb-430a-9079-46bdb14469a8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1102 14:01:14.794839  421834 system_pods.go:61] "kube-controller-manager-scheduled-stop-398568" [db7a7f68-0b33-4549-99a5-e5fdcc3bb6c6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1102 14:01:14.794845  421834 system_pods.go:61] "kube-scheduler-scheduled-stop-398568" [38b65e95-3bae-4724-a026-407b314b8f71] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1102 14:01:14.794850  421834 system_pods.go:61] "storage-provisioner" [ae75f1c8-d05b-4c4d-9b36-804d819f13b7] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1102 14:01:14.794856  421834 system_pods.go:74] duration metric: took 9.113993ms to wait for pod list to return data ...
	I1102 14:01:14.794867  421834 kubeadm.go:587] duration metric: took 983.34799ms to wait for: map[apiserver:true system_pods:true]
	I1102 14:01:14.794879  421834 node_conditions.go:102] verifying NodePressure condition ...
	I1102 14:01:14.794932  421834 addons.go:515] duration metric: took 983.08831ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1102 14:01:14.798384  421834 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1102 14:01:14.798402  421834 node_conditions.go:123] node cpu capacity is 2
	I1102 14:01:14.798414  421834 node_conditions.go:105] duration metric: took 3.530346ms to run NodePressure ...
	I1102 14:01:14.798424  421834 start.go:242] waiting for startup goroutines ...
	I1102 14:01:14.962188  421834 kapi.go:214] "coredns" deployment in "kube-system" namespace and "scheduled-stop-398568" context rescaled to 1 replicas
	I1102 14:01:14.962214  421834 start.go:247] waiting for cluster config update ...
	I1102 14:01:14.962225  421834 start.go:256] writing updated cluster config ...
	I1102 14:01:14.962517  421834 ssh_runner.go:195] Run: rm -f paused
	I1102 14:01:15.033936  421834 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1102 14:01:15.037676  421834 out.go:179] * Done! kubectl is now configured to use "scheduled-stop-398568" cluster and "default" namespace by default
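
Once "Done!" is printed, the updated kubeconfig can be sanity-checked from the host; minikube names the context after the profile, so the current context should match the profile name (a quick check, not part of this test's assertions):

	kubectl config current-context   # expected: scheduled-stop-398568
	kubectl get nodes -o wide        # one control-plane node at 192.168.76.2
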
	
	
	==> CRI-O <==
	Nov 02 14:01:06 scheduled-stop-398568 crio[873]: time="2025-11-02T14:01:06.169087238Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 14:01:06 scheduled-stop-398568 crio[873]: time="2025-11-02T14:01:06.169727692Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 14:01:06 scheduled-stop-398568 crio[873]: time="2025-11-02T14:01:06.170967515Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 14:01:06 scheduled-stop-398568 crio[873]: time="2025-11-02T14:01:06.171463728Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 14:01:06 scheduled-stop-398568 crio[873]: time="2025-11-02T14:01:06.175717277Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.1" id=c671a486-0ba7-449c-818b-5a836f570705 name=/runtime.v1.ImageService/ImageStatus
	Nov 02 14:01:06 scheduled-stop-398568 crio[873]: time="2025-11-02T14:01:06.183158644Z" level=info msg="Creating container: kube-system/kube-scheduler-scheduled-stop-398568/kube-scheduler" id=8fd12a1b-945a-4b7f-9fb9-35da769374d8 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 14:01:06 scheduled-stop-398568 crio[873]: time="2025-11-02T14:01:06.183279934Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 14:01:06 scheduled-stop-398568 crio[873]: time="2025-11-02T14:01:06.195709615Z" level=info msg="Creating container: kube-system/kube-apiserver-scheduled-stop-398568/kube-apiserver" id=5529952e-4dd2-4c5f-84e3-71cf61604d0c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 14:01:06 scheduled-stop-398568 crio[873]: time="2025-11-02T14:01:06.195846585Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 14:01:06 scheduled-stop-398568 crio[873]: time="2025-11-02T14:01:06.202677251Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 14:01:06 scheduled-stop-398568 crio[873]: time="2025-11-02T14:01:06.206442522Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 14:01:06 scheduled-stop-398568 crio[873]: time="2025-11-02T14:01:06.211888895Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 14:01:06 scheduled-stop-398568 crio[873]: time="2025-11-02T14:01:06.212509696Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 14:01:06 scheduled-stop-398568 crio[873]: time="2025-11-02T14:01:06.221175994Z" level=info msg="Created container 76da88277aa5fdcb196cdd20f3de12d2fc0b5e97fe2535b4d03751fa08264b06: kube-system/kube-controller-manager-scheduled-stop-398568/kube-controller-manager" id=e56bc8a3-1f66-4a85-82c4-52f13e0ee5f7 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 14:01:06 scheduled-stop-398568 crio[873]: time="2025-11-02T14:01:06.223845091Z" level=info msg="Starting container: 76da88277aa5fdcb196cdd20f3de12d2fc0b5e97fe2535b4d03751fa08264b06" id=947b2685-76cf-4a7e-8a35-ca7bf6b9b725 name=/runtime.v1.RuntimeService/StartContainer
	Nov 02 14:01:06 scheduled-stop-398568 crio[873]: time="2025-11-02T14:01:06.230107058Z" level=info msg="Created container d0f0236a16b6e9b2b71d037d052a3b2a0d6c951eaa83de67b3ad9a0c01850c4f: kube-system/etcd-scheduled-stop-398568/etcd" id=a1e169a1-7044-4bc1-97e1-9c24b3850bbd name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 14:01:06 scheduled-stop-398568 crio[873]: time="2025-11-02T14:01:06.232243741Z" level=info msg="Started container" PID=1281 containerID=76da88277aa5fdcb196cdd20f3de12d2fc0b5e97fe2535b4d03751fa08264b06 description=kube-system/kube-controller-manager-scheduled-stop-398568/kube-controller-manager id=947b2685-76cf-4a7e-8a35-ca7bf6b9b725 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8f346a2829318e7b1909a7fb0578ca9c06c75930c83f65e75d793d196cd716ee
	Nov 02 14:01:06 scheduled-stop-398568 crio[873]: time="2025-11-02T14:01:06.233615193Z" level=info msg="Starting container: d0f0236a16b6e9b2b71d037d052a3b2a0d6c951eaa83de67b3ad9a0c01850c4f" id=9a9c26ac-6091-4390-94eb-eaf2d352c766 name=/runtime.v1.RuntimeService/StartContainer
	Nov 02 14:01:06 scheduled-stop-398568 crio[873]: time="2025-11-02T14:01:06.240048568Z" level=info msg="Started container" PID=1285 containerID=d0f0236a16b6e9b2b71d037d052a3b2a0d6c951eaa83de67b3ad9a0c01850c4f description=kube-system/etcd-scheduled-stop-398568/etcd id=9a9c26ac-6091-4390-94eb-eaf2d352c766 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5560eca17cec1077dd828355b3bb456d2b88386872cb737dfdfd8c9389180196
	Nov 02 14:01:06 scheduled-stop-398568 crio[873]: time="2025-11-02T14:01:06.246450657Z" level=info msg="Created container 3bcdf4ab02131a6058adba86e0b8cd46c3c719ec91db9e6a4891ef7d7ffb6524: kube-system/kube-scheduler-scheduled-stop-398568/kube-scheduler" id=8fd12a1b-945a-4b7f-9fb9-35da769374d8 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 14:01:06 scheduled-stop-398568 crio[873]: time="2025-11-02T14:01:06.247654614Z" level=info msg="Starting container: 3bcdf4ab02131a6058adba86e0b8cd46c3c719ec91db9e6a4891ef7d7ffb6524" id=ba0ae8a8-1c96-45f0-bbaf-c535dadc69c5 name=/runtime.v1.RuntimeService/StartContainer
	Nov 02 14:01:06 scheduled-stop-398568 crio[873]: time="2025-11-02T14:01:06.259839344Z" level=info msg="Started container" PID=1294 containerID=3bcdf4ab02131a6058adba86e0b8cd46c3c719ec91db9e6a4891ef7d7ffb6524 description=kube-system/kube-scheduler-scheduled-stop-398568/kube-scheduler id=ba0ae8a8-1c96-45f0-bbaf-c535dadc69c5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=05fe632fb11f154e831c3aa1258711baaf540cd744f19a0f7aeffa9f7ef4b678
	Nov 02 14:01:06 scheduled-stop-398568 crio[873]: time="2025-11-02T14:01:06.265412094Z" level=info msg="Created container d79f00789e6dbd97b77ef93facb884fafc186506e8de534965ffc60d95cde729: kube-system/kube-apiserver-scheduled-stop-398568/kube-apiserver" id=5529952e-4dd2-4c5f-84e3-71cf61604d0c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 14:01:06 scheduled-stop-398568 crio[873]: time="2025-11-02T14:01:06.266060949Z" level=info msg="Starting container: d79f00789e6dbd97b77ef93facb884fafc186506e8de534965ffc60d95cde729" id=86dfaec2-c73b-4d7b-b5a4-21b9da3449c7 name=/runtime.v1.RuntimeService/StartContainer
	Nov 02 14:01:06 scheduled-stop-398568 crio[873]: time="2025-11-02T14:01:06.267925349Z" level=info msg="Started container" PID=1295 containerID=d79f00789e6dbd97b77ef93facb884fafc186506e8de534965ffc60d95cde729 description=kube-system/kube-apiserver-scheduled-stop-398568/kube-apiserver id=86dfaec2-c73b-4d7b-b5a4-21b9da3449c7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a9c98d159a4dc1551b3de8083e3fdf4d5205fd026d618a8a7be73fa67f446691
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                             NAMESPACE
	d79f00789e6db       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   10 seconds ago      Running             kube-apiserver            0                   a9c98d159a4dc       kube-apiserver-scheduled-stop-398568            kube-system
	3bcdf4ab02131       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   10 seconds ago      Running             kube-scheduler            0                   05fe632fb11f1       kube-scheduler-scheduled-stop-398568            kube-system
	d0f0236a16b6e       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   10 seconds ago      Running             etcd                      0                   5560eca17cec1       etcd-scheduled-stop-398568                      kube-system
	76da88277aa5f       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   10 seconds ago      Running             kube-controller-manager   0                   8f346a2829318       kube-controller-manager-scheduled-stop-398568   kube-system
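
The four containers in this table correspond to the "Started container" lines in the CRI-O section above (IDs d79f0078..., 3bcdf4ab..., d0f0236a..., 76da8827...). Any of them can be drilled into on the node; crictl accepts an unambiguous ID prefix:

	# inspect one container and read its logs by ID prefix
	sudo crictl inspect d79f00789e6db | head
	sudo crictl logs d79f00789e6db    # kube-apiserver output, as excerpted below
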
	
	
	==> describe nodes <==
	Name:               scheduled-stop-398568
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=scheduled-stop-398568
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=170a9221ec214abbddb4c7cdac340516a92b239a
	                    minikube.k8s.io/name=scheduled-stop-398568
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_02T14_01_13_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 02 Nov 2025 14:01:10 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  scheduled-stop-398568
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 02 Nov 2025 14:01:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 02 Nov 2025 14:01:12 +0000   Sun, 02 Nov 2025 14:01:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 02 Nov 2025 14:01:12 +0000   Sun, 02 Nov 2025 14:01:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 02 Nov 2025 14:01:12 +0000   Sun, 02 Nov 2025 14:01:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sun, 02 Nov 2025 14:01:12 +0000   Sun, 02 Nov 2025 14:01:06 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    scheduled-stop-398568
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                1f1ecaae-d8bb-430c-987f-41f67737f0ab
	  Boot ID:                    4c303cf4-c0b6-4c0d-aa1a-d7509feb15e7
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                             CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                             ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-scheduled-stop-398568                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         4s
	  kube-system                 kube-apiserver-scheduled-stop-398568             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4s
	  kube-system                 kube-controller-manager-scheduled-stop-398568    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4s
	  kube-system                 kube-scheduler-scheduled-stop-398568             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             100Mi (1%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From     Message
	  ----     ------                   ----               ----     -------
	  Normal   Starting                 11s                kubelet  Starting kubelet.
	  Warning  CgroupV1                 11s                kubelet  cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  11s (x8 over 11s)  kubelet  Node scheduled-stop-398568 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11s (x8 over 11s)  kubelet  Node scheduled-stop-398568 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11s (x8 over 11s)  kubelet  Node scheduled-stop-398568 status is now: NodeHasSufficientPID
	  Normal   Starting                 4s                 kubelet  Starting kubelet.
	  Warning  CgroupV1                 4s                 kubelet  cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  4s                 kubelet  Node scheduled-stop-398568 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4s                 kubelet  Node scheduled-stop-398568 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4s                 kubelet  Node scheduled-stop-398568 status is now: NodeHasSufficientPID
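
The Ready=False condition above ("no CNI configuration file in /etc/cni/net.d/") is expected this early: the kindnet manifest was applied only seconds before, and the not-ready taint clears once the CNI config lands. Two quick checks, one on the node and one through the API (a sketch; the jsonpath assumes the taint is still present):

	# on the node: the CNI config dir stays empty until kindnet writes its conffile
	sudo ls /etc/cni/net.d/
	# through the API: the taint that kept storage-provisioner Pending
	kubectl get node scheduled-stop-398568 -o jsonpath='{.spec.taints[*].key}'
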
	
	
	==> dmesg <==
	[Nov 2 13:38] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:39] overlayfs: idmapped layers are currently not supported
	[  +2.879003] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:40] overlayfs: idmapped layers are currently not supported
	[ +39.345530] hrtimer: interrupt took 28199838 ns
	[ +11.973880] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:41] overlayfs: idmapped layers are currently not supported
	[  +2.857048] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:42] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:43] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:45] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:49] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:50] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:51] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:52] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:54] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:55] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:56] overlayfs: idmapped layers are currently not supported
	[  +3.515963] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:57] overlayfs: idmapped layers are currently not supported
	[ +24.836033] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:58] overlayfs: idmapped layers are currently not supported
	[ +23.362553] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:59] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:01] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [d0f0236a16b6e9b2b71d037d052a3b2a0d6c951eaa83de67b3ad9a0c01850c4f] <==
	{"level":"warn","ts":"2025-11-02T14:01:08.058774Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46980","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:01:08.070522Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46998","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:01:08.090703Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:01:08.113555Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:01:08.151104Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:01:08.152943Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47080","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:01:08.165972Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:01:08.187229Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:01:08.200837Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:01:08.248043Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:01:08.264682Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47150","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:01:08.278482Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:01:08.330498Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:01:08.344341Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:01:08.367624Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:01:08.397444Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:01:08.440186Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:01:08.471470Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:01:08.514672Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:01:08.547259Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:01:08.562891Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:01:08.614672Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:01:08.655282Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:01:08.683925Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:01:08.818841Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47396","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 14:01:16 up  2:43,  0 user,  load average: 1.59, 1.54, 1.84
	Linux scheduled-stop-398568 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kube-apiserver [d79f00789e6dbd97b77ef93facb884fafc186506e8de534965ffc60d95cde729] <==
	I1102 14:01:09.981396       1 controller.go:667] quota admission added evaluator for: namespaces
	I1102 14:01:09.989228       1 aggregator.go:171] initial CRD sync complete...
	I1102 14:01:09.989261       1 autoregister_controller.go:144] Starting autoregister controller
	I1102 14:01:09.989269       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1102 14:01:09.989275       1 cache.go:39] Caches are synced for autoregister controller
	I1102 14:01:09.989471       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1102 14:01:09.997654       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1102 14:01:10.014199       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1102 14:01:10.073983       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1102 14:01:10.083314       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1102 14:01:10.103725       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1102 14:01:10.105267       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1102 14:01:10.725676       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1102 14:01:10.730321       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1102 14:01:10.730346       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1102 14:01:11.496911       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1102 14:01:11.560421       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1102 14:01:11.677656       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1102 14:01:11.687932       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1102 14:01:11.689167       1 controller.go:667] quota admission added evaluator for: endpoints
	I1102 14:01:11.695399       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1102 14:01:11.991365       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1102 14:01:12.724268       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1102 14:01:12.740582       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1102 14:01:12.749525       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [76da88277aa5fdcb196cdd20f3de12d2fc0b5e97fe2535b4d03751fa08264b06] <==
	I1102 14:01:15.935513       1 controllermanager.go:781] "Started controller" controller="pod-garbage-collector-controller"
	I1102 14:01:15.935570       1 gc_controller.go:99] "Starting GC controller" logger="pod-garbage-collector-controller"
	I1102 14:01:15.935578       1 shared_informer.go:349] "Waiting for caches to sync" controller="GC"
	I1102 14:01:16.195359       1 controllermanager.go:781] "Started controller" controller="namespace-controller"
	I1102 14:01:16.195461       1 namespace_controller.go:202] "Starting namespace controller" logger="namespace-controller"
	I1102 14:01:16.195549       1 shared_informer.go:349] "Waiting for caches to sync" controller="namespace"
	I1102 14:01:16.382353       1 controllermanager.go:781] "Started controller" controller="disruption-controller"
	I1102 14:01:16.382418       1 disruption.go:457] "Sending events to api server." logger="disruption-controller"
	I1102 14:01:16.382469       1 disruption.go:468] "Starting disruption controller" logger="disruption-controller"
	I1102 14:01:16.382484       1 shared_informer.go:349] "Waiting for caches to sync" controller="disruption"
	I1102 14:01:16.533899       1 controllermanager.go:781] "Started controller" controller="ephemeral-volume-controller"
	I1102 14:01:16.533933       1 controllermanager.go:733] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I1102 14:01:16.533987       1 controller.go:173] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I1102 14:01:16.533994       1 shared_informer.go:349] "Waiting for caches to sync" controller="ephemeral"
	I1102 14:01:16.582518       1 controllermanager.go:781] "Started controller" controller="taint-eviction-controller"
	I1102 14:01:16.582599       1 taint_eviction.go:282] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I1102 14:01:16.582694       1 taint_eviction.go:288] "Sending events to api server" logger="taint-eviction-controller"
	I1102 14:01:16.582713       1 shared_informer.go:349] "Waiting for caches to sync" controller="taint-eviction-controller"
	I1102 14:01:16.734246       1 controllermanager.go:781] "Started controller" controller="service-cidr-controller"
	I1102 14:01:16.734328       1 servicecidrs_controller.go:137] "Starting" logger="service-cidr-controller" controller="service-cidr-controller"
	I1102 14:01:16.734350       1 shared_informer.go:349] "Waiting for caches to sync" controller="service-cidr-controller"
	I1102 14:01:16.886973       1 controllermanager.go:781] "Started controller" controller="bootstrap-signer-controller"
	I1102 14:01:16.887007       1 controllermanager.go:733] "Controller is disabled by a feature gate" controller="device-taint-eviction-controller" requiredFeatureGates=["DynamicResourceAllocation","DRADeviceTaints"]
	I1102 14:01:16.889495       1 shared_informer.go:349] "Waiting for caches to sync" controller="bootstrap_signer"
	I1102 14:01:16.895704       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	
	
	==> kube-scheduler [3bcdf4ab02131a6058adba86e0b8cd46c3c719ec91db9e6a4891ef7d7ffb6524] <==
	E1102 14:01:10.157806       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1102 14:01:10.160217       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1102 14:01:10.166848       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1102 14:01:10.167774       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1102 14:01:10.167936       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1102 14:01:10.168061       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1102 14:01:10.168158       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1102 14:01:10.168258       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1102 14:01:10.168374       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1102 14:01:10.168673       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1102 14:01:10.175116       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1102 14:01:10.175315       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1102 14:01:10.175377       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1102 14:01:10.175481       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1102 14:01:10.175525       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1102 14:01:10.175563       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1102 14:01:10.175599       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1102 14:01:10.175633       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1102 14:01:10.175669       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1102 14:01:11.003450       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1102 14:01:11.017947       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1102 14:01:11.087035       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1102 14:01:11.162758       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1102 14:01:11.275143       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1102 14:01:13.554353       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 02 14:01:12 scheduled-stop-398568 kubelet[1354]: I1102 14:01:12.863762    1354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1c1ad9873655fda9caf9486c7e3888a0-ca-certs\") pod \"kube-controller-manager-scheduled-stop-398568\" (UID: \"1c1ad9873655fda9caf9486c7e3888a0\") " pod="kube-system/kube-controller-manager-scheduled-stop-398568"
	Nov 02 14:01:12 scheduled-stop-398568 kubelet[1354]: I1102 14:01:12.863780    1354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1c1ad9873655fda9caf9486c7e3888a0-flexvolume-dir\") pod \"kube-controller-manager-scheduled-stop-398568\" (UID: \"1c1ad9873655fda9caf9486c7e3888a0\") " pod="kube-system/kube-controller-manager-scheduled-stop-398568"
	Nov 02 14:01:12 scheduled-stop-398568 kubelet[1354]: I1102 14:01:12.863803    1354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1c1ad9873655fda9caf9486c7e3888a0-k8s-certs\") pod \"kube-controller-manager-scheduled-stop-398568\" (UID: \"1c1ad9873655fda9caf9486c7e3888a0\") " pod="kube-system/kube-controller-manager-scheduled-stop-398568"
	Nov 02 14:01:12 scheduled-stop-398568 kubelet[1354]: I1102 14:01:12.863822    1354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4df8255f0c7451cb5eacff7113568251-ca-certs\") pod \"kube-apiserver-scheduled-stop-398568\" (UID: \"4df8255f0c7451cb5eacff7113568251\") " pod="kube-system/kube-apiserver-scheduled-stop-398568"
	Nov 02 14:01:12 scheduled-stop-398568 kubelet[1354]: I1102 14:01:12.863839    1354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1c1ad9873655fda9caf9486c7e3888a0-etc-ca-certificates\") pod \"kube-controller-manager-scheduled-stop-398568\" (UID: \"1c1ad9873655fda9caf9486c7e3888a0\") " pod="kube-system/kube-controller-manager-scheduled-stop-398568"
	Nov 02 14:01:12 scheduled-stop-398568 kubelet[1354]: I1102 14:01:12.863855    1354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1c1ad9873655fda9caf9486c7e3888a0-kubeconfig\") pod \"kube-controller-manager-scheduled-stop-398568\" (UID: \"1c1ad9873655fda9caf9486c7e3888a0\") " pod="kube-system/kube-controller-manager-scheduled-stop-398568"
	Nov 02 14:01:12 scheduled-stop-398568 kubelet[1354]: I1102 14:01:12.863878    1354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/040a1a26928fc27d20fb40cac9f985b8-etcd-certs\") pod \"etcd-scheduled-stop-398568\" (UID: \"040a1a26928fc27d20fb40cac9f985b8\") " pod="kube-system/etcd-scheduled-stop-398568"
	Nov 02 14:01:12 scheduled-stop-398568 kubelet[1354]: I1102 14:01:12.863893    1354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/040a1a26928fc27d20fb40cac9f985b8-etcd-data\") pod \"etcd-scheduled-stop-398568\" (UID: \"040a1a26928fc27d20fb40cac9f985b8\") " pod="kube-system/etcd-scheduled-stop-398568"
	Nov 02 14:01:12 scheduled-stop-398568 kubelet[1354]: I1102 14:01:12.909932    1354 kubelet_node_status.go:75] "Attempting to register node" node="scheduled-stop-398568"
	Nov 02 14:01:12 scheduled-stop-398568 kubelet[1354]: I1102 14:01:12.927980    1354 kubelet_node_status.go:124] "Node was previously registered" node="scheduled-stop-398568"
	Nov 02 14:01:12 scheduled-stop-398568 kubelet[1354]: I1102 14:01:12.928093    1354 kubelet_node_status.go:78] "Successfully registered node" node="scheduled-stop-398568"
	Nov 02 14:01:13 scheduled-stop-398568 kubelet[1354]: I1102 14:01:13.627413    1354 apiserver.go:52] "Watching apiserver"
	Nov 02 14:01:13 scheduled-stop-398568 kubelet[1354]: I1102 14:01:13.662548    1354 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 02 14:01:13 scheduled-stop-398568 kubelet[1354]: I1102 14:01:13.794099    1354 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-scheduled-stop-398568"
	Nov 02 14:01:13 scheduled-stop-398568 kubelet[1354]: I1102 14:01:13.794457    1354 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-scheduled-stop-398568"
	Nov 02 14:01:13 scheduled-stop-398568 kubelet[1354]: I1102 14:01:13.794757    1354 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-scheduled-stop-398568"
	Nov 02 14:01:13 scheduled-stop-398568 kubelet[1354]: I1102 14:01:13.795076    1354 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-scheduled-stop-398568"
	Nov 02 14:01:13 scheduled-stop-398568 kubelet[1354]: E1102 14:01:13.844462    1354 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-scheduled-stop-398568\" already exists" pod="kube-system/kube-scheduler-scheduled-stop-398568"
	Nov 02 14:01:13 scheduled-stop-398568 kubelet[1354]: E1102 14:01:13.849822    1354 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-scheduled-stop-398568\" already exists" pod="kube-system/kube-apiserver-scheduled-stop-398568"
	Nov 02 14:01:13 scheduled-stop-398568 kubelet[1354]: E1102 14:01:13.850079    1354 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-scheduled-stop-398568\" already exists" pod="kube-system/etcd-scheduled-stop-398568"
	Nov 02 14:01:13 scheduled-stop-398568 kubelet[1354]: E1102 14:01:13.850285    1354 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-scheduled-stop-398568\" already exists" pod="kube-system/kube-controller-manager-scheduled-stop-398568"
	Nov 02 14:01:13 scheduled-stop-398568 kubelet[1354]: I1102 14:01:13.914148    1354 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-scheduled-stop-398568" podStartSLOduration=1.914128737 podStartE2EDuration="1.914128737s" podCreationTimestamp="2025-11-02 14:01:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-02 14:01:13.878111452 +0000 UTC m=+1.329074262" watchObservedRunningTime="2025-11-02 14:01:13.914128737 +0000 UTC m=+1.365091539"
	Nov 02 14:01:13 scheduled-stop-398568 kubelet[1354]: I1102 14:01:13.957215    1354 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-scheduled-stop-398568" podStartSLOduration=1.957195439 podStartE2EDuration="1.957195439s" podCreationTimestamp="2025-11-02 14:01:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-02 14:01:13.914507221 +0000 UTC m=+1.365470023" watchObservedRunningTime="2025-11-02 14:01:13.957195439 +0000 UTC m=+1.408158241"
	Nov 02 14:01:13 scheduled-stop-398568 kubelet[1354]: I1102 14:01:13.957710    1354 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-scheduled-stop-398568" podStartSLOduration=1.957699094 podStartE2EDuration="1.957699094s" podCreationTimestamp="2025-11-02 14:01:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-02 14:01:13.957443098 +0000 UTC m=+1.408405900" watchObservedRunningTime="2025-11-02 14:01:13.957699094 +0000 UTC m=+1.408661896"
	Nov 02 14:01:14 scheduled-stop-398568 kubelet[1354]: I1102 14:01:14.021387    1354 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-scheduled-stop-398568" podStartSLOduration=2.021365324 podStartE2EDuration="2.021365324s" podCreationTimestamp="2025-11-02 14:01:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-02 14:01:13.988146575 +0000 UTC m=+1.439109377" watchObservedRunningTime="2025-11-02 14:01:14.021365324 +0000 UTC m=+1.472328118"
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p scheduled-stop-398568 -n scheduled-stop-398568
helpers_test.go:269: (dbg) Run:  kubectl --context scheduled-stop-398568 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: storage-provisioner
helpers_test.go:282: ======> post-mortem[TestScheduledStopUnix]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context scheduled-stop-398568 describe pod storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context scheduled-stop-398568 describe pod storage-provisioner: exit status 1 (98.816275ms)

** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:287: kubectl --context scheduled-stop-398568 describe pod storage-provisioner: exit status 1
helpers_test.go:175: Cleaning up "scheduled-stop-398568" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-398568
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-398568: (2.54837631s)
--- FAIL: TestScheduledStopUnix (40.99s)
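The post-mortem's non-running-pod query (helpers_test.go:269) is just a kubectl field-selector list. For reference, a minimal client-go sketch of the same check (a hypothetical standalone program, not the harness's actual code; kubeconfig path and error handling are illustrative):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load ~/.kube/config, as kubectl does by default.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Same filter the harness passes to kubectl: status.phase!=Running,
	// across all namespaces (empty namespace argument).
	pods, err := cs.CoreV1().Pods("").List(context.Background(), metav1.ListOptions{
		FieldSelector: "status.phase!=Running",
	})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s: %s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}

Here the query returned only storage-provisioner, which was evidently gone by the time the follow-up describe ran, hence the NotFound above.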

TestPause/serial/Pause (7.87s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-061518 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p pause-061518 --alsologtostderr -v=5: exit status 80 (2.028025988s)

-- stdout --
	* Pausing node pause-061518 ... 
	
	

-- /stdout --
** stderr ** 
	I1102 14:07:41.905716  462186 out.go:360] Setting OutFile to fd 1 ...
	I1102 14:07:41.906656  462186 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 14:07:41.906699  462186 out.go:374] Setting ErrFile to fd 2...
	I1102 14:07:41.906718  462186 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 14:07:41.907005  462186 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-293314/.minikube/bin
	I1102 14:07:41.907369  462186 out.go:368] Setting JSON to false
	I1102 14:07:41.907423  462186 mustload.go:66] Loading cluster: pause-061518
	I1102 14:07:41.907920  462186 config.go:182] Loaded profile config "pause-061518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 14:07:41.908903  462186 cli_runner.go:164] Run: docker container inspect pause-061518 --format={{.State.Status}}
	I1102 14:07:41.941891  462186 host.go:66] Checking if "pause-061518" exists ...
	I1102 14:07:41.942246  462186 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 14:07:42.052324  462186 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-02 14:07:42.041532369 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1102 14:07:42.052972  462186 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1761658712-21800/minikube-v1.37.0-1761658712-21800-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1761658712-21800-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-061518 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1102 14:07:42.055908  462186 out.go:179] * Pausing node pause-061518 ... 
	I1102 14:07:42.059644  462186 host.go:66] Checking if "pause-061518" exists ...
	I1102 14:07:42.060019  462186 ssh_runner.go:195] Run: systemctl --version
	I1102 14:07:42.060076  462186 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-061518
	I1102 14:07:42.084834  462186 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33393 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/pause-061518/id_rsa Username:docker}
	I1102 14:07:42.210569  462186 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1102 14:07:42.228724  462186 pause.go:52] kubelet running: true
	I1102 14:07:42.228904  462186 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1102 14:07:42.477948  462186 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1102 14:07:42.478042  462186 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1102 14:07:42.566037  462186 cri.go:89] found id: "fe648dd9a1b34b78ab6a056cc3038a77c0ee16d62036e11c6dd2b12578580d7f"
	I1102 14:07:42.566057  462186 cri.go:89] found id: "a078e67285aa32739fff8a60075bade751d806e4209f85785a210aa2213f3b4a"
	I1102 14:07:42.566062  462186 cri.go:89] found id: "064259d257d843a7a5ebd3f8f9e506ad1e47483ed8dba25182d5593f8f683514"
	I1102 14:07:42.566066  462186 cri.go:89] found id: "7cc87ca2e0fb4201499f6723b2128c59c8024eb2339a3950d288951bec336aee"
	I1102 14:07:42.566069  462186 cri.go:89] found id: "8714b07decff942fd4025ee48c0276ab6b5dc336243d3cda21750a2ecb2ce226"
	I1102 14:07:42.566073  462186 cri.go:89] found id: "6708daaf97b2c00529bb1eb7cfa613199fd22e3f16eb6ffd5b598a9ef7022242"
	I1102 14:07:42.566076  462186 cri.go:89] found id: "10204b53afac1f042e786f38e2e04d4a30adcfb8105276e3b34d58ff11271c3e"
	I1102 14:07:42.566079  462186 cri.go:89] found id: "2221dd0d7044082f120ee6769cce44bda1a305a571168c55bf2fd5d4afd5a992"
	I1102 14:07:42.566081  462186 cri.go:89] found id: "f00b63ed1dbdc0c8a62dd0a1428a61db16efb887c8bdd767d48001c525792bfb"
	I1102 14:07:42.566087  462186 cri.go:89] found id: "159e020aa661268214cc95c834348d2ec07c1d4118e8376af4a3980a9b57efa7"
	I1102 14:07:42.566090  462186 cri.go:89] found id: "ae957dadee95a103ab6c85ebdb01b8e6adb428cc4da4125285b154f08da38d8f"
	I1102 14:07:42.566092  462186 cri.go:89] found id: "0aa8baef1b15fd455bdb2e283af175835406637d0cb8294c8d464a302dae79e3"
	I1102 14:07:42.566096  462186 cri.go:89] found id: "ac93be8092708e4c95ac3e29db96792147c96910062a80a9d35220b9cd92a3bb"
	I1102 14:07:42.566098  462186 cri.go:89] found id: "94ad339adccb1ec21310445baaf3d9ebec13d5dc8e46a56b760732d96605f7b4"
	I1102 14:07:42.566101  462186 cri.go:89] found id: ""
	I1102 14:07:42.566146  462186 ssh_runner.go:195] Run: sudo runc list -f json
	I1102 14:07:42.578751  462186 retry.go:31] will retry after 272.873146ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T14:07:42Z" level=error msg="open /run/runc: no such file or directory"
	I1102 14:07:42.852202  462186 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1102 14:07:42.867073  462186 pause.go:52] kubelet running: false
	I1102 14:07:42.867135  462186 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1102 14:07:43.049160  462186 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1102 14:07:43.049250  462186 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1102 14:07:43.148622  462186 cri.go:89] found id: "fe648dd9a1b34b78ab6a056cc3038a77c0ee16d62036e11c6dd2b12578580d7f"
	I1102 14:07:43.148644  462186 cri.go:89] found id: "a078e67285aa32739fff8a60075bade751d806e4209f85785a210aa2213f3b4a"
	I1102 14:07:43.148663  462186 cri.go:89] found id: "064259d257d843a7a5ebd3f8f9e506ad1e47483ed8dba25182d5593f8f683514"
	I1102 14:07:43.148668  462186 cri.go:89] found id: "7cc87ca2e0fb4201499f6723b2128c59c8024eb2339a3950d288951bec336aee"
	I1102 14:07:43.148676  462186 cri.go:89] found id: "8714b07decff942fd4025ee48c0276ab6b5dc336243d3cda21750a2ecb2ce226"
	I1102 14:07:43.148679  462186 cri.go:89] found id: "6708daaf97b2c00529bb1eb7cfa613199fd22e3f16eb6ffd5b598a9ef7022242"
	I1102 14:07:43.148683  462186 cri.go:89] found id: "10204b53afac1f042e786f38e2e04d4a30adcfb8105276e3b34d58ff11271c3e"
	I1102 14:07:43.148686  462186 cri.go:89] found id: "2221dd0d7044082f120ee6769cce44bda1a305a571168c55bf2fd5d4afd5a992"
	I1102 14:07:43.148690  462186 cri.go:89] found id: "f00b63ed1dbdc0c8a62dd0a1428a61db16efb887c8bdd767d48001c525792bfb"
	I1102 14:07:43.148700  462186 cri.go:89] found id: "159e020aa661268214cc95c834348d2ec07c1d4118e8376af4a3980a9b57efa7"
	I1102 14:07:43.148710  462186 cri.go:89] found id: "ae957dadee95a103ab6c85ebdb01b8e6adb428cc4da4125285b154f08da38d8f"
	I1102 14:07:43.148713  462186 cri.go:89] found id: "0aa8baef1b15fd455bdb2e283af175835406637d0cb8294c8d464a302dae79e3"
	I1102 14:07:43.148716  462186 cri.go:89] found id: "ac93be8092708e4c95ac3e29db96792147c96910062a80a9d35220b9cd92a3bb"
	I1102 14:07:43.148719  462186 cri.go:89] found id: "94ad339adccb1ec21310445baaf3d9ebec13d5dc8e46a56b760732d96605f7b4"
	I1102 14:07:43.148722  462186 cri.go:89] found id: ""
	I1102 14:07:43.148767  462186 ssh_runner.go:195] Run: sudo runc list -f json
	I1102 14:07:43.162482  462186 retry.go:31] will retry after 333.476964ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T14:07:43Z" level=error msg="open /run/runc: no such file or directory"
	I1102 14:07:43.496991  462186 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1102 14:07:43.512429  462186 pause.go:52] kubelet running: false
	I1102 14:07:43.512507  462186 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1102 14:07:43.708280  462186 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1102 14:07:43.708359  462186 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1102 14:07:43.816109  462186 cri.go:89] found id: "fe648dd9a1b34b78ab6a056cc3038a77c0ee16d62036e11c6dd2b12578580d7f"
	I1102 14:07:43.816132  462186 cri.go:89] found id: "a078e67285aa32739fff8a60075bade751d806e4209f85785a210aa2213f3b4a"
	I1102 14:07:43.816137  462186 cri.go:89] found id: "064259d257d843a7a5ebd3f8f9e506ad1e47483ed8dba25182d5593f8f683514"
	I1102 14:07:43.816140  462186 cri.go:89] found id: "7cc87ca2e0fb4201499f6723b2128c59c8024eb2339a3950d288951bec336aee"
	I1102 14:07:43.816144  462186 cri.go:89] found id: "8714b07decff942fd4025ee48c0276ab6b5dc336243d3cda21750a2ecb2ce226"
	I1102 14:07:43.816148  462186 cri.go:89] found id: "6708daaf97b2c00529bb1eb7cfa613199fd22e3f16eb6ffd5b598a9ef7022242"
	I1102 14:07:43.816152  462186 cri.go:89] found id: "10204b53afac1f042e786f38e2e04d4a30adcfb8105276e3b34d58ff11271c3e"
	I1102 14:07:43.816155  462186 cri.go:89] found id: "2221dd0d7044082f120ee6769cce44bda1a305a571168c55bf2fd5d4afd5a992"
	I1102 14:07:43.816159  462186 cri.go:89] found id: "f00b63ed1dbdc0c8a62dd0a1428a61db16efb887c8bdd767d48001c525792bfb"
	I1102 14:07:43.816165  462186 cri.go:89] found id: "159e020aa661268214cc95c834348d2ec07c1d4118e8376af4a3980a9b57efa7"
	I1102 14:07:43.816170  462186 cri.go:89] found id: "ae957dadee95a103ab6c85ebdb01b8e6adb428cc4da4125285b154f08da38d8f"
	I1102 14:07:43.816173  462186 cri.go:89] found id: "0aa8baef1b15fd455bdb2e283af175835406637d0cb8294c8d464a302dae79e3"
	I1102 14:07:43.816176  462186 cri.go:89] found id: "ac93be8092708e4c95ac3e29db96792147c96910062a80a9d35220b9cd92a3bb"
	I1102 14:07:43.816181  462186 cri.go:89] found id: "94ad339adccb1ec21310445baaf3d9ebec13d5dc8e46a56b760732d96605f7b4"
	I1102 14:07:43.816184  462186 cri.go:89] found id: ""
	I1102 14:07:43.816245  462186 ssh_runner.go:195] Run: sudo runc list -f json
	I1102 14:07:43.832486  462186 out.go:203] 
	W1102 14:07:43.835295  462186 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T14:07:43Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T14:07:43Z" level=error msg="open /run/runc: no such file or directory"
	
	W1102 14:07:43.835316  462186 out.go:285] * 
	* 
	W1102 14:07:43.843400  462186 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1102 14:07:43.846451  462186 out.go:203] 

** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-arm64 pause -p pause-061518 --alsologtostderr -v=5" : exit status 80
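The failure mode is consistent across all three attempts in the trace above: each `sudo runc list -f json` probe fails with `open /run/runc: no such file or directory`, so minikube's backoff (retry.go:31) can never succeed and the command finally exits with status 80 (GUEST_PAUSE). A standalone Go sketch of that probe-and-retry shape (illustrative only, not minikube's actual retry implementation; the attempt count and delays here are made up):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// runcList runs the same probe the pause path executes over SSH.
func runcList() ([]byte, error) {
	return exec.Command("sudo", "runc", "list", "-f", "json").Output()
}

func main() {
	var lastErr error
	delay := 250 * time.Millisecond
	for attempt := 1; attempt <= 3; attempt++ {
		out, err := runcList()
		if err == nil {
			fmt.Printf("running containers: %s\n", out)
			return
		}
		// When /run/runc is missing on the node, every attempt fails the
		// same way, so the backoff only delays the inevitable error.
		lastErr = err
		fmt.Printf("attempt %d failed, retrying after %v: %v\n", attempt, delay, err)
		time.Sleep(delay)
		delay *= 2
	}
	fmt.Printf("giving up: %v\n", lastErr) // minikube surfaces this as GUEST_PAUSE
}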
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-061518
helpers_test.go:243: (dbg) docker inspect pause-061518:

-- stdout --
	[
	    {
	        "Id": "8649d4ad99369f197a37fdd4a5ed662be649a67bc8e49afe5f05ef39ce02230b",
	        "Created": "2025-11-02T14:05:40.953701441Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 453522,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-02T14:05:41.022893704Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/8649d4ad99369f197a37fdd4a5ed662be649a67bc8e49afe5f05ef39ce02230b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8649d4ad99369f197a37fdd4a5ed662be649a67bc8e49afe5f05ef39ce02230b/hostname",
	        "HostsPath": "/var/lib/docker/containers/8649d4ad99369f197a37fdd4a5ed662be649a67bc8e49afe5f05ef39ce02230b/hosts",
	        "LogPath": "/var/lib/docker/containers/8649d4ad99369f197a37fdd4a5ed662be649a67bc8e49afe5f05ef39ce02230b/8649d4ad99369f197a37fdd4a5ed662be649a67bc8e49afe5f05ef39ce02230b-json.log",
	        "Name": "/pause-061518",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-061518:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-061518",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8649d4ad99369f197a37fdd4a5ed662be649a67bc8e49afe5f05ef39ce02230b",
	                "LowerDir": "/var/lib/docker/overlay2/e8225c844d9998876a6ec627164e4e80f9147facc777ebdefcda74d3ed5755fe-init/diff:/var/lib/docker/overlay2/53058afe6acd7391f639f551e07f446f6a39a991b699aea18507cb21a1652e97/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e8225c844d9998876a6ec627164e4e80f9147facc777ebdefcda74d3ed5755fe/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e8225c844d9998876a6ec627164e4e80f9147facc777ebdefcda74d3ed5755fe/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e8225c844d9998876a6ec627164e4e80f9147facc777ebdefcda74d3ed5755fe/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-061518",
	                "Source": "/var/lib/docker/volumes/pause-061518/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-061518",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-061518",
	                "name.minikube.sigs.k8s.io": "pause-061518",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6fa4a40b42c2b7989c704386b4c22dc958551dc0ef7925e16b398d9a886654cb",
	            "SandboxKey": "/var/run/docker/netns/6fa4a40b42c2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33393"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33394"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33397"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33395"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33396"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-061518": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "be:b5:0f:f2:01:fc",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "2f52f7ecc9f93826924e35d391943c94dba69f1c565bc9975430a76e4419fe10",
	                    "EndpointID": "fa43956c0961887e2623ccdd585f18d34a2b8f334af1687cf8790909080fe773",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-061518",
	                        "8649d4ad9936"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
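The State block in the inspect output above shows the node container with "Running": true and "Paused": false. The same fields the post-mortem reads from `docker inspect` can be fetched with the Docker Go SDK; a minimal sketch (an illustrative standalone program; the harness shells out to the docker CLI instead):

package main

import (
	"context"
	"fmt"

	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	// Equivalent of `docker inspect pause-061518`, reading only the
	// state fields this post-mortem cares about.
	info, err := cli.ContainerInspect(context.Background(), "pause-061518")
	if err != nil {
		panic(err)
	}
	fmt.Printf("running=%v paused=%v status=%s\n",
		info.State.Running, info.State.Paused, info.State.Status)
}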
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-061518 -n pause-061518
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-061518 -n pause-061518: exit status 2 (450.830779ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-061518 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-061518 logs -n 25: (1.912270222s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────┬───────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                     ARGS                                      │    PROFILE    │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────┼───────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p false-143736 sudo systemctl cat cri-docker --no-pager                      │ false-143736  │ jenkins │ v1.37.0 │ 02 Nov 25 14:07 UTC │                     │
	│ ssh     │ -p false-143736 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf │ false-143736  │ jenkins │ v1.37.0 │ 02 Nov 25 14:07 UTC │                     │
	│ ssh     │ -p false-143736 sudo cat /usr/lib/systemd/system/cri-docker.service           │ false-143736  │ jenkins │ v1.37.0 │ 02 Nov 25 14:07 UTC │                     │
	│ ssh     │ -p false-143736 sudo cri-dockerd --version                                    │ false-143736  │ jenkins │ v1.37.0 │ 02 Nov 25 14:07 UTC │                     │
	│ ssh     │ -p false-143736 sudo systemctl status containerd --all --full --no-pager      │ false-143736  │ jenkins │ v1.37.0 │ 02 Nov 25 14:07 UTC │                     │
	│ ssh     │ -p false-143736 sudo systemctl cat containerd --no-pager                      │ false-143736  │ jenkins │ v1.37.0 │ 02 Nov 25 14:07 UTC │                     │
	│ ssh     │ -p false-143736 sudo cat /lib/systemd/system/containerd.service               │ false-143736  │ jenkins │ v1.37.0 │ 02 Nov 25 14:07 UTC │                     │
	│ ssh     │ -p false-143736 sudo cat /etc/containerd/config.toml                          │ false-143736  │ jenkins │ v1.37.0 │ 02 Nov 25 14:07 UTC │                     │
	│ ssh     │ -p false-143736 sudo containerd config dump                                   │ false-143736  │ jenkins │ v1.37.0 │ 02 Nov 25 14:07 UTC │                     │
	│ ssh     │ -p false-143736 sudo systemctl status crio --all --full --no-pager            │ false-143736  │ jenkins │ v1.37.0 │ 02 Nov 25 14:07 UTC │                     │
	│ ssh     │ -p false-143736 sudo systemctl cat crio --no-pager                            │ false-143736  │ jenkins │ v1.37.0 │ 02 Nov 25 14:07 UTC │                     │
	│ ssh     │ -p false-143736 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;  │ false-143736  │ jenkins │ v1.37.0 │ 02 Nov 25 14:07 UTC │                     │
	│ ssh     │ -p false-143736 sudo crio config                                              │ false-143736  │ jenkins │ v1.37.0 │ 02 Nov 25 14:07 UTC │                     │
	│ delete  │ -p false-143736                                                               │ false-143736  │ jenkins │ v1.37.0 │ 02 Nov 25 14:07 UTC │ 02 Nov 25 14:07 UTC │
	│ pause   │ -p pause-061518 --alsologtostderr -v=5                                        │ pause-061518  │ jenkins │ v1.37.0 │ 02 Nov 25 14:07 UTC │                     │
	│ ssh     │ -p cilium-143736 sudo cat /etc/nsswitch.conf                                  │ cilium-143736 │ jenkins │ v1.37.0 │ 02 Nov 25 14:07 UTC │                     │
	│ ssh     │ -p cilium-143736 sudo cat /etc/hosts                                          │ cilium-143736 │ jenkins │ v1.37.0 │ 02 Nov 25 14:07 UTC │                     │
	│ ssh     │ -p cilium-143736 sudo cat /etc/resolv.conf                                    │ cilium-143736 │ jenkins │ v1.37.0 │ 02 Nov 25 14:07 UTC │                     │
	│ ssh     │ -p cilium-143736 sudo crictl pods                                             │ cilium-143736 │ jenkins │ v1.37.0 │ 02 Nov 25 14:07 UTC │                     │
	│ ssh     │ -p cilium-143736 sudo crictl ps --all                                         │ cilium-143736 │ jenkins │ v1.37.0 │ 02 Nov 25 14:07 UTC │                     │
	│ ssh     │ -p cilium-143736 sudo find /etc/cni -type f -exec sh -c 'echo {}; cat {}' \;  │ cilium-143736 │ jenkins │ v1.37.0 │ 02 Nov 25 14:07 UTC │                     │
	│ ssh     │ -p cilium-143736 sudo ip a s                                                  │ cilium-143736 │ jenkins │ v1.37.0 │ 02 Nov 25 14:07 UTC │                     │
	│ ssh     │ -p cilium-143736 sudo ip r s                                                  │ cilium-143736 │ jenkins │ v1.37.0 │ 02 Nov 25 14:07 UTC │                     │
	│ ssh     │ -p cilium-143736 sudo iptables-save                                           │ cilium-143736 │ jenkins │ v1.37.0 │ 02 Nov 25 14:07 UTC │                     │
	│ ssh     │ -p cilium-143736 sudo iptables -t nat -L -n -v                                │ cilium-143736 │ jenkins │ v1.37.0 │ 02 Nov 25 14:07 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────┴───────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/02 14:07:37
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1102 14:07:37.866343  461820 out.go:360] Setting OutFile to fd 1 ...
	I1102 14:07:37.866460  461820 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 14:07:37.866529  461820 out.go:374] Setting ErrFile to fd 2...
	I1102 14:07:37.866542  461820 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 14:07:37.866898  461820 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-293314/.minikube/bin
	I1102 14:07:37.867393  461820 out.go:368] Setting JSON to false
	I1102 14:07:37.868308  461820 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10210,"bootTime":1762082248,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1102 14:07:37.868382  461820 start.go:143] virtualization:  
	I1102 14:07:37.871966  461820 out.go:179] * [false-143736] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1102 14:07:37.875850  461820 out.go:179]   - MINIKUBE_LOCATION=21808
	I1102 14:07:37.875961  461820 notify.go:221] Checking for updates...
	I1102 14:07:37.881697  461820 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1102 14:07:37.884582  461820 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-293314/kubeconfig
	I1102 14:07:37.887451  461820 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-293314/.minikube
	I1102 14:07:37.890344  461820 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1102 14:07:37.893381  461820 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1102 14:07:37.896848  461820 config.go:182] Loaded profile config "pause-061518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 14:07:37.896961  461820 driver.go:422] Setting default libvirt URI to qemu:///system
	I1102 14:07:37.927947  461820 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1102 14:07:37.928073  461820 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 14:07:37.986102  461820 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-02 14:07:37.975412553 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1102 14:07:37.986209  461820 docker.go:319] overlay module found
	I1102 14:07:37.989650  461820 out.go:179] * Using the docker driver based on user configuration
	I1102 14:07:37.992529  461820 start.go:309] selected driver: docker
	I1102 14:07:37.992552  461820 start.go:930] validating driver "docker" against <nil>
	I1102 14:07:37.992567  461820 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1102 14:07:37.996147  461820 out.go:203] 
	W1102 14:07:37.999044  461820 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1102 14:07:38.001911  461820 out.go:203] 
	W1102 14:07:36.775606  458767 pod_ready.go:104] pod "coredns-66bc5c9577-q47gx" is not "Ready", error: node "pause-061518" hosting pod "coredns-66bc5c9577-q47gx" is not "Ready" (will retry)
	W1102 14:07:39.275724  458767 pod_ready.go:104] pod "coredns-66bc5c9577-q47gx" is not "Ready", error: node "pause-061518" hosting pod "coredns-66bc5c9577-q47gx" is not "Ready" (will retry)
	I1102 14:07:40.283298  458767 pod_ready.go:94] pod "coredns-66bc5c9577-q47gx" is "Ready"
	I1102 14:07:40.283323  458767 pod_ready.go:86] duration metric: took 18.515057169s for pod "coredns-66bc5c9577-q47gx" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 14:07:40.311054  458767 pod_ready.go:83] waiting for pod "etcd-pause-061518" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 14:07:40.322066  458767 pod_ready.go:94] pod "etcd-pause-061518" is "Ready"
	I1102 14:07:40.322102  458767 pod_ready.go:86] duration metric: took 11.014706ms for pod "etcd-pause-061518" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 14:07:40.334892  458767 pod_ready.go:83] waiting for pod "kube-apiserver-pause-061518" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 14:07:40.356506  458767 pod_ready.go:94] pod "kube-apiserver-pause-061518" is "Ready"
	I1102 14:07:40.356531  458767 pod_ready.go:86] duration metric: took 21.615498ms for pod "kube-apiserver-pause-061518" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 14:07:40.404300  458767 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-061518" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 14:07:40.472850  458767 pod_ready.go:94] pod "kube-controller-manager-pause-061518" is "Ready"
	I1102 14:07:40.472876  458767 pod_ready.go:86] duration metric: took 68.552176ms for pod "kube-controller-manager-pause-061518" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 14:07:40.673913  458767 pod_ready.go:83] waiting for pod "kube-proxy-dhvp4" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 14:07:41.072981  458767 pod_ready.go:94] pod "kube-proxy-dhvp4" is "Ready"
	I1102 14:07:41.073006  458767 pod_ready.go:86] duration metric: took 399.0647ms for pod "kube-proxy-dhvp4" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 14:07:41.273449  458767 pod_ready.go:83] waiting for pod "kube-scheduler-pause-061518" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 14:07:41.673411  458767 pod_ready.go:94] pod "kube-scheduler-pause-061518" is "Ready"
	I1102 14:07:41.673437  458767 pod_ready.go:86] duration metric: took 399.947905ms for pod "kube-scheduler-pause-061518" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 14:07:41.673450  458767 pod_ready.go:40] duration metric: took 19.912145908s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1102 14:07:41.767023  458767 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1102 14:07:41.772369  458767 out.go:179] * Done! kubectl is now configured to use "pause-061518" cluster and "default" namespace by default
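	
	Note: the "Last Start" log above interleaves two runs. The false-143736 start aborts with MK_USAGE because the crio runtime requires a CNI plugin (that profile appears to have been started with CNI disabled, --cni=false), while the pause-061518 restart completes once every control-plane pod reports Ready. A crio profile that is actually meant to come up needs a CNI selected, along these lines (illustrative only, not a command taken from this run):
	
	  out/minikube-linux-arm64 start -p false-143736 --driver=docker --container-runtime=crio --cni=bridge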
	
	
	==> CRI-O <==
	Nov 02 14:07:11 pause-061518 crio[2141]: time="2025-11-02T14:07:11.728857092Z" level=info msg="Started container" PID=2265 containerID=a078e67285aa32739fff8a60075bade751d806e4209f85785a210aa2213f3b4a description=kube-system/kube-scheduler-pause-061518/kube-scheduler id=b68286f1-3c6f-4eec-af2d-7533fa1e1ab6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5c2ee95b1113cf6501b895614f46f2825904afbe5cfc466ec41d9fae02e45950
	Nov 02 14:07:16 pause-061518 crio[2141]: time="2025-11-02T14:07:16.826739363Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.12.1" id=39031a5e-8294-434a-b166-0150eefadd02 name=/runtime.v1.ImageService/ImageStatus
	Nov 02 14:07:16 pause-061518 crio[2141]: time="2025-11-02T14:07:16.828194204Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.12.1" id=0df86428-c28f-4ece-b60d-1b939dec8237 name=/runtime.v1.ImageService/ImageStatus
	Nov 02 14:07:16 pause-061518 crio[2141]: time="2025-11-02T14:07:16.829441962Z" level=info msg="Creating container: kube-system/coredns-66bc5c9577-q47gx/coredns" id=93f818b5-d31b-420c-a7d5-f67fba6d21a6 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 14:07:16 pause-061518 crio[2141]: time="2025-11-02T14:07:16.829552347Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 14:07:16 pause-061518 crio[2141]: time="2025-11-02T14:07:16.846708098Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 14:07:16 pause-061518 crio[2141]: time="2025-11-02T14:07:16.848067283Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 14:07:16 pause-061518 crio[2141]: time="2025-11-02T14:07:16.908189689Z" level=info msg="Created container fe648dd9a1b34b78ab6a056cc3038a77c0ee16d62036e11c6dd2b12578580d7f: kube-system/coredns-66bc5c9577-q47gx/coredns" id=93f818b5-d31b-420c-a7d5-f67fba6d21a6 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 14:07:16 pause-061518 crio[2141]: time="2025-11-02T14:07:16.909083496Z" level=info msg="Starting container: fe648dd9a1b34b78ab6a056cc3038a77c0ee16d62036e11c6dd2b12578580d7f" id=2fb8c4d5-43dd-44d8-a7e1-82f513eec1e9 name=/runtime.v1.RuntimeService/StartContainer
	Nov 02 14:07:16 pause-061518 crio[2141]: time="2025-11-02T14:07:16.91091276Z" level=info msg="Started container" PID=2522 containerID=fe648dd9a1b34b78ab6a056cc3038a77c0ee16d62036e11c6dd2b12578580d7f description=kube-system/coredns-66bc5c9577-q47gx/coredns id=2fb8c4d5-43dd-44d8-a7e1-82f513eec1e9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=500b0edc13fbbe9d988284d7c05660067d5557cfcf22cc073c450ca1c48e98cb
	Nov 02 14:07:22 pause-061518 crio[2141]: time="2025-11-02T14:07:22.258249062Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 02 14:07:22 pause-061518 crio[2141]: time="2025-11-02T14:07:22.262595289Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 02 14:07:22 pause-061518 crio[2141]: time="2025-11-02T14:07:22.262780859Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 02 14:07:22 pause-061518 crio[2141]: time="2025-11-02T14:07:22.262861779Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 02 14:07:22 pause-061518 crio[2141]: time="2025-11-02T14:07:22.279895206Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 02 14:07:22 pause-061518 crio[2141]: time="2025-11-02T14:07:22.280074311Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 02 14:07:22 pause-061518 crio[2141]: time="2025-11-02T14:07:22.280153639Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 02 14:07:22 pause-061518 crio[2141]: time="2025-11-02T14:07:22.284216301Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 02 14:07:22 pause-061518 crio[2141]: time="2025-11-02T14:07:22.284398876Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 02 14:07:22 pause-061518 crio[2141]: time="2025-11-02T14:07:22.284490643Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 02 14:07:22 pause-061518 crio[2141]: time="2025-11-02T14:07:22.295738001Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 02 14:07:22 pause-061518 crio[2141]: time="2025-11-02T14:07:22.295936585Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 02 14:07:22 pause-061518 crio[2141]: time="2025-11-02T14:07:22.296054954Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 02 14:07:22 pause-061518 crio[2141]: time="2025-11-02T14:07:22.300135938Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 02 14:07:22 pause-061518 crio[2141]: time="2025-11-02T14:07:22.300296433Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	fe648dd9a1b34       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   28 seconds ago       Running             coredns                   1                   500b0edc13fbb       coredns-66bc5c9577-q47gx               kube-system
	a078e67285aa3       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   33 seconds ago       Running             kube-scheduler            1                   5c2ee95b1113c       kube-scheduler-pause-061518            kube-system
	064259d257d84       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   33 seconds ago       Running             kube-proxy                1                   e0698ff7db202       kube-proxy-dhvp4                       kube-system
	7cc87ca2e0fb4       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   33 seconds ago       Running             kube-controller-manager   1                   4d2173fa26e88       kube-controller-manager-pause-061518   kube-system
	8714b07decff9       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   33 seconds ago       Running             etcd                      1                   4d1440f91d289       etcd-pause-061518                      kube-system
	6708daaf97b2c       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   33 seconds ago       Running             kindnet-cni               1                   63dbd68b7dd3b       kindnet-gzstt                          kube-system
	10204b53afac1       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   33 seconds ago       Running             kube-apiserver            1                   7b28131227dd0       kube-apiserver-pause-061518            kube-system
	2221dd0d70440       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   46 seconds ago       Exited              coredns                   0                   500b0edc13fbb       coredns-66bc5c9577-q47gx               kube-system
	f00b63ed1dbdc       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   63dbd68b7dd3b       kindnet-gzstt                          kube-system
	159e020aa6612       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   About a minute ago   Exited              kube-proxy                0                   e0698ff7db202       kube-proxy-dhvp4                       kube-system
	ae957dadee95a       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   About a minute ago   Exited              kube-controller-manager   0                   4d2173fa26e88       kube-controller-manager-pause-061518   kube-system
	0aa8baef1b15f       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   About a minute ago   Exited              kube-apiserver            0                   7b28131227dd0       kube-apiserver-pause-061518            kube-system
	ac93be8092708       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   About a minute ago   Exited              kube-scheduler            0                   5c2ee95b1113c       kube-scheduler-pause-061518            kube-system
	94ad339adccb1       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   About a minute ago   Exited              etcd                      0                   4d1440f91d289       etcd-pause-061518                      kube-system
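	
	Note: in the CRI-O view above, every ATTEMPT 0 container is Exited and each has a restarted ATTEMPT 1 counterpart, consistent with the runtime (and everything on it) having been stopped and restarted around the pause. Assuming the profile is still up, the same listing can be reproduced with:
	
	  out/minikube-linux-arm64 -p pause-061518 ssh -- sudo crictl ps -a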
	
	
	==> coredns [2221dd0d7044082f120ee6769cce44bda1a305a571168c55bf2fd5d4afd5a992] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:33736 - 25424 "HINFO IN 3448274736711667910.7882703770854991141. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.044022878s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [fe648dd9a1b34b78ab6a056cc3038a77c0ee16d62036e11c6dd2b12578580d7f] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:41548 - 46122 "HINFO IN 4347422264761728212.3955537798875286010. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.017462627s
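	
	Note: the restarted coredns instance logs several "waiting for Kubernetes API" lines before serving on :53 — it came up while kube-apiserver was still restarting and simply retried. A quick readiness check for the DNS pods (illustrative, assuming the standard k8s-app=kube-dns label) is:
	
	  kubectl --context pause-061518 -n kube-system get pods -l k8s-app=kube-dns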
	
	
	==> describe nodes <==
	Name:               pause-061518
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-061518
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=170a9221ec214abbddb4c7cdac340516a92b239a
	                    minikube.k8s.io/name=pause-061518
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_02T14_06_11_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 02 Nov 2025 14:06:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-061518
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 02 Nov 2025 14:07:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 02 Nov 2025 14:07:40 +0000   Sun, 02 Nov 2025 14:06:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 02 Nov 2025 14:07:40 +0000   Sun, 02 Nov 2025 14:06:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 02 Nov 2025 14:07:40 +0000   Sun, 02 Nov 2025 14:06:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 02 Nov 2025 14:07:40 +0000   Sun, 02 Nov 2025 14:07:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-061518
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                4107c713-1cff-4c43-b6da-f922bb5196d7
	  Boot ID:                    4c303cf4-c0b6-4c0d-aa1a-d7509feb15e7
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-q47gx                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     89s
	  kube-system                 etcd-pause-061518                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         94s
	  kube-system                 kindnet-gzstt                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      89s
	  kube-system                 kube-apiserver-pause-061518             250m (12%)    0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 kube-controller-manager-pause-061518    200m (10%)    0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 kube-proxy-dhvp4                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 kube-scheduler-pause-061518             100m (5%)     0 (0%)      0 (0%)           0 (0%)         94s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age               From             Message
	  ----     ------                   ----              ----             -------
	  Normal   Starting                 87s               kube-proxy       
	  Normal   Starting                 25s               kube-proxy       
	  Normal   Starting                 95s               kubelet          Starting kubelet.
	  Warning  CgroupV1                 95s               kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  94s               kubelet          Node pause-061518 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    94s               kubelet          Node pause-061518 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     94s               kubelet          Node pause-061518 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           90s               node-controller  Node pause-061518 event: Registered Node pause-061518 in Controller
	  Warning  ContainerGCFailed        35s               kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotReady             34s               kubelet          Node pause-061518 status is now: NodeNotReady
	  Normal   RegisteredNode           22s               node-controller  Node pause-061518 event: Registered Node pause-061518 in Controller
	  Normal   NodeReady                5s (x2 over 48s)  kubelet          Node pause-061518 status is now: NodeReady
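	
	Note: the ContainerGCFailed warning (dial unix /var/run/crio/crio.sock: no such file or directory) together with the NodeNotReady/NodeReady pair shows the kubelet briefly losing its runtime socket while CRI-O restarted. The node's current Ready condition can be read directly (illustrative):
	
	  kubectl --context pause-061518 get node pause-061518 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'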
	
	
	==> dmesg <==
	[Nov 2 13:41] overlayfs: idmapped layers are currently not supported
	[  +2.857048] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:42] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:43] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:45] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:49] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:50] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:51] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:52] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:54] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:55] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:56] overlayfs: idmapped layers are currently not supported
	[  +3.515963] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:57] overlayfs: idmapped layers are currently not supported
	[ +24.836033] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:58] overlayfs: idmapped layers are currently not supported
	[ +23.362553] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:59] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:01] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:02] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:03] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:04] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:06] overlayfs: idmapped layers are currently not supported
	[ +50.469589] kauditd_printk_skb: 8 callbacks suppressed
	[Nov 2 14:07] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [8714b07decff942fd4025ee48c0276ab6b5dc336243d3cda21750a2ecb2ce226] <==
	{"level":"warn","ts":"2025-11-02T14:07:17.061723Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45688","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:07:17.094501Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45716","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:07:17.115929Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:07:17.138868Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:07:17.187578Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:07:17.214160Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:07:17.256092Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:07:17.326953Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:07:17.365194Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:07:17.422854Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:07:17.469974Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45900","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:07:17.519035Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:07:17.549502Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:07:17.591564Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:07:17.647793Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45964","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:07:17.694840Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:07:17.727119Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:07:17.786814Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:07:17.882670Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:07:17.922732Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:07:17.946862Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:07:18.001164Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:07:18.069356Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:07:18.083858Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:07:18.235607Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46144","server-name":"","error":"EOF"}
	
	
	==> etcd [94ad339adccb1ec21310445baaf3d9ebec13d5dc8e46a56b760732d96605f7b4] <==
	{"level":"warn","ts":"2025-11-02T14:06:05.928867Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:06:05.957625Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:06:05.993296Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:06:06.038726Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:06:06.049885Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:06:06.078754Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:06:06.239180Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35936","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-02T14:07:03.084499Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-02T14:07:03.084569Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-061518","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"error","ts":"2025-11-02T14:07:03.084668Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-02T14:07:03.230318Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-02T14:07:03.230413Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-02T14:07:03.230470Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"info","ts":"2025-11-02T14:07:03.230579Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-11-02T14:07:03.230599Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-11-02T14:07:03.230662Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-02T14:07:03.230719Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-02T14:07:03.230756Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-02T14:07:03.230830Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-02T14:07:03.230851Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-02T14:07:03.230858Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-02T14:07:03.233870Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"error","ts":"2025-11-02T14:07:03.233947Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-02T14:07:03.233978Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-02T14:07:03.233986Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-061518","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	
	==> kernel <==
	 14:07:45 up  2:50,  0 user,  load average: 4.39, 3.56, 2.68
	Linux pause-061518 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [6708daaf97b2c00529bb1eb7cfa613199fd22e3f16eb6ffd5b598a9ef7022242] <==
	I1102 14:07:11.925961       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1102 14:07:11.968897       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1102 14:07:11.969077       1 main.go:148] setting mtu 1500 for CNI 
	I1102 14:07:11.969091       1 main.go:178] kindnetd IP family: "ipv4"
	I1102 14:07:11.969106       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-02T14:07:12Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1102 14:07:12.277825       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1102 14:07:12.280789       1 controller.go:381] "Waiting for informer caches to sync"
	I1102 14:07:12.280888       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1102 14:07:12.281119       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1102 14:07:19.811291       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1102 14:07:19.811387       1 metrics.go:72] Registering metrics
	I1102 14:07:19.811477       1 controller.go:711] "Syncing nftables rules"
	I1102 14:07:22.257690       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1102 14:07:22.257845       1 main.go:301] handling current node
	I1102 14:07:32.257804       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1102 14:07:32.257862       1 main.go:301] handling current node
	I1102 14:07:42.257992       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1102 14:07:42.258050       1 main.go:301] handling current node
	
	
	==> kindnet [f00b63ed1dbdc0c8a62dd0a1428a61db16efb887c8bdd767d48001c525792bfb] <==
	I1102 14:06:17.217683       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1102 14:06:17.223123       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1102 14:06:17.223266       1 main.go:148] setting mtu 1500 for CNI 
	I1102 14:06:17.223279       1 main.go:178] kindnetd IP family: "ipv4"
	I1102 14:06:17.223291       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-02T14:06:17Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1102 14:06:17.423167       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1102 14:06:17.423188       1 controller.go:381] "Waiting for informer caches to sync"
	I1102 14:06:17.423196       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1102 14:06:17.423522       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1102 14:06:47.420556       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1102 14:06:47.424137       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1102 14:06:47.424319       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1102 14:06:47.424452       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1102 14:06:48.723372       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1102 14:06:48.723410       1 metrics.go:72] Registering metrics
	I1102 14:06:48.723536       1 controller.go:711] "Syncing nftables rules"
	I1102 14:06:57.427809       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1102 14:06:57.427865       1 main.go:301] handling current node
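	
	Note: the first kindnet instance's "dial tcp 10.96.0.1:443: i/o timeout" watch errors line up with the apiserver outage around 14:06:47; its informer caches resynced a second later, so this reads as recovery noise rather than a CNI fault. Reachability of the in-cluster apiserver VIP can be sanity-checked with (illustrative):
	
	  kubectl --context pause-061518 get endpoints kubernetes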
	
	
	==> kube-apiserver [0aa8baef1b15fd455bdb2e283af175835406637d0cb8294c8d464a302dae79e3] <==
	W1102 14:07:03.105603       1 logging.go:55] [core] [Channel #135 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1102 14:07:03.110381       1 logging.go:55] [core] [Channel #167 SubChannel #169]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1102 14:07:03.110538       1 logging.go:55] [core] [Channel #179 SubChannel #181]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1102 14:07:03.110686       1 logging.go:55] [core] [Channel #163 SubChannel #165]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1102 14:07:03.110798       1 logging.go:55] [core] [Channel #95 SubChannel #97]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1102 14:07:03.110854       1 logging.go:55] [core] [Channel #175 SubChannel #177]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1102 14:07:03.110920       1 logging.go:55] [core] [Channel #199 SubChannel #201]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1102 14:07:03.110999       1 logging.go:55] [core] [Channel #235 SubChannel #237]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1102 14:07:03.111074       1 logging.go:55] [core] [Channel #139 SubChannel #141]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1102 14:07:03.111147       1 logging.go:55] [core] [Channel #155 SubChannel #157]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1102 14:07:03.111216       1 logging.go:55] [core] [Channel #159 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1102 14:07:03.111283       1 logging.go:55] [core] [Channel #231 SubChannel #233]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1102 14:07:03.111352       1 logging.go:55] [core] [Channel #243 SubChannel #245]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1102 14:07:03.111433       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1102 14:07:03.111512       1 logging.go:55] [core] [Channel #75 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1102 14:07:03.111557       1 logging.go:55] [core] [Channel #79 SubChannel #81]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1102 14:07:03.111600       1 logging.go:55] [core] [Channel #151 SubChannel #153]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1102 14:07:03.111697       1 logging.go:55] [core] [Channel #247 SubChannel #249]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1102 14:07:03.111802       1 logging.go:55] [core] [Channel #59 SubChannel #61]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1102 14:07:03.111856       1 logging.go:55] [core] [Channel #207 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1102 14:07:03.111945       1 logging.go:55] [core] [Channel #227 SubChannel #229]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1102 14:07:03.111951       1 logging.go:55] [core] [Channel #219 SubChannel #221]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1102 14:07:03.112022       1 logging.go:55] [core] [Channel #43 SubChannel #45]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1102 14:07:03.111293       1 logging.go:55] [core] [Channel #171 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1102 14:07:03.112099       1 logging.go:55] [core] [Channel #63 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
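	
	Note: the burst of "connection refused" dials to 127.0.0.1:2379 is the old apiserver losing its etcd backend, since etcd shut down first during the restart; each grpc channel retries independently, hence the repetition. Once the cluster is back, apiserver health (including its etcd check) can be queried with (illustrative):
	
	  kubectl --context pause-061518 get --raw='/readyz?verbose'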
	
	
	==> kube-apiserver [10204b53afac1f042e786f38e2e04d4a30adcfb8105276e3b34d58ff11271c3e] <==
	I1102 14:07:19.743972       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1102 14:07:19.755707       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1102 14:07:19.757328       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1102 14:07:19.757355       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1102 14:07:19.757484       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1102 14:07:19.758457       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1102 14:07:19.758573       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1102 14:07:19.758647       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1102 14:07:19.778042       1 aggregator.go:171] initial CRD sync complete...
	I1102 14:07:19.778076       1 autoregister_controller.go:144] Starting autoregister controller
	I1102 14:07:19.778083       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1102 14:07:19.778091       1 cache.go:39] Caches are synced for autoregister controller
	I1102 14:07:19.801084       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1102 14:07:19.801324       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1102 14:07:19.801408       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1102 14:07:19.801664       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1102 14:07:19.832200       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1102 14:07:19.836102       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1102 14:07:20.403021       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1102 14:07:21.769292       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1102 14:07:23.122268       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1102 14:07:23.241941       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1102 14:07:23.455639       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1102 14:07:23.510000       1 controller.go:667] quota admission added evaluator for: endpoints
	I1102 14:07:23.555154       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
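
After the restart the apiserver log shows the normal recovery sequence: caches sync, then quota admission evaluators are registered as each built-in resource is first touched. Its health can be confirmed through the /readyz endpoint, which the default system:public-info-viewer binding exposes to unauthenticated clients; a minimal sketch, where the address 192.168.85.2:8443 is taken from the kubelet log below and certificate verification is skipped purely for illustration:

	// readyz.go - probe the restarted apiserver's readiness endpoint.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 3 * time.Second,
			// The apiserver serves a self-signed certificate here; skipping
			// verification is acceptable only for this throwaway probe.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.85.2:8443/readyz")
		if err != nil {
			fmt.Println("apiserver unreachable:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("readyz: %s (%s)\n", resp.Status, body)
	}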
	
	
	==> kube-controller-manager [7cc87ca2e0fb4201499f6723b2128c59c8024eb2339a3950d288951bec336aee] <==
	I1102 14:07:23.091713       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1102 14:07:23.091766       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1102 14:07:23.091795       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1102 14:07:23.091013       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1102 14:07:23.098029       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1102 14:07:23.099114       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1102 14:07:23.110343       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1102 14:07:23.111692       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1102 14:07:23.127650       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1102 14:07:23.128002       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1102 14:07:23.128145       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-061518"
	I1102 14:07:23.128223       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1102 14:07:23.129702       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1102 14:07:23.135914       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1102 14:07:23.136032       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1102 14:07:23.159199       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1102 14:07:23.159555       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1102 14:07:23.159627       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1102 14:07:23.166015       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1102 14:07:23.166307       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1102 14:07:23.166325       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1102 14:07:23.166331       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1102 14:07:23.166704       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1102 14:07:23.167728       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1102 14:07:43.131207       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-controller-manager [ae957dadee95a103ab6c85ebdb01b8e6adb428cc4da4125285b154f08da38d8f] <==
	I1102 14:06:15.377783       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1102 14:06:15.379132       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1102 14:06:15.327369       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1102 14:06:15.330809       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1102 14:06:15.330824       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1102 14:06:15.385684       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1102 14:06:15.385970       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-061518"
	I1102 14:06:15.386011       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1102 14:06:15.344619       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1102 14:06:15.386178       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1102 14:06:15.346504       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1102 14:06:15.386494       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1102 14:06:15.387005       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1102 14:06:15.391329       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1102 14:06:15.393052       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1102 14:06:15.399850       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1102 14:06:15.414080       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1102 14:06:15.424606       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1102 14:06:15.427773       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-061518" podCIDRs=["10.244.0.0/24"]
	I1102 14:06:15.482365       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1102 14:06:15.484288       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1102 14:06:15.514755       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1102 14:06:15.514787       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1102 14:06:15.514796       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1102 14:07:00.392441       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
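
Both controller-manager incarnations log the same node-lifecycle transition: the node enters "master disruption mode" while it is not-Ready (the CNI plugin has not reinitialized yet) and exits it once the Ready condition flips back, 45 and 20 seconds later in the two runs. The controller is reacting to the node's NodeReady condition, which can be read directly; a minimal client-go sketch, assuming a kubeconfig for this cluster at the default path:

	// nodeready.go - read the condition the node-lifecycle controller watches.
	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		node, err := client.CoreV1().Nodes().Get(context.Background(), "pause-061518", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady {
				// Status False corresponds to "Entering master disruption mode",
				// True to the later "Exiting master disruption mode".
				fmt.Printf("Ready=%s reason=%s\n", cond.Status, cond.Reason)
			}
		}
	}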
	
	
	==> kube-proxy [064259d257d843a7a5ebd3f8f9e506ad1e47483ed8dba25182d5593f8f683514] <==
	I1102 14:07:20.011035       1 server_linux.go:53] "Using iptables proxy"
	I1102 14:07:20.189882       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1102 14:07:20.290038       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1102 14:07:20.290081       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1102 14:07:20.290147       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1102 14:07:20.451058       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1102 14:07:20.451193       1 server_linux.go:132] "Using iptables Proxier"
	I1102 14:07:20.476428       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1102 14:07:20.476815       1 server.go:527] "Version info" version="v1.34.1"
	I1102 14:07:20.477074       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1102 14:07:20.484651       1 config.go:200] "Starting service config controller"
	I1102 14:07:20.490510       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1102 14:07:20.490657       1 config.go:106] "Starting endpoint slice config controller"
	I1102 14:07:20.490704       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1102 14:07:20.490743       1 config.go:403] "Starting serviceCIDR config controller"
	I1102 14:07:20.490785       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1102 14:07:20.491570       1 config.go:309] "Starting node config controller"
	I1102 14:07:20.491629       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1102 14:07:20.491659       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1102 14:07:20.591376       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1102 14:07:20.591467       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1102 14:07:20.591485       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [159e020aa661268214cc95c834348d2ec07c1d4118e8376af4a3980a9b57efa7] <==
	I1102 14:06:17.204615       1 server_linux.go:53] "Using iptables proxy"
	I1102 14:06:17.309402       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1102 14:06:17.429211       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1102 14:06:17.429249       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1102 14:06:17.429322       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1102 14:06:17.577234       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1102 14:06:17.577368       1 server_linux.go:132] "Using iptables Proxier"
	I1102 14:06:17.614466       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1102 14:06:17.614893       1 server.go:527] "Version info" version="v1.34.1"
	I1102 14:06:17.615108       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1102 14:06:17.616510       1 config.go:200] "Starting service config controller"
	I1102 14:06:17.616582       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1102 14:06:17.616625       1 config.go:106] "Starting endpoint slice config controller"
	I1102 14:06:17.616686       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1102 14:06:17.616723       1 config.go:403] "Starting serviceCIDR config controller"
	I1102 14:06:17.616766       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1102 14:06:17.638011       1 config.go:309] "Starting node config controller"
	I1102 14:06:17.638098       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1102 14:06:17.638107       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1102 14:06:17.730256       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1102 14:06:17.730325       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1102 14:06:17.719372       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
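
Both kube-proxy instances start with the same configuration warning: with nodePortAddresses unset, NodePort connections are accepted on every local IP. The remedy the log itself suggests (--nodeport-addresses primary) corresponds to the nodePortAddresses field of KubeProxyConfiguration; the stanza below is an illustrative sketch of that setting, not what this cluster actually runs:

	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	mode: iptables
	# "primary" restricts NodePort listeners to the node's primary IP(s),
	# matching the --nodeport-addresses primary hint in the warning above.
	nodePortAddresses:
	  - primary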
	
	
	==> kube-scheduler [a078e67285aa32739fff8a60075bade751d806e4209f85785a210aa2213f3b4a] <==
	I1102 14:07:18.073880       1 serving.go:386] Generated self-signed cert in-memory
	I1102 14:07:19.886302       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1102 14:07:19.886339       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1102 14:07:19.915067       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1102 14:07:19.915177       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1102 14:07:19.915212       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1102 14:07:19.915242       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1102 14:07:19.922933       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1102 14:07:19.922954       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1102 14:07:19.924020       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1102 14:07:19.924049       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1102 14:07:20.019085       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1102 14:07:20.024282       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1102 14:07:20.024376       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kube-scheduler [ac93be8092708e4c95ac3e29db96792147c96910062a80a9d35220b9cd92a3bb] <==
	E1102 14:06:08.827379       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1102 14:06:08.827426       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1102 14:06:08.827472       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1102 14:06:08.827518       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1102 14:06:08.827557       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1102 14:06:08.827599       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1102 14:06:08.827693       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1102 14:06:08.827736       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1102 14:06:08.827879       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1102 14:06:08.827924       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1102 14:06:08.827976       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1102 14:06:08.828020       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1102 14:06:08.828065       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1102 14:06:08.828264       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	I1102 14:06:08.800617       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1102 14:06:08.836624       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1102 14:06:08.836819       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1102 14:06:08.842731       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	I1102 14:06:10.310729       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1102 14:07:03.082326       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1102 14:07:03.082425       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1102 14:07:03.082437       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1102 14:07:03.082461       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1102 14:07:03.082549       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1102 14:07:03.082564       1 run.go:72] "command failed" err="finished without leader elect"
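
The old scheduler's shutdown ends with `command failed err="finished without leader elect"`: once its connection to the apiserver was gone it could no longer renew its leader lease, and like the other control-plane components it treats lost leadership as fatal. The mechanism is client-go's leaderelection package over a coordination.k8s.io Lease (visible earlier when the apiserver registered a quota evaluator for leases.coordination.k8s.io); a minimal sketch of the exit-on-lost-lease wiring, with made-up names such as demo-lock (the real scheduler locks "kube-scheduler" in kube-system):

	// leaderdemo.go - exit when the leader lease is lost, as the scheduler does.
	package main

	import (
		"context"
		"os"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
		"k8s.io/klog/v2"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			klog.Fatal(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
		id, _ := os.Hostname()

		lock := &resourcelock.LeaseLock{
			LeaseMeta:  metav1.ObjectMeta{Name: "demo-lock", Namespace: "default"},
			Client:     client.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: id},
		}

		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second,
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) {
					klog.Info("acquired lease, doing work")
					<-ctx.Done()
				},
				// Losing the lease (e.g. the apiserver became unreachable,
				// as above) is fatal, mirroring the scheduler's exit.
				OnStoppedLeading: func() {
					klog.Fatal("lost leader lease, exiting")
				},
			},
		})
	}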
	
	
	==> kubelet <==
	Nov 02 14:07:11 pause-061518 kubelet[1353]: E1102 14:07:11.329001    1353 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-gzstt\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="9f4ffddb-2c0f-41a8-a925-a9e6fccedf09" pod="kube-system/kindnet-gzstt"
	Nov 02 14:07:11 pause-061518 kubelet[1353]: E1102 14:07:11.329189    1353 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-q47gx\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="575902e5-17fa-4d63-9aed-6ea6c29955fa" pod="kube-system/coredns-66bc5c9577-q47gx"
	Nov 02 14:07:11 pause-061518 kubelet[1353]: I1102 14:07:11.368209    1353 scope.go:117] "RemoveContainer" containerID="ae957dadee95a103ab6c85ebdb01b8e6adb428cc4da4125285b154f08da38d8f"
	Nov 02 14:07:11 pause-061518 kubelet[1353]: E1102 14:07:11.368864    1353 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-061518\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="942466de5a9d735234539dbd8eaf0cd1" pod="kube-system/etcd-pause-061518"
	Nov 02 14:07:11 pause-061518 kubelet[1353]: E1102 14:07:11.369103    1353 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-061518\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="6322eebbe2e85d483168aa95bb946270" pod="kube-system/kube-apiserver-pause-061518"
	Nov 02 14:07:11 pause-061518 kubelet[1353]: E1102 14:07:11.369292    1353 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-061518\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="29904a253215adf7d55276c689cee701" pod="kube-system/kube-controller-manager-pause-061518"
	Nov 02 14:07:11 pause-061518 kubelet[1353]: E1102 14:07:11.369499    1353 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dhvp4\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="a748cacd-1a6c-44da-b8a3-cf76af722681" pod="kube-system/kube-proxy-dhvp4"
	Nov 02 14:07:11 pause-061518 kubelet[1353]: E1102 14:07:11.369651    1353 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-gzstt\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="9f4ffddb-2c0f-41a8-a925-a9e6fccedf09" pod="kube-system/kindnet-gzstt"
	Nov 02 14:07:11 pause-061518 kubelet[1353]: E1102 14:07:11.369831    1353 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-q47gx\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="575902e5-17fa-4d63-9aed-6ea6c29955fa" pod="kube-system/coredns-66bc5c9577-q47gx"
	Nov 02 14:07:11 pause-061518 kubelet[1353]: E1102 14:07:11.369991    1353 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-061518\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="bcbad24fdddf8b5430da392c28184fb3" pod="kube-system/kube-scheduler-pause-061518"
	Nov 02 14:07:11 pause-061518 kubelet[1353]: I1102 14:07:11.388097    1353 scope.go:117] "RemoveContainer" containerID="ac93be8092708e4c95ac3e29db96792147c96910062a80a9d35220b9cd92a3bb"
	Nov 02 14:07:11 pause-061518 kubelet[1353]: E1102 14:07:11.388891    1353 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dhvp4\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="a748cacd-1a6c-44da-b8a3-cf76af722681" pod="kube-system/kube-proxy-dhvp4"
	Nov 02 14:07:11 pause-061518 kubelet[1353]: E1102 14:07:11.389079    1353 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-gzstt\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="9f4ffddb-2c0f-41a8-a925-a9e6fccedf09" pod="kube-system/kindnet-gzstt"
	Nov 02 14:07:11 pause-061518 kubelet[1353]: E1102 14:07:11.389240    1353 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-q47gx\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="575902e5-17fa-4d63-9aed-6ea6c29955fa" pod="kube-system/coredns-66bc5c9577-q47gx"
	Nov 02 14:07:11 pause-061518 kubelet[1353]: E1102 14:07:11.389409    1353 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-061518\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="bcbad24fdddf8b5430da392c28184fb3" pod="kube-system/kube-scheduler-pause-061518"
	Nov 02 14:07:11 pause-061518 kubelet[1353]: E1102 14:07:11.389608    1353 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-061518\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="942466de5a9d735234539dbd8eaf0cd1" pod="kube-system/etcd-pause-061518"
	Nov 02 14:07:11 pause-061518 kubelet[1353]: E1102 14:07:11.389769    1353 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-061518\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="6322eebbe2e85d483168aa95bb946270" pod="kube-system/kube-apiserver-pause-061518"
	Nov 02 14:07:11 pause-061518 kubelet[1353]: E1102 14:07:11.389955    1353 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-061518\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="29904a253215adf7d55276c689cee701" pod="kube-system/kube-controller-manager-pause-061518"
	Nov 02 14:07:11 pause-061518 kubelet[1353]: I1102 14:07:11.988683    1353 setters.go:543] "Node became not ready" node="pause-061518" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-02T14:07:11Z","lastTransitionTime":"2025-11-02T14:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: plugin status uninitialized"}
	Nov 02 14:07:12 pause-061518 kubelet[1353]: E1102 14:07:12.827032    1353 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: plugin status uninitialized" pod="kube-system/coredns-66bc5c9577-q47gx" podUID="575902e5-17fa-4d63-9aed-6ea6c29955fa"
	Nov 02 14:07:14 pause-061518 kubelet[1353]: E1102 14:07:14.825064    1353 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: plugin status uninitialized" pod="kube-system/coredns-66bc5c9577-q47gx" podUID="575902e5-17fa-4d63-9aed-6ea6c29955fa"
	Nov 02 14:07:16 pause-061518 kubelet[1353]: I1102 14:07:16.825721    1353 scope.go:117] "RemoveContainer" containerID="2221dd0d7044082f120ee6769cce44bda1a305a571168c55bf2fd5d4afd5a992"
	Nov 02 14:07:42 pause-061518 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 02 14:07:42 pause-061518 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 02 14:07:42 pause-061518 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-061518 -n pause-061518
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-061518 -n pause-061518: exit status 2 (477.099216ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
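
The --format flags above are Go text/template expressions rendered against minikube's status struct, which is why {{.APIServer}} prints just "Running"; the non-zero exit encodes a degraded profile rather than a command failure, which the harness tolerates ("may be ok"). A stdlib sketch of the same templating mechanism, with a pared-down stand-in for minikube's status type:

	// statusfmt.go - how a --format Go template renders a status struct.
	package main

	import (
		"os"
		"text/template"
	)

	// Status stands in for minikube's cluster status struct;
	// the real type carries more fields.
	type Status struct {
		Host      string
		APIServer string
	}

	func main() {
		tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
		// Mirrors the capture above: the probe still reports Running.
		if err := tmpl.Execute(os.Stdout, Status{Host: "Running", APIServer: "Running"}); err != nil {
			panic(err)
		}
	}
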
helpers_test.go:269: (dbg) Run:  kubectl --context pause-061518 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-061518
helpers_test.go:243: (dbg) docker inspect pause-061518:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8649d4ad99369f197a37fdd4a5ed662be649a67bc8e49afe5f05ef39ce02230b",
	        "Created": "2025-11-02T14:05:40.953701441Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 453522,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-02T14:05:41.022893704Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/8649d4ad99369f197a37fdd4a5ed662be649a67bc8e49afe5f05ef39ce02230b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8649d4ad99369f197a37fdd4a5ed662be649a67bc8e49afe5f05ef39ce02230b/hostname",
	        "HostsPath": "/var/lib/docker/containers/8649d4ad99369f197a37fdd4a5ed662be649a67bc8e49afe5f05ef39ce02230b/hosts",
	        "LogPath": "/var/lib/docker/containers/8649d4ad99369f197a37fdd4a5ed662be649a67bc8e49afe5f05ef39ce02230b/8649d4ad99369f197a37fdd4a5ed662be649a67bc8e49afe5f05ef39ce02230b-json.log",
	        "Name": "/pause-061518",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-061518:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-061518",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8649d4ad99369f197a37fdd4a5ed662be649a67bc8e49afe5f05ef39ce02230b",
	                "LowerDir": "/var/lib/docker/overlay2/e8225c844d9998876a6ec627164e4e80f9147facc777ebdefcda74d3ed5755fe-init/diff:/var/lib/docker/overlay2/53058afe6acd7391f639f551e07f446f6a39a991b699aea18507cb21a1652e97/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e8225c844d9998876a6ec627164e4e80f9147facc777ebdefcda74d3ed5755fe/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e8225c844d9998876a6ec627164e4e80f9147facc777ebdefcda74d3ed5755fe/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e8225c844d9998876a6ec627164e4e80f9147facc777ebdefcda74d3ed5755fe/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-061518",
	                "Source": "/var/lib/docker/volumes/pause-061518/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-061518",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-061518",
	                "name.minikube.sigs.k8s.io": "pause-061518",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6fa4a40b42c2b7989c704386b4c22dc958551dc0ef7925e16b398d9a886654cb",
	            "SandboxKey": "/var/run/docker/netns/6fa4a40b42c2",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33393"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33394"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33397"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33395"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33396"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-061518": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "be:b5:0f:f2:01:fc",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "2f52f7ecc9f93826924e35d391943c94dba69f1c565bc9975430a76e4419fe10",
	                    "EndpointID": "fa43956c0961887e2623ccdd585f18d34a2b8f334af1687cf8790909080fe773",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-061518",
	                        "8649d4ad9936"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
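
The inspect output explains how the harness reaches the node: every container port is published only on 127.0.0.1 with an ephemeral host port, e.g. 8443/tcp (the apiserver) on 33396. A small sketch of extracting that mapping programmatically by decoding `docker inspect` output, with the struct pared down to the fields actually read:

	// ports.go - pull the apiserver's host port out of docker inspect.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// container models just the slice of docker inspect output we need.
	type container struct {
		Name            string
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func main() {
		out, err := exec.Command("docker", "inspect", "pause-061518").Output()
		if err != nil {
			panic(err)
		}
		var cs []container
		if err := json.Unmarshal(out, &cs); err != nil || len(cs) == 0 {
			panic("unexpected inspect output")
		}
		for _, b := range cs[0].NetworkSettings.Ports["8443/tcp"] {
			// Prints 127.0.0.1:33396 for the capture above.
			fmt.Printf("apiserver published on %s:%s\n", b.HostIp, b.HostPort)
		}
	}
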
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-061518 -n pause-061518
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-061518 -n pause-061518: exit status 2 (483.753155ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p pause-061518 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p pause-061518 logs -n 25: (1.74858619s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                    ARGS                                                    │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-143736 sudo systemctl status kubelet --all --full --no-pager                                     │ cilium-143736            │ jenkins │ v1.37.0 │ 02 Nov 25 14:07 UTC │                     │
	│ ssh     │ -p cilium-143736 sudo systemctl cat kubelet --no-pager                                                     │ cilium-143736            │ jenkins │ v1.37.0 │ 02 Nov 25 14:07 UTC │                     │
	│ ssh     │ -p cilium-143736 sudo journalctl -xeu kubelet --all --full --no-pager                                      │ cilium-143736            │ jenkins │ v1.37.0 │ 02 Nov 25 14:07 UTC │                     │
	│ ssh     │ -p cilium-143736 sudo cat /etc/kubernetes/kubelet.conf                                                     │ cilium-143736            │ jenkins │ v1.37.0 │ 02 Nov 25 14:07 UTC │                     │
	│ ssh     │ -p cilium-143736 sudo cat /var/lib/kubelet/config.yaml                                                     │ cilium-143736            │ jenkins │ v1.37.0 │ 02 Nov 25 14:07 UTC │                     │
	│ ssh     │ -p cilium-143736 sudo systemctl status docker --all --full --no-pager                                      │ cilium-143736            │ jenkins │ v1.37.0 │ 02 Nov 25 14:07 UTC │                     │
	│ ssh     │ -p cilium-143736 sudo systemctl cat docker --no-pager                                                      │ cilium-143736            │ jenkins │ v1.37.0 │ 02 Nov 25 14:07 UTC │                     │
	│ ssh     │ -p cilium-143736 sudo cat /etc/docker/daemon.json                                                          │ cilium-143736            │ jenkins │ v1.37.0 │ 02 Nov 25 14:07 UTC │                     │
	│ ssh     │ -p cilium-143736 sudo docker system info                                                                   │ cilium-143736            │ jenkins │ v1.37.0 │ 02 Nov 25 14:07 UTC │                     │
	│ ssh     │ -p cilium-143736 sudo systemctl status cri-docker --all --full --no-pager                                  │ cilium-143736            │ jenkins │ v1.37.0 │ 02 Nov 25 14:07 UTC │                     │
	│ ssh     │ -p cilium-143736 sudo systemctl cat cri-docker --no-pager                                                  │ cilium-143736            │ jenkins │ v1.37.0 │ 02 Nov 25 14:07 UTC │                     │
	│ ssh     │ -p cilium-143736 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                             │ cilium-143736            │ jenkins │ v1.37.0 │ 02 Nov 25 14:07 UTC │                     │
	│ ssh     │ -p cilium-143736 sudo cat /usr/lib/systemd/system/cri-docker.service                                       │ cilium-143736            │ jenkins │ v1.37.0 │ 02 Nov 25 14:07 UTC │                     │
	│ ssh     │ -p cilium-143736 sudo cri-dockerd --version                                                                │ cilium-143736            │ jenkins │ v1.37.0 │ 02 Nov 25 14:07 UTC │                     │
	│ ssh     │ -p cilium-143736 sudo systemctl status containerd --all --full --no-pager                                  │ cilium-143736            │ jenkins │ v1.37.0 │ 02 Nov 25 14:07 UTC │                     │
	│ ssh     │ -p cilium-143736 sudo systemctl cat containerd --no-pager                                                  │ cilium-143736            │ jenkins │ v1.37.0 │ 02 Nov 25 14:07 UTC │                     │
	│ ssh     │ -p cilium-143736 sudo cat /lib/systemd/system/containerd.service                                           │ cilium-143736            │ jenkins │ v1.37.0 │ 02 Nov 25 14:07 UTC │                     │
	│ ssh     │ -p cilium-143736 sudo cat /etc/containerd/config.toml                                                      │ cilium-143736            │ jenkins │ v1.37.0 │ 02 Nov 25 14:07 UTC │                     │
	│ ssh     │ -p cilium-143736 sudo containerd config dump                                                               │ cilium-143736            │ jenkins │ v1.37.0 │ 02 Nov 25 14:07 UTC │                     │
	│ ssh     │ -p cilium-143736 sudo systemctl status crio --all --full --no-pager                                        │ cilium-143736            │ jenkins │ v1.37.0 │ 02 Nov 25 14:07 UTC │                     │
	│ ssh     │ -p cilium-143736 sudo systemctl cat crio --no-pager                                                        │ cilium-143736            │ jenkins │ v1.37.0 │ 02 Nov 25 14:07 UTC │                     │
	│ ssh     │ -p cilium-143736 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                              │ cilium-143736            │ jenkins │ v1.37.0 │ 02 Nov 25 14:07 UTC │                     │
	│ ssh     │ -p cilium-143736 sudo crio config                                                                          │ cilium-143736            │ jenkins │ v1.37.0 │ 02 Nov 25 14:07 UTC │                     │
	│ delete  │ -p cilium-143736                                                                                           │ cilium-143736            │ jenkins │ v1.37.0 │ 02 Nov 25 14:07 UTC │ 02 Nov 25 14:07 UTC │
	│ start   │ -p force-systemd-env-263133 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ force-systemd-env-263133 │ jenkins │ v1.37.0 │ 02 Nov 25 14:07 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/02 14:07:47
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1102 14:07:47.140499  463180 out.go:360] Setting OutFile to fd 1 ...
	I1102 14:07:47.140709  463180 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 14:07:47.140740  463180 out.go:374] Setting ErrFile to fd 2...
	I1102 14:07:47.140761  463180 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 14:07:47.141055  463180 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-293314/.minikube/bin
	I1102 14:07:47.141540  463180 out.go:368] Setting JSON to false
	I1102 14:07:47.142475  463180 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10220,"bootTime":1762082248,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1102 14:07:47.142581  463180 start.go:143] virtualization:  
	I1102 14:07:47.146229  463180 out.go:179] * [force-systemd-env-263133] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1102 14:07:47.149410  463180 out.go:179]   - MINIKUBE_LOCATION=21808
	I1102 14:07:47.149482  463180 notify.go:221] Checking for updates...
	I1102 14:07:47.156506  463180 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1102 14:07:47.159532  463180 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-293314/kubeconfig
	I1102 14:07:47.163130  463180 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-293314/.minikube
	I1102 14:07:47.165923  463180 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1102 14:07:47.168732  463180 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=true
	I1102 14:07:47.172211  463180 config.go:182] Loaded profile config "pause-061518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 14:07:47.172305  463180 driver.go:422] Setting default libvirt URI to qemu:///system
	I1102 14:07:47.201042  463180 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1102 14:07:47.201170  463180 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 14:07:47.290772  463180 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-02 14:07:47.280834472 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1102 14:07:47.290875  463180 docker.go:319] overlay module found
	I1102 14:07:47.293978  463180 out.go:179] * Using the docker driver based on user configuration
	I1102 14:07:47.296899  463180 start.go:309] selected driver: docker
	I1102 14:07:47.296922  463180 start.go:930] validating driver "docker" against <nil>
	I1102 14:07:47.296935  463180 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1102 14:07:47.297676  463180 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 14:07:47.409146  463180 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-02 14:07:47.393368376 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1102 14:07:47.409346  463180 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1102 14:07:47.409600  463180 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1102 14:07:47.412626  463180 out.go:179] * Using Docker driver with root privileges
	I1102 14:07:47.415617  463180 cni.go:84] Creating CNI manager for ""
	I1102 14:07:47.415700  463180 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1102 14:07:47.415710  463180 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1102 14:07:47.415803  463180 start.go:353] cluster config:
	{Name:force-systemd-env-263133 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-263133 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1102 14:07:47.420816  463180 out.go:179] * Starting "force-systemd-env-263133" primary control-plane node in "force-systemd-env-263133" cluster
	I1102 14:07:47.423778  463180 cache.go:124] Beginning downloading kic base image for docker with crio
	I1102 14:07:47.426778  463180 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1102 14:07:47.429642  463180 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1102 14:07:47.429699  463180 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21808-293314/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1102 14:07:47.429711  463180 cache.go:59] Caching tarball of preloaded images
	I1102 14:07:47.429791  463180 preload.go:233] Found /home/jenkins/minikube-integration/21808-293314/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1102 14:07:47.429801  463180 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1102 14:07:47.429941  463180 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/force-systemd-env-263133/config.json ...
	I1102 14:07:47.429969  463180 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/force-systemd-env-263133/config.json: {Name:mk665b57beadc9c583869c297e9511907d2431fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 14:07:47.430129  463180 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1102 14:07:47.450110  463180 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1102 14:07:47.450134  463180 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1102 14:07:47.450147  463180 cache.go:233] Successfully downloaded all kic artifacts
	I1102 14:07:47.450170  463180 start.go:360] acquireMachinesLock for force-systemd-env-263133: {Name:mka653846468d0063475fc77c23aa0631aec2783 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1102 14:07:47.450276  463180 start.go:364] duration metric: took 86.122µs to acquireMachinesLock for "force-systemd-env-263133"
	I1102 14:07:47.450317  463180 start.go:93] Provisioning new machine with config: &{Name:force-systemd-env-263133 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-env-263133 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1102 14:07:47.450390  463180 start.go:125] createHost starting for "" (driver="docker")
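	(Editor's note: the cluster config logged above can be approximated with a plain minikube invocation. The sketch below is reconstructed from the logged flags — memory, CPUs, driver, runtime, Kubernetes version — and is not the literal harness command; the MINIKUBE_FORCE_SYSTEMD variable is an assumption inferred from the profile name.)
	
	  # assumption: the force-systemd-env test sets this environment variable
	  MINIKUBE_FORCE_SYSTEMD=true out/minikube-linux-arm64 start -p force-systemd-env-263133 \
	    --driver=docker --container-runtime=crio \
	    --memory=3072 --cpus=2 --kubernetes-version=v1.34.1 \
	    --alsologtostderr -v=5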
	
	
	==> CRI-O <==
	Nov 02 14:07:11 pause-061518 crio[2141]: time="2025-11-02T14:07:11.728857092Z" level=info msg="Started container" PID=2265 containerID=a078e67285aa32739fff8a60075bade751d806e4209f85785a210aa2213f3b4a description=kube-system/kube-scheduler-pause-061518/kube-scheduler id=b68286f1-3c6f-4eec-af2d-7533fa1e1ab6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5c2ee95b1113cf6501b895614f46f2825904afbe5cfc466ec41d9fae02e45950
	Nov 02 14:07:16 pause-061518 crio[2141]: time="2025-11-02T14:07:16.826739363Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.12.1" id=39031a5e-8294-434a-b166-0150eefadd02 name=/runtime.v1.ImageService/ImageStatus
	Nov 02 14:07:16 pause-061518 crio[2141]: time="2025-11-02T14:07:16.828194204Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.12.1" id=0df86428-c28f-4ece-b60d-1b939dec8237 name=/runtime.v1.ImageService/ImageStatus
	Nov 02 14:07:16 pause-061518 crio[2141]: time="2025-11-02T14:07:16.829441962Z" level=info msg="Creating container: kube-system/coredns-66bc5c9577-q47gx/coredns" id=93f818b5-d31b-420c-a7d5-f67fba6d21a6 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 14:07:16 pause-061518 crio[2141]: time="2025-11-02T14:07:16.829552347Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 14:07:16 pause-061518 crio[2141]: time="2025-11-02T14:07:16.846708098Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 14:07:16 pause-061518 crio[2141]: time="2025-11-02T14:07:16.848067283Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 14:07:16 pause-061518 crio[2141]: time="2025-11-02T14:07:16.908189689Z" level=info msg="Created container fe648dd9a1b34b78ab6a056cc3038a77c0ee16d62036e11c6dd2b12578580d7f: kube-system/coredns-66bc5c9577-q47gx/coredns" id=93f818b5-d31b-420c-a7d5-f67fba6d21a6 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 14:07:16 pause-061518 crio[2141]: time="2025-11-02T14:07:16.909083496Z" level=info msg="Starting container: fe648dd9a1b34b78ab6a056cc3038a77c0ee16d62036e11c6dd2b12578580d7f" id=2fb8c4d5-43dd-44d8-a7e1-82f513eec1e9 name=/runtime.v1.RuntimeService/StartContainer
	Nov 02 14:07:16 pause-061518 crio[2141]: time="2025-11-02T14:07:16.91091276Z" level=info msg="Started container" PID=2522 containerID=fe648dd9a1b34b78ab6a056cc3038a77c0ee16d62036e11c6dd2b12578580d7f description=kube-system/coredns-66bc5c9577-q47gx/coredns id=2fb8c4d5-43dd-44d8-a7e1-82f513eec1e9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=500b0edc13fbbe9d988284d7c05660067d5557cfcf22cc073c450ca1c48e98cb
	Nov 02 14:07:22 pause-061518 crio[2141]: time="2025-11-02T14:07:22.258249062Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 02 14:07:22 pause-061518 crio[2141]: time="2025-11-02T14:07:22.262595289Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 02 14:07:22 pause-061518 crio[2141]: time="2025-11-02T14:07:22.262780859Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 02 14:07:22 pause-061518 crio[2141]: time="2025-11-02T14:07:22.262861779Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 02 14:07:22 pause-061518 crio[2141]: time="2025-11-02T14:07:22.279895206Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 02 14:07:22 pause-061518 crio[2141]: time="2025-11-02T14:07:22.280074311Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 02 14:07:22 pause-061518 crio[2141]: time="2025-11-02T14:07:22.280153639Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 02 14:07:22 pause-061518 crio[2141]: time="2025-11-02T14:07:22.284216301Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 02 14:07:22 pause-061518 crio[2141]: time="2025-11-02T14:07:22.284398876Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 02 14:07:22 pause-061518 crio[2141]: time="2025-11-02T14:07:22.284490643Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 02 14:07:22 pause-061518 crio[2141]: time="2025-11-02T14:07:22.295738001Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 02 14:07:22 pause-061518 crio[2141]: time="2025-11-02T14:07:22.295936585Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 02 14:07:22 pause-061518 crio[2141]: time="2025-11-02T14:07:22.296054954Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 02 14:07:22 pause-061518 crio[2141]: time="2025-11-02T14:07:22.300135938Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 02 14:07:22 pause-061518 crio[2141]: time="2025-11-02T14:07:22.300296433Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	fe648dd9a1b34       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   31 seconds ago       Running             coredns                   1                   500b0edc13fbb       coredns-66bc5c9577-q47gx               kube-system
	a078e67285aa3       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   36 seconds ago       Running             kube-scheduler            1                   5c2ee95b1113c       kube-scheduler-pause-061518            kube-system
	064259d257d84       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   36 seconds ago       Running             kube-proxy                1                   e0698ff7db202       kube-proxy-dhvp4                       kube-system
	7cc87ca2e0fb4       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   36 seconds ago       Running             kube-controller-manager   1                   4d2173fa26e88       kube-controller-manager-pause-061518   kube-system
	8714b07decff9       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   36 seconds ago       Running             etcd                      1                   4d1440f91d289       etcd-pause-061518                      kube-system
	6708daaf97b2c       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   36 seconds ago       Running             kindnet-cni               1                   63dbd68b7dd3b       kindnet-gzstt                          kube-system
	10204b53afac1       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   36 seconds ago       Running             kube-apiserver            1                   7b28131227dd0       kube-apiserver-pause-061518            kube-system
	2221dd0d70440       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   49 seconds ago       Exited              coredns                   0                   500b0edc13fbb       coredns-66bc5c9577-q47gx               kube-system
	f00b63ed1dbdc       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   63dbd68b7dd3b       kindnet-gzstt                          kube-system
	159e020aa6612       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   About a minute ago   Exited              kube-proxy                0                   e0698ff7db202       kube-proxy-dhvp4                       kube-system
	ae957dadee95a       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   About a minute ago   Exited              kube-controller-manager   0                   4d2173fa26e88       kube-controller-manager-pause-061518   kube-system
	0aa8baef1b15f       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   About a minute ago   Exited              kube-apiserver            0                   7b28131227dd0       kube-apiserver-pause-061518            kube-system
	ac93be8092708       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   About a minute ago   Exited              kube-scheduler            0                   5c2ee95b1113c       kube-scheduler-pause-061518            kube-system
	94ad339adccb1       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   About a minute ago   Exited              etcd                      0                   4d1440f91d289       etcd-pause-061518                      kube-system
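	(Editor's note: the table above is in the column layout crictl prints. A minimal sketch for fetching it from inside the node — reconstructed, not taken from this log; the NAMESPACE column appears only with recent crictl releases:)
	
	  # run crictl on the node via minikube ssh
	  out/minikube-linux-arm64 -p pause-061518 ssh -- sudo crictl ps -a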
	
	
	==> coredns [2221dd0d7044082f120ee6769cce44bda1a305a571168c55bf2fd5d4afd5a992] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:33736 - 25424 "HINFO IN 3448274736711667910.7882703770854991141. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.044022878s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [fe648dd9a1b34b78ab6a056cc3038a77c0ee16d62036e11c6dd2b12578580d7f] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:41548 - 46122 "HINFO IN 4347422264761728212.3955537798875286010. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.017462627s
	
	
	==> describe nodes <==
	Name:               pause-061518
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-061518
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=170a9221ec214abbddb4c7cdac340516a92b239a
	                    minikube.k8s.io/name=pause-061518
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_02T14_06_11_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 02 Nov 2025 14:06:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-061518
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 02 Nov 2025 14:07:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 02 Nov 2025 14:07:40 +0000   Sun, 02 Nov 2025 14:06:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 02 Nov 2025 14:07:40 +0000   Sun, 02 Nov 2025 14:06:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 02 Nov 2025 14:07:40 +0000   Sun, 02 Nov 2025 14:06:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 02 Nov 2025 14:07:40 +0000   Sun, 02 Nov 2025 14:07:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-061518
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                4107c713-1cff-4c43-b6da-f922bb5196d7
	  Boot ID:                    4c303cf4-c0b6-4c0d-aa1a-d7509feb15e7
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-q47gx                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     92s
	  kube-system                 etcd-pause-061518                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         97s
	  kube-system                 kindnet-gzstt                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      92s
	  kube-system                 kube-apiserver-pause-061518             250m (12%)    0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 kube-controller-manager-pause-061518    200m (10%)    0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 kube-proxy-dhvp4                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 kube-scheduler-pause-061518             100m (5%)     0 (0%)      0 (0%)           0 (0%)         97s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age               From             Message
	  ----     ------                   ----              ----             -------
	  Normal   Starting                 90s               kube-proxy       
	  Normal   Starting                 27s               kube-proxy       
	  Normal   Starting                 98s               kubelet          Starting kubelet.
	  Warning  CgroupV1                 98s               kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  97s               kubelet          Node pause-061518 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    97s               kubelet          Node pause-061518 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     97s               kubelet          Node pause-061518 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           93s               node-controller  Node pause-061518 event: Registered Node pause-061518 in Controller
	  Warning  ContainerGCFailed        38s               kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeNotReady             37s               kubelet          Node pause-061518 status is now: NodeNotReady
	  Normal   RegisteredNode           25s               node-controller  Node pause-061518 event: Registered Node pause-061518 in Controller
	  Normal   NodeReady                8s (x2 over 51s)  kubelet          Node pause-061518 status is now: NodeReady
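	(Editor's note: the node description above matches kubectl's describe output. A sketch using minikube's bundled kubectl — reconstructed command, not taken from this log:)
	
	  # describe the control-plane node through the profile's kubectl
	  out/minikube-linux-arm64 -p pause-061518 kubectl -- describe node pause-061518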
	
	
	==> dmesg <==
	[Nov 2 13:41] overlayfs: idmapped layers are currently not supported
	[  +2.857048] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:42] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:43] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:45] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:49] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:50] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:51] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:52] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:54] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:55] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:56] overlayfs: idmapped layers are currently not supported
	[  +3.515963] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:57] overlayfs: idmapped layers are currently not supported
	[ +24.836033] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:58] overlayfs: idmapped layers are currently not supported
	[ +23.362553] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:59] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:01] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:02] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:03] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:04] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:06] overlayfs: idmapped layers are currently not supported
	[ +50.469589] kauditd_printk_skb: 8 callbacks suppressed
	[Nov 2 14:07] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [8714b07decff942fd4025ee48c0276ab6b5dc336243d3cda21750a2ecb2ce226] <==
	{"level":"warn","ts":"2025-11-02T14:07:17.061723Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45688","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:07:17.094501Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45716","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:07:17.115929Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:07:17.138868Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:07:17.187578Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:07:17.214160Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:07:17.256092Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:07:17.326953Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:07:17.365194Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:07:17.422854Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:07:17.469974Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45900","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:07:17.519035Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:07:17.549502Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:07:17.591564Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:07:17.647793Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45964","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:07:17.694840Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:07:17.727119Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:07:17.786814Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:07:17.882670Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:07:17.922732Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:07:17.946862Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:07:18.001164Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:07:18.069356Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:07:18.083858Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:07:18.235607Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46144","server-name":"","error":"EOF"}
	
	
	==> etcd [94ad339adccb1ec21310445baaf3d9ebec13d5dc8e46a56b760732d96605f7b4] <==
	{"level":"warn","ts":"2025-11-02T14:06:05.928867Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:06:05.957625Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:06:05.993296Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:06:06.038726Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:06:06.049885Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:06:06.078754Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:06:06.239180Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35936","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-02T14:07:03.084499Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-02T14:07:03.084569Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-061518","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"error","ts":"2025-11-02T14:07:03.084668Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-02T14:07:03.230318Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-02T14:07:03.230413Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-02T14:07:03.230470Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"info","ts":"2025-11-02T14:07:03.230579Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-11-02T14:07:03.230599Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-11-02T14:07:03.230662Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-02T14:07:03.230719Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-02T14:07:03.230756Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-02T14:07:03.230830Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-02T14:07:03.230851Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-02T14:07:03.230858Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-02T14:07:03.233870Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"error","ts":"2025-11-02T14:07:03.233947Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-02T14:07:03.233978Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-02T14:07:03.233986Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-061518","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	
	==> kernel <==
	 14:07:48 up  2:50,  0 user,  load average: 4.28, 3.55, 2.68
	Linux pause-061518 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [6708daaf97b2c00529bb1eb7cfa613199fd22e3f16eb6ffd5b598a9ef7022242] <==
	I1102 14:07:11.925961       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1102 14:07:11.968897       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1102 14:07:11.969077       1 main.go:148] setting mtu 1500 for CNI 
	I1102 14:07:11.969091       1 main.go:178] kindnetd IP family: "ipv4"
	I1102 14:07:11.969106       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-02T14:07:12Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1102 14:07:12.277825       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1102 14:07:12.280789       1 controller.go:381] "Waiting for informer caches to sync"
	I1102 14:07:12.280888       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1102 14:07:12.281119       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1102 14:07:19.811291       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1102 14:07:19.811387       1 metrics.go:72] Registering metrics
	I1102 14:07:19.811477       1 controller.go:711] "Syncing nftables rules"
	I1102 14:07:22.257690       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1102 14:07:22.257845       1 main.go:301] handling current node
	I1102 14:07:32.257804       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1102 14:07:32.257862       1 main.go:301] handling current node
	I1102 14:07:42.257992       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1102 14:07:42.258050       1 main.go:301] handling current node
	
	
	==> kindnet [f00b63ed1dbdc0c8a62dd0a1428a61db16efb887c8bdd767d48001c525792bfb] <==
	I1102 14:06:17.217683       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1102 14:06:17.223123       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1102 14:06:17.223266       1 main.go:148] setting mtu 1500 for CNI 
	I1102 14:06:17.223279       1 main.go:178] kindnetd IP family: "ipv4"
	I1102 14:06:17.223291       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-02T14:06:17Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1102 14:06:17.423167       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1102 14:06:17.423188       1 controller.go:381] "Waiting for informer caches to sync"
	I1102 14:06:17.423196       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1102 14:06:17.423522       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1102 14:06:47.420556       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1102 14:06:47.424137       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1102 14:06:47.424319       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1102 14:06:47.424452       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1102 14:06:48.723372       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1102 14:06:48.723410       1 metrics.go:72] Registering metrics
	I1102 14:06:48.723536       1 controller.go:711] "Syncing nftables rules"
	I1102 14:06:57.427809       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1102 14:06:57.427865       1 main.go:301] handling current node
	
	
	==> kube-apiserver [0aa8baef1b15fd455bdb2e283af175835406637d0cb8294c8d464a302dae79e3] <==
	W1102 14:07:03.105603       1 logging.go:55] [core] [Channel #135 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1102 14:07:03.110381       1 logging.go:55] [core] [Channel #167 SubChannel #169]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1102 14:07:03.110538       1 logging.go:55] [core] [Channel #179 SubChannel #181]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1102 14:07:03.110686       1 logging.go:55] [core] [Channel #163 SubChannel #165]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1102 14:07:03.110798       1 logging.go:55] [core] [Channel #95 SubChannel #97]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1102 14:07:03.110854       1 logging.go:55] [core] [Channel #175 SubChannel #177]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1102 14:07:03.110920       1 logging.go:55] [core] [Channel #199 SubChannel #201]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1102 14:07:03.110999       1 logging.go:55] [core] [Channel #235 SubChannel #237]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1102 14:07:03.111074       1 logging.go:55] [core] [Channel #139 SubChannel #141]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1102 14:07:03.111147       1 logging.go:55] [core] [Channel #155 SubChannel #157]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1102 14:07:03.111216       1 logging.go:55] [core] [Channel #159 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1102 14:07:03.111283       1 logging.go:55] [core] [Channel #231 SubChannel #233]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1102 14:07:03.111352       1 logging.go:55] [core] [Channel #243 SubChannel #245]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1102 14:07:03.111433       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1102 14:07:03.111512       1 logging.go:55] [core] [Channel #75 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1102 14:07:03.111557       1 logging.go:55] [core] [Channel #79 SubChannel #81]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1102 14:07:03.111600       1 logging.go:55] [core] [Channel #151 SubChannel #153]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1102 14:07:03.111697       1 logging.go:55] [core] [Channel #247 SubChannel #249]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1102 14:07:03.111802       1 logging.go:55] [core] [Channel #59 SubChannel #61]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1102 14:07:03.111856       1 logging.go:55] [core] [Channel #207 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1102 14:07:03.111945       1 logging.go:55] [core] [Channel #227 SubChannel #229]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1102 14:07:03.111951       1 logging.go:55] [core] [Channel #219 SubChannel #221]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1102 14:07:03.112022       1 logging.go:55] [core] [Channel #43 SubChannel #45]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1102 14:07:03.111293       1 logging.go:55] [core] [Channel #171 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1102 14:07:03.112099       1 logging.go:55] [core] [Channel #63 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [10204b53afac1f042e786f38e2e04d4a30adcfb8105276e3b34d58ff11271c3e] <==
	I1102 14:07:19.743972       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1102 14:07:19.755707       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1102 14:07:19.757328       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1102 14:07:19.757355       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1102 14:07:19.757484       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1102 14:07:19.758457       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1102 14:07:19.758573       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1102 14:07:19.758647       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1102 14:07:19.778042       1 aggregator.go:171] initial CRD sync complete...
	I1102 14:07:19.778076       1 autoregister_controller.go:144] Starting autoregister controller
	I1102 14:07:19.778083       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1102 14:07:19.778091       1 cache.go:39] Caches are synced for autoregister controller
	I1102 14:07:19.801084       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1102 14:07:19.801324       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1102 14:07:19.801408       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1102 14:07:19.801664       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1102 14:07:19.832200       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	E1102 14:07:19.836102       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1102 14:07:20.403021       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1102 14:07:21.769292       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1102 14:07:23.122268       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1102 14:07:23.241941       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1102 14:07:23.455639       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1102 14:07:23.510000       1 controller.go:667] quota admission added evaluator for: endpoints
	I1102 14:07:23.555154       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	
	
	==> kube-controller-manager [7cc87ca2e0fb4201499f6723b2128c59c8024eb2339a3950d288951bec336aee] <==
	I1102 14:07:23.091713       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1102 14:07:23.091766       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1102 14:07:23.091795       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1102 14:07:23.091013       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1102 14:07:23.098029       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1102 14:07:23.099114       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1102 14:07:23.110343       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1102 14:07:23.111692       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1102 14:07:23.127650       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1102 14:07:23.128002       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1102 14:07:23.128145       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-061518"
	I1102 14:07:23.128223       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1102 14:07:23.129702       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1102 14:07:23.135914       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1102 14:07:23.136032       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1102 14:07:23.159199       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1102 14:07:23.159555       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1102 14:07:23.159627       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1102 14:07:23.166015       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1102 14:07:23.166307       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1102 14:07:23.166325       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1102 14:07:23.166331       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1102 14:07:23.166704       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1102 14:07:23.167728       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1102 14:07:43.131207       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-controller-manager [ae957dadee95a103ab6c85ebdb01b8e6adb428cc4da4125285b154f08da38d8f] <==
	I1102 14:06:15.377783       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1102 14:06:15.379132       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1102 14:06:15.327369       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1102 14:06:15.330809       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1102 14:06:15.330824       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1102 14:06:15.385684       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1102 14:06:15.385970       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-061518"
	I1102 14:06:15.386011       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1102 14:06:15.344619       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1102 14:06:15.386178       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1102 14:06:15.346504       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1102 14:06:15.386494       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1102 14:06:15.387005       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1102 14:06:15.391329       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1102 14:06:15.393052       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1102 14:06:15.399850       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1102 14:06:15.414080       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1102 14:06:15.424606       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1102 14:06:15.427773       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-061518" podCIDRs=["10.244.0.0/24"]
	I1102 14:06:15.482365       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1102 14:06:15.484288       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1102 14:06:15.514755       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1102 14:06:15.514787       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1102 14:06:15.514796       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1102 14:07:00.392441       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [064259d257d843a7a5ebd3f8f9e506ad1e47483ed8dba25182d5593f8f683514] <==
	I1102 14:07:20.011035       1 server_linux.go:53] "Using iptables proxy"
	I1102 14:07:20.189882       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1102 14:07:20.290038       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1102 14:07:20.290081       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1102 14:07:20.290147       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1102 14:07:20.451058       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1102 14:07:20.451193       1 server_linux.go:132] "Using iptables Proxier"
	I1102 14:07:20.476428       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1102 14:07:20.476815       1 server.go:527] "Version info" version="v1.34.1"
	I1102 14:07:20.477074       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1102 14:07:20.484651       1 config.go:200] "Starting service config controller"
	I1102 14:07:20.490510       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1102 14:07:20.490657       1 config.go:106] "Starting endpoint slice config controller"
	I1102 14:07:20.490704       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1102 14:07:20.490743       1 config.go:403] "Starting serviceCIDR config controller"
	I1102 14:07:20.490785       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1102 14:07:20.491570       1 config.go:309] "Starting node config controller"
	I1102 14:07:20.491629       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1102 14:07:20.491659       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1102 14:07:20.591376       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1102 14:07:20.591467       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1102 14:07:20.591485       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [159e020aa661268214cc95c834348d2ec07c1d4118e8376af4a3980a9b57efa7] <==
	I1102 14:06:17.204615       1 server_linux.go:53] "Using iptables proxy"
	I1102 14:06:17.309402       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1102 14:06:17.429211       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1102 14:06:17.429249       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1102 14:06:17.429322       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1102 14:06:17.577234       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1102 14:06:17.577368       1 server_linux.go:132] "Using iptables Proxier"
	I1102 14:06:17.614466       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1102 14:06:17.614893       1 server.go:527] "Version info" version="v1.34.1"
	I1102 14:06:17.615108       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1102 14:06:17.616510       1 config.go:200] "Starting service config controller"
	I1102 14:06:17.616582       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1102 14:06:17.616625       1 config.go:106] "Starting endpoint slice config controller"
	I1102 14:06:17.616686       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1102 14:06:17.616723       1 config.go:403] "Starting serviceCIDR config controller"
	I1102 14:06:17.616766       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1102 14:06:17.638011       1 config.go:309] "Starting node config controller"
	I1102 14:06:17.638098       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1102 14:06:17.638107       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1102 14:06:17.730256       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1102 14:06:17.730325       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1102 14:06:17.719372       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [a078e67285aa32739fff8a60075bade751d806e4209f85785a210aa2213f3b4a] <==
	I1102 14:07:18.073880       1 serving.go:386] Generated self-signed cert in-memory
	I1102 14:07:19.886302       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1102 14:07:19.886339       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1102 14:07:19.915067       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1102 14:07:19.915177       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1102 14:07:19.915212       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1102 14:07:19.915242       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1102 14:07:19.922933       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1102 14:07:19.922954       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1102 14:07:19.924020       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1102 14:07:19.924049       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1102 14:07:20.019085       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1102 14:07:20.024282       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1102 14:07:20.024376       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kube-scheduler [ac93be8092708e4c95ac3e29db96792147c96910062a80a9d35220b9cd92a3bb] <==
	E1102 14:06:08.827379       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1102 14:06:08.827426       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1102 14:06:08.827472       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1102 14:06:08.827518       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1102 14:06:08.827557       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1102 14:06:08.827599       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1102 14:06:08.827693       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1102 14:06:08.827736       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1102 14:06:08.827879       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1102 14:06:08.827924       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1102 14:06:08.827976       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1102 14:06:08.828020       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1102 14:06:08.828065       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1102 14:06:08.828264       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	I1102 14:06:08.800617       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1102 14:06:08.836624       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1102 14:06:08.836819       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1102 14:06:08.842731       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	I1102 14:06:10.310729       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1102 14:07:03.082326       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1102 14:07:03.082425       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1102 14:07:03.082437       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1102 14:07:03.082461       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1102 14:07:03.082549       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1102 14:07:03.082564       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Nov 02 14:07:11 pause-061518 kubelet[1353]: E1102 14:07:11.329001    1353 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-gzstt\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="9f4ffddb-2c0f-41a8-a925-a9e6fccedf09" pod="kube-system/kindnet-gzstt"
	Nov 02 14:07:11 pause-061518 kubelet[1353]: E1102 14:07:11.329189    1353 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-q47gx\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="575902e5-17fa-4d63-9aed-6ea6c29955fa" pod="kube-system/coredns-66bc5c9577-q47gx"
	Nov 02 14:07:11 pause-061518 kubelet[1353]: I1102 14:07:11.368209    1353 scope.go:117] "RemoveContainer" containerID="ae957dadee95a103ab6c85ebdb01b8e6adb428cc4da4125285b154f08da38d8f"
	Nov 02 14:07:11 pause-061518 kubelet[1353]: E1102 14:07:11.368864    1353 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-061518\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="942466de5a9d735234539dbd8eaf0cd1" pod="kube-system/etcd-pause-061518"
	Nov 02 14:07:11 pause-061518 kubelet[1353]: E1102 14:07:11.369103    1353 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-061518\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="6322eebbe2e85d483168aa95bb946270" pod="kube-system/kube-apiserver-pause-061518"
	Nov 02 14:07:11 pause-061518 kubelet[1353]: E1102 14:07:11.369292    1353 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-061518\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="29904a253215adf7d55276c689cee701" pod="kube-system/kube-controller-manager-pause-061518"
	Nov 02 14:07:11 pause-061518 kubelet[1353]: E1102 14:07:11.369499    1353 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dhvp4\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="a748cacd-1a6c-44da-b8a3-cf76af722681" pod="kube-system/kube-proxy-dhvp4"
	Nov 02 14:07:11 pause-061518 kubelet[1353]: E1102 14:07:11.369651    1353 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-gzstt\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="9f4ffddb-2c0f-41a8-a925-a9e6fccedf09" pod="kube-system/kindnet-gzstt"
	Nov 02 14:07:11 pause-061518 kubelet[1353]: E1102 14:07:11.369831    1353 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-q47gx\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="575902e5-17fa-4d63-9aed-6ea6c29955fa" pod="kube-system/coredns-66bc5c9577-q47gx"
	Nov 02 14:07:11 pause-061518 kubelet[1353]: E1102 14:07:11.369991    1353 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-061518\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="bcbad24fdddf8b5430da392c28184fb3" pod="kube-system/kube-scheduler-pause-061518"
	Nov 02 14:07:11 pause-061518 kubelet[1353]: I1102 14:07:11.388097    1353 scope.go:117] "RemoveContainer" containerID="ac93be8092708e4c95ac3e29db96792147c96910062a80a9d35220b9cd92a3bb"
	Nov 02 14:07:11 pause-061518 kubelet[1353]: E1102 14:07:11.388891    1353 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dhvp4\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="a748cacd-1a6c-44da-b8a3-cf76af722681" pod="kube-system/kube-proxy-dhvp4"
	Nov 02 14:07:11 pause-061518 kubelet[1353]: E1102 14:07:11.389079    1353 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-gzstt\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="9f4ffddb-2c0f-41a8-a925-a9e6fccedf09" pod="kube-system/kindnet-gzstt"
	Nov 02 14:07:11 pause-061518 kubelet[1353]: E1102 14:07:11.389240    1353 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-q47gx\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="575902e5-17fa-4d63-9aed-6ea6c29955fa" pod="kube-system/coredns-66bc5c9577-q47gx"
	Nov 02 14:07:11 pause-061518 kubelet[1353]: E1102 14:07:11.389409    1353 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-061518\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="bcbad24fdddf8b5430da392c28184fb3" pod="kube-system/kube-scheduler-pause-061518"
	Nov 02 14:07:11 pause-061518 kubelet[1353]: E1102 14:07:11.389608    1353 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-061518\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="942466de5a9d735234539dbd8eaf0cd1" pod="kube-system/etcd-pause-061518"
	Nov 02 14:07:11 pause-061518 kubelet[1353]: E1102 14:07:11.389769    1353 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-061518\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="6322eebbe2e85d483168aa95bb946270" pod="kube-system/kube-apiserver-pause-061518"
	Nov 02 14:07:11 pause-061518 kubelet[1353]: E1102 14:07:11.389955    1353 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-061518\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="29904a253215adf7d55276c689cee701" pod="kube-system/kube-controller-manager-pause-061518"
	Nov 02 14:07:11 pause-061518 kubelet[1353]: I1102 14:07:11.988683    1353 setters.go:543] "Node became not ready" node="pause-061518" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-02T14:07:11Z","lastTransitionTime":"2025-11-02T14:07:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: plugin status uninitialized"}
	Nov 02 14:07:12 pause-061518 kubelet[1353]: E1102 14:07:12.827032    1353 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: plugin status uninitialized" pod="kube-system/coredns-66bc5c9577-q47gx" podUID="575902e5-17fa-4d63-9aed-6ea6c29955fa"
	Nov 02 14:07:14 pause-061518 kubelet[1353]: E1102 14:07:14.825064    1353 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: plugin status uninitialized" pod="kube-system/coredns-66bc5c9577-q47gx" podUID="575902e5-17fa-4d63-9aed-6ea6c29955fa"
	Nov 02 14:07:16 pause-061518 kubelet[1353]: I1102 14:07:16.825721    1353 scope.go:117] "RemoveContainer" containerID="2221dd0d7044082f120ee6769cce44bda1a305a571168c55bf2fd5d4afd5a992"
	Nov 02 14:07:42 pause-061518 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 02 14:07:42 pause-061518 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 02 14:07:42 pause-061518 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-061518 -n pause-061518
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-061518 -n pause-061518: exit status 2 (431.223534ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-061518 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (7.87s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.52s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-873713 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-873713 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (274.433705ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T14:10:21Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-873713 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-873713 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-873713 describe deploy/metrics-server -n kube-system: exit status 1 (92.477437ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-873713 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-873713
helpers_test.go:243: (dbg) docker inspect old-k8s-version-873713:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4ee7404b4a6ac58ace4f7c3005215815a212bd879029eed27a35560a4c57fe56",
	        "Created": "2025-11-02T14:09:16.892897675Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 471721,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-02T14:09:16.958270618Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/4ee7404b4a6ac58ace4f7c3005215815a212bd879029eed27a35560a4c57fe56/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4ee7404b4a6ac58ace4f7c3005215815a212bd879029eed27a35560a4c57fe56/hostname",
	        "HostsPath": "/var/lib/docker/containers/4ee7404b4a6ac58ace4f7c3005215815a212bd879029eed27a35560a4c57fe56/hosts",
	        "LogPath": "/var/lib/docker/containers/4ee7404b4a6ac58ace4f7c3005215815a212bd879029eed27a35560a4c57fe56/4ee7404b4a6ac58ace4f7c3005215815a212bd879029eed27a35560a4c57fe56-json.log",
	        "Name": "/old-k8s-version-873713",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-873713:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-873713",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4ee7404b4a6ac58ace4f7c3005215815a212bd879029eed27a35560a4c57fe56",
	                "LowerDir": "/var/lib/docker/overlay2/02fe55438eff7f4b248d251e0fd41254d206cd6322b4309218b237305b27175b-init/diff:/var/lib/docker/overlay2/53058afe6acd7391f639f551e07f446f6a39a991b699aea18507cb21a1652e97/diff",
	                "MergedDir": "/var/lib/docker/overlay2/02fe55438eff7f4b248d251e0fd41254d206cd6322b4309218b237305b27175b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/02fe55438eff7f4b248d251e0fd41254d206cd6322b4309218b237305b27175b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/02fe55438eff7f4b248d251e0fd41254d206cd6322b4309218b237305b27175b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-873713",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-873713/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-873713",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-873713",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-873713",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "51b2f2e58591555116b605c537ec1f9aa271259ce3ba95e071b9639808997596",
	            "SandboxKey": "/var/run/docker/netns/51b2f2e58591",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33421"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33422"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33425"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33423"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33424"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-873713": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "12:5d:79:35:73:21",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "174273846e47bcb425298d38a31d82d3ed621bb4662ffd28cfa6393ea0333640",
	                    "EndpointID": "a8a121d7656a803db8780a1ce54fe8633114617e79aa001f38a2ba01a36a8044",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-873713",
	                        "4ee7404b4a6a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-873713 -n old-k8s-version-873713
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-873713 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-873713 logs -n 25: (1.25200256s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-143736 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                     │ cilium-143736            │ jenkins │ v1.37.0 │ 02 Nov 25 14:07 UTC │                     │
	│ ssh     │ -p cilium-143736 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                     │ cilium-143736            │ jenkins │ v1.37.0 │ 02 Nov 25 14:07 UTC │                     │
	│ ssh     │ -p cilium-143736 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ cilium-143736            │ jenkins │ v1.37.0 │ 02 Nov 25 14:07 UTC │                     │
	│ ssh     │ -p cilium-143736 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ cilium-143736            │ jenkins │ v1.37.0 │ 02 Nov 25 14:07 UTC │                     │
	│ ssh     │ -p cilium-143736 sudo cri-dockerd --version                                                                                                                                                                                                   │ cilium-143736            │ jenkins │ v1.37.0 │ 02 Nov 25 14:07 UTC │                     │
	│ ssh     │ -p cilium-143736 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-143736            │ jenkins │ v1.37.0 │ 02 Nov 25 14:07 UTC │                     │
	│ ssh     │ -p cilium-143736 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-143736            │ jenkins │ v1.37.0 │ 02 Nov 25 14:07 UTC │                     │
	│ ssh     │ -p cilium-143736 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-143736            │ jenkins │ v1.37.0 │ 02 Nov 25 14:07 UTC │                     │
	│ ssh     │ -p cilium-143736 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-143736            │ jenkins │ v1.37.0 │ 02 Nov 25 14:07 UTC │                     │
	│ ssh     │ -p cilium-143736 sudo containerd config dump                                                                                                                                                                                                  │ cilium-143736            │ jenkins │ v1.37.0 │ 02 Nov 25 14:07 UTC │                     │
	│ ssh     │ -p cilium-143736 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-143736            │ jenkins │ v1.37.0 │ 02 Nov 25 14:07 UTC │                     │
	│ ssh     │ -p cilium-143736 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-143736            │ jenkins │ v1.37.0 │ 02 Nov 25 14:07 UTC │                     │
	│ ssh     │ -p cilium-143736 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-143736            │ jenkins │ v1.37.0 │ 02 Nov 25 14:07 UTC │                     │
	│ ssh     │ -p cilium-143736 sudo crio config                                                                                                                                                                                                             │ cilium-143736            │ jenkins │ v1.37.0 │ 02 Nov 25 14:07 UTC │                     │
	│ delete  │ -p cilium-143736                                                                                                                                                                                                                              │ cilium-143736            │ jenkins │ v1.37.0 │ 02 Nov 25 14:07 UTC │ 02 Nov 25 14:07 UTC │
	│ start   │ -p force-systemd-env-263133 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-263133 │ jenkins │ v1.37.0 │ 02 Nov 25 14:07 UTC │ 02 Nov 25 14:08 UTC │
	│ delete  │ -p pause-061518                                                                                                                                                                                                                               │ pause-061518             │ jenkins │ v1.37.0 │ 02 Nov 25 14:07 UTC │ 02 Nov 25 14:07 UTC │
	│ start   │ -p cert-expiration-114321 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-114321   │ jenkins │ v1.37.0 │ 02 Nov 25 14:07 UTC │ 02 Nov 25 14:08 UTC │
	│ delete  │ -p force-systemd-env-263133                                                                                                                                                                                                                   │ force-systemd-env-263133 │ jenkins │ v1.37.0 │ 02 Nov 25 14:08 UTC │ 02 Nov 25 14:08 UTC │
	│ start   │ -p cert-options-935084 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-935084      │ jenkins │ v1.37.0 │ 02 Nov 25 14:08 UTC │ 02 Nov 25 14:09 UTC │
	│ ssh     │ cert-options-935084 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-935084      │ jenkins │ v1.37.0 │ 02 Nov 25 14:09 UTC │ 02 Nov 25 14:09 UTC │
	│ ssh     │ -p cert-options-935084 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-935084      │ jenkins │ v1.37.0 │ 02 Nov 25 14:09 UTC │ 02 Nov 25 14:09 UTC │
	│ delete  │ -p cert-options-935084                                                                                                                                                                                                                        │ cert-options-935084      │ jenkins │ v1.37.0 │ 02 Nov 25 14:09 UTC │ 02 Nov 25 14:09 UTC │
	│ start   │ -p old-k8s-version-873713 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-873713   │ jenkins │ v1.37.0 │ 02 Nov 25 14:09 UTC │ 02 Nov 25 14:10 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-873713 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-873713   │ jenkins │ v1.37.0 │ 02 Nov 25 14:10 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/02 14:09:10
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1102 14:09:10.151195  471330 out.go:360] Setting OutFile to fd 1 ...
	I1102 14:09:10.151371  471330 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 14:09:10.151402  471330 out.go:374] Setting ErrFile to fd 2...
	I1102 14:09:10.151408  471330 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 14:09:10.151802  471330 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-293314/.minikube/bin
	I1102 14:09:10.152381  471330 out.go:368] Setting JSON to false
	I1102 14:09:10.153391  471330 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10303,"bootTime":1762082248,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1102 14:09:10.153485  471330 start.go:143] virtualization:  
	I1102 14:09:10.157735  471330 out.go:179] * [old-k8s-version-873713] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1102 14:09:10.162404  471330 out.go:179]   - MINIKUBE_LOCATION=21808
	I1102 14:09:10.162460  471330 notify.go:221] Checking for updates...
	I1102 14:09:10.168993  471330 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1102 14:09:10.172442  471330 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-293314/kubeconfig
	I1102 14:09:10.175836  471330 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-293314/.minikube
	I1102 14:09:10.179097  471330 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1102 14:09:10.182348  471330 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1102 14:09:10.186221  471330 config.go:182] Loaded profile config "cert-expiration-114321": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 14:09:10.186340  471330 driver.go:422] Setting default libvirt URI to qemu:///system
	I1102 14:09:10.228469  471330 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1102 14:09:10.229114  471330 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 14:09:10.298832  471330 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-02 14:09:10.288682009 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1102 14:09:10.298946  471330 docker.go:319] overlay module found
	I1102 14:09:10.302368  471330 out.go:179] * Using the docker driver based on user configuration
	I1102 14:09:10.305475  471330 start.go:309] selected driver: docker
	I1102 14:09:10.305498  471330 start.go:930] validating driver "docker" against <nil>
	I1102 14:09:10.305513  471330 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1102 14:09:10.306295  471330 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 14:09:10.361550  471330 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-02 14:09:10.351795595 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
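Note: the two `docker system info --format "{{json .}}"` probes above are how minikube validates the host daemon (storage driver, cgroup driver, CPU and memory budget) before committing to the docker driver. A minimal sketch of the same check, assuming a local Docker daemon and `jq` on PATH, with the field names taken from the /info payload echoed in the log:

    docker system info --format '{{json .}}' |
      jq '{driver: .Driver, cgroup: .CgroupDriver, cpus: .NCPU, memBytes: .MemTotal}'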
	I1102 14:09:10.361710  471330 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1102 14:09:10.361944  471330 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1102 14:09:10.365101  471330 out.go:179] * Using Docker driver with root privileges
	I1102 14:09:10.368062  471330 cni.go:84] Creating CNI manager for ""
	I1102 14:09:10.368129  471330 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1102 14:09:10.368164  471330 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1102 14:09:10.368248  471330 start.go:353] cluster config:
	{Name:old-k8s-version-873713 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-873713 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1102 14:09:10.371383  471330 out.go:179] * Starting "old-k8s-version-873713" primary control-plane node in "old-k8s-version-873713" cluster
	I1102 14:09:10.374299  471330 cache.go:124] Beginning downloading kic base image for docker with crio
	I1102 14:09:10.377302  471330 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1102 14:09:10.380241  471330 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1102 14:09:10.380306  471330 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21808-293314/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1102 14:09:10.380320  471330 cache.go:59] Caching tarball of preloaded images
	I1102 14:09:10.380342  471330 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1102 14:09:10.380416  471330 preload.go:233] Found /home/jenkins/minikube-integration/21808-293314/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1102 14:09:10.380428  471330 cache.go:62] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1102 14:09:10.380536  471330 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/old-k8s-version-873713/config.json ...
	I1102 14:09:10.380553  471330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/old-k8s-version-873713/config.json: {Name:mk715b57b6d64223a12c4771d99c5127104cdfcf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 14:09:10.401174  471330 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1102 14:09:10.401201  471330 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1102 14:09:10.401213  471330 cache.go:233] Successfully downloaded all kic artifacts
	I1102 14:09:10.401236  471330 start.go:360] acquireMachinesLock for old-k8s-version-873713: {Name:mka01d37237309eb47deb14d56336a344def0d96 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1102 14:09:10.401336  471330 start.go:364] duration metric: took 85.564µs to acquireMachinesLock for "old-k8s-version-873713"
	I1102 14:09:10.401360  471330 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-873713 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-873713 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1102 14:09:10.401422  471330 start.go:125] createHost starting for "" (driver="docker")
	I1102 14:09:10.404871  471330 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1102 14:09:10.405117  471330 start.go:159] libmachine.API.Create for "old-k8s-version-873713" (driver="docker")
	I1102 14:09:10.405163  471330 client.go:173] LocalClient.Create starting
	I1102 14:09:10.405240  471330 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem
	I1102 14:09:10.405276  471330 main.go:143] libmachine: Decoding PEM data...
	I1102 14:09:10.405351  471330 main.go:143] libmachine: Parsing certificate...
	I1102 14:09:10.405423  471330 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21808-293314/.minikube/certs/cert.pem
	I1102 14:09:10.405450  471330 main.go:143] libmachine: Decoding PEM data...
	I1102 14:09:10.405464  471330 main.go:143] libmachine: Parsing certificate...
	I1102 14:09:10.405856  471330 cli_runner.go:164] Run: docker network inspect old-k8s-version-873713 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1102 14:09:10.425569  471330 cli_runner.go:211] docker network inspect old-k8s-version-873713 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1102 14:09:10.425667  471330 network_create.go:284] running [docker network inspect old-k8s-version-873713] to gather additional debugging logs...
	I1102 14:09:10.425690  471330 cli_runner.go:164] Run: docker network inspect old-k8s-version-873713
	W1102 14:09:10.440181  471330 cli_runner.go:211] docker network inspect old-k8s-version-873713 returned with exit code 1
	I1102 14:09:10.440211  471330 network_create.go:287] error running [docker network inspect old-k8s-version-873713]: docker network inspect old-k8s-version-873713: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-873713 not found
	I1102 14:09:10.440226  471330 network_create.go:289] output of [docker network inspect old-k8s-version-873713]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-873713 not found
	
	** /stderr **
	I1102 14:09:10.440336  471330 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1102 14:09:10.456284  471330 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-ddf319108ac9 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:66:f7:2d:49:67:ff} reservation:<nil>}
	I1102 14:09:10.456640  471330 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-30b945568040 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:b6:b2:b0:cb:49:d7} reservation:<nil>}
	I1102 14:09:10.456871  471330 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-d23a3a2e266d IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:fa:42:95:8e:ae:52} reservation:<nil>}
	I1102 14:09:10.457287  471330 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a4f630}
	I1102 14:09:10.457303  471330 network_create.go:124] attempt to create docker network old-k8s-version-873713 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1102 14:09:10.457353  471330 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-873713 old-k8s-version-873713
	I1102 14:09:10.516613  471330 network_create.go:108] docker network old-k8s-version-873713 192.168.76.0/24 created
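Note: the network.go lines above show the free-subnet scan: candidate /24s are tried in order (192.168.49.0, .58.0, .67.0, ...) and any CIDR already held by an existing bridge is skipped, so 192.168.76.0/24 is the first free one here. A rough equivalent of that scan, assuming only a local Docker daemon:

    # list every bridge network and the subnet(s) it reserves
    for net in $(docker network ls --filter driver=bridge --format '{{.Name}}'); do
      printf '%s: ' "$net"
      docker network inspect "$net" --format '{{range .IPAM.Config}}{{.Subnet}} {{end}}'
    done

The first candidate missing from that list is then claimed with the `docker network create --driver=bridge --subnet=... --gateway=...` call logged above.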
	I1102 14:09:10.516648  471330 kic.go:121] calculated static IP "192.168.76.2" for the "old-k8s-version-873713" container
	I1102 14:09:10.516742  471330 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1102 14:09:10.533363  471330 cli_runner.go:164] Run: docker volume create old-k8s-version-873713 --label name.minikube.sigs.k8s.io=old-k8s-version-873713 --label created_by.minikube.sigs.k8s.io=true
	I1102 14:09:10.551291  471330 oci.go:103] Successfully created a docker volume old-k8s-version-873713
	I1102 14:09:10.551385  471330 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-873713-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-873713 --entrypoint /usr/bin/test -v old-k8s-version-873713:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1102 14:09:11.106810  471330 oci.go:107] Successfully prepared a docker volume old-k8s-version-873713
	I1102 14:09:11.106866  471330 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1102 14:09:11.106887  471330 kic.go:194] Starting extracting preloaded images to volume ...
	I1102 14:09:11.106977  471330 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21808-293314/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-873713:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1102 14:09:16.809756  471330 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21808-293314/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-873713:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (5.702736304s)
	I1102 14:09:16.809789  471330 kic.go:203] duration metric: took 5.702898331s to extract preloaded images to volume ...
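Note: the preload step above materializes the cached images by untarring an lz4 archive straight into the named Docker volume that later becomes the node's /var. A standalone sketch of the same trick, where $PRELOAD_TARBALL is a placeholder for the .tar.lz4 path shown in the log:

    docker volume create preload-demo
    # tar runs inside the kicbase image; -I lz4 selects the decompressor
    docker run --rm --entrypoint /usr/bin/tar \
      -v "$PRELOAD_TARBALL:/preloaded.tar:ro" \
      -v preload-demo:/extractDir \
      gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 \
      -I lz4 -xf /preloaded.tar -C /extractDir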
	W1102 14:09:16.809925  471330 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1102 14:09:16.810037  471330 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1102 14:09:16.876682  471330 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-873713 --name old-k8s-version-873713 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-873713 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-873713 --network old-k8s-version-873713 --ip 192.168.76.2 --volume old-k8s-version-873713:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1102 14:09:17.205107  471330 cli_runner.go:164] Run: docker container inspect old-k8s-version-873713 --format={{.State.Running}}
	I1102 14:09:17.227695  471330 cli_runner.go:164] Run: docker container inspect old-k8s-version-873713 --format={{.State.Status}}
	I1102 14:09:17.251943  471330 cli_runner.go:164] Run: docker exec old-k8s-version-873713 stat /var/lib/dpkg/alternatives/iptables
	I1102 14:09:17.324176  471330 oci.go:144] the created container "old-k8s-version-873713" has a running status.
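Note: the `docker run` above publishes the guest's 22, 2376, 5000, 8443 and 32443 ports on random loopback ports (`--publish=127.0.0.1::22` and friends), so every later SSH step first has to resolve the mapping. The inspect template used throughout the rest of the log, runnable on its own:

    # which host port did Docker assign to the guest's sshd?
    docker container inspect old-k8s-version-873713 \
      --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'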
	I1102 14:09:17.324208  471330 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21808-293314/.minikube/machines/old-k8s-version-873713/id_rsa...
	I1102 14:09:17.621426  471330 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21808-293314/.minikube/machines/old-k8s-version-873713/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1102 14:09:17.646029  471330 cli_runner.go:164] Run: docker container inspect old-k8s-version-873713 --format={{.State.Status}}
	I1102 14:09:17.674111  471330 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1102 14:09:17.674141  471330 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-873713 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1102 14:09:17.763649  471330 cli_runner.go:164] Run: docker container inspect old-k8s-version-873713 --format={{.State.Status}}
	I1102 14:09:17.792603  471330 machine.go:94] provisionDockerMachine start ...
	I1102 14:09:17.792722  471330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-873713
	I1102 14:09:17.810326  471330 main.go:143] libmachine: Using SSH client type: native
	I1102 14:09:17.811014  471330 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33421 <nil> <nil>}
	I1102 14:09:17.811037  471330 main.go:143] libmachine: About to run SSH command:
	hostname
	I1102 14:09:17.811634  471330 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1102 14:09:20.962213  471330 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-873713
	
	I1102 14:09:20.962263  471330 ubuntu.go:182] provisioning hostname "old-k8s-version-873713"
	I1102 14:09:20.962354  471330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-873713
	I1102 14:09:20.979797  471330 main.go:143] libmachine: Using SSH client type: native
	I1102 14:09:20.980140  471330 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33421 <nil> <nil>}
	I1102 14:09:20.980157  471330 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-873713 && echo "old-k8s-version-873713" | sudo tee /etc/hostname
	I1102 14:09:21.148071  471330 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-873713
	
	I1102 14:09:21.148229  471330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-873713
	I1102 14:09:21.167263  471330 main.go:143] libmachine: Using SSH client type: native
	I1102 14:09:21.167594  471330 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33421 <nil> <nil>}
	I1102 14:09:21.167619  471330 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-873713' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-873713/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-873713' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1102 14:09:21.319078  471330 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1102 14:09:21.319103  471330 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21808-293314/.minikube CaCertPath:/home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21808-293314/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21808-293314/.minikube}
	I1102 14:09:21.319132  471330 ubuntu.go:190] setting up certificates
	I1102 14:09:21.319157  471330 provision.go:84] configureAuth start
	I1102 14:09:21.319214  471330 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-873713
	I1102 14:09:21.338876  471330 provision.go:143] copyHostCerts
	I1102 14:09:21.338945  471330 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-293314/.minikube/ca.pem, removing ...
	I1102 14:09:21.338954  471330 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-293314/.minikube/ca.pem
	I1102 14:09:21.339032  471330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21808-293314/.minikube/ca.pem (1082 bytes)
	I1102 14:09:21.339133  471330 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-293314/.minikube/cert.pem, removing ...
	I1102 14:09:21.339138  471330 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-293314/.minikube/cert.pem
	I1102 14:09:21.339163  471330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21808-293314/.minikube/cert.pem (1123 bytes)
	I1102 14:09:21.339222  471330 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-293314/.minikube/key.pem, removing ...
	I1102 14:09:21.339227  471330 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-293314/.minikube/key.pem
	I1102 14:09:21.339250  471330 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21808-293314/.minikube/key.pem (1675 bytes)
	I1102 14:09:21.339303  471330 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21808-293314/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-873713 san=[127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-873713]
	I1102 14:09:22.384391  471330 provision.go:177] copyRemoteCerts
	I1102 14:09:22.384481  471330 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1102 14:09:22.384552  471330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-873713
	I1102 14:09:22.402878  471330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33421 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/old-k8s-version-873713/id_rsa Username:docker}
	I1102 14:09:22.508230  471330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1102 14:09:22.535425  471330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1102 14:09:22.563436  471330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1102 14:09:22.586038  471330 provision.go:87] duration metric: took 1.266855876s to configureAuth
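Note: configureAuth above minted machines/server.pem with the SAN set [127.0.0.1 192.168.76.2 localhost minikube old-k8s-version-873713] and scp'd it into /etc/docker on the node. To eyeball the SANs on such a cert with plain openssl ($SERVER_PEM standing in for the server.pem path from the log):

    openssl x509 -noout -text -in "$SERVER_PEM" | grep -A1 'Subject Alternative Name'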
	I1102 14:09:22.586067  471330 ubuntu.go:206] setting minikube options for container-runtime
	I1102 14:09:22.586251  471330 config.go:182] Loaded profile config "old-k8s-version-873713": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1102 14:09:22.586370  471330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-873713
	I1102 14:09:22.604715  471330 main.go:143] libmachine: Using SSH client type: native
	I1102 14:09:22.605136  471330 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33421 <nil> <nil>}
	I1102 14:09:22.605158  471330 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1102 14:09:22.867697  471330 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1102 14:09:22.867781  471330 machine.go:97] duration metric: took 5.075149668s to provisionDockerMachine
	I1102 14:09:22.867814  471330 client.go:176] duration metric: took 12.462631852s to LocalClient.Create
	I1102 14:09:22.867864  471330 start.go:167] duration metric: took 12.462748308s to libmachine.API.Create "old-k8s-version-873713"
	I1102 14:09:22.867889  471330 start.go:293] postStartSetup for "old-k8s-version-873713" (driver="docker")
	I1102 14:09:22.867931  471330 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1102 14:09:22.868019  471330 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1102 14:09:22.868086  471330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-873713
	I1102 14:09:22.884995  471330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33421 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/old-k8s-version-873713/id_rsa Username:docker}
	I1102 14:09:22.990568  471330 ssh_runner.go:195] Run: cat /etc/os-release
	I1102 14:09:22.994208  471330 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1102 14:09:22.994237  471330 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1102 14:09:22.994249  471330 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-293314/.minikube/addons for local assets ...
	I1102 14:09:22.994304  471330 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-293314/.minikube/files for local assets ...
	I1102 14:09:22.994392  471330 filesync.go:149] local asset: /home/jenkins/minikube-integration/21808-293314/.minikube/files/etc/ssl/certs/2951742.pem -> 2951742.pem in /etc/ssl/certs
	I1102 14:09:22.994496  471330 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1102 14:09:23.002190  471330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/files/etc/ssl/certs/2951742.pem --> /etc/ssl/certs/2951742.pem (1708 bytes)
	I1102 14:09:23.023879  471330 start.go:296] duration metric: took 155.944682ms for postStartSetup
	I1102 14:09:23.024303  471330 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-873713
	I1102 14:09:23.042797  471330 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/old-k8s-version-873713/config.json ...
	I1102 14:09:23.043161  471330 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1102 14:09:23.043715  471330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-873713
	I1102 14:09:23.064430  471330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33421 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/old-k8s-version-873713/id_rsa Username:docker}
	I1102 14:09:23.167827  471330 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1102 14:09:23.172722  471330 start.go:128] duration metric: took 12.771285174s to createHost
	I1102 14:09:23.172746  471330 start.go:83] releasing machines lock for "old-k8s-version-873713", held for 12.771401793s
	I1102 14:09:23.172818  471330 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-873713
	I1102 14:09:23.190048  471330 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/295174.pem (1338 bytes)
	W1102 14:09:23.190126  471330 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-293314/.minikube/certs/295174_empty.pem, impossibly tiny 0 bytes
	I1102 14:09:23.190141  471330 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca-key.pem (1679 bytes)
	I1102 14:09:23.190168  471330 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem (1082 bytes)
	I1102 14:09:23.190197  471330 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/cert.pem (1123 bytes)
	I1102 14:09:23.190222  471330 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/key.pem (1675 bytes)
	I1102 14:09:23.190270  471330 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/files/etc/ssl/certs/2951742.pem (1708 bytes)
	I1102 14:09:23.190343  471330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1102 14:09:23.190404  471330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-873713
	I1102 14:09:23.214525  471330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33421 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/old-k8s-version-873713/id_rsa Username:docker}
	I1102 14:09:23.328751  471330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/certs/295174.pem --> /usr/share/ca-certificates/295174.pem (1338 bytes)
	I1102 14:09:23.347098  471330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/files/etc/ssl/certs/2951742.pem --> /usr/share/ca-certificates/2951742.pem (1708 bytes)
	I1102 14:09:23.366845  471330 ssh_runner.go:195] Run: openssl version
	I1102 14:09:23.373599  471330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/295174.pem && ln -fs /usr/share/ca-certificates/295174.pem /etc/ssl/certs/295174.pem"
	I1102 14:09:23.381888  471330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/295174.pem
	I1102 14:09:23.385842  471330 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  2 13:20 /usr/share/ca-certificates/295174.pem
	I1102 14:09:23.385955  471330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/295174.pem
	I1102 14:09:23.427657  471330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/295174.pem /etc/ssl/certs/51391683.0"
	I1102 14:09:23.441998  471330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2951742.pem && ln -fs /usr/share/ca-certificates/2951742.pem /etc/ssl/certs/2951742.pem"
	I1102 14:09:23.450771  471330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2951742.pem
	I1102 14:09:23.454570  471330 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  2 13:20 /usr/share/ca-certificates/2951742.pem
	I1102 14:09:23.454675  471330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2951742.pem
	I1102 14:09:23.496834  471330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2951742.pem /etc/ssl/certs/3ec20f2e.0"
	I1102 14:09:23.515355  471330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1102 14:09:23.524189  471330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1102 14:09:23.528476  471330 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  2 13:13 /usr/share/ca-certificates/minikubeCA.pem
	I1102 14:09:23.528557  471330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1102 14:09:23.572318  471330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1102 14:09:23.580820  471330 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-certificates >/dev/null 2>&1 && sudo update-ca-certificates || true"
	I1102 14:09:23.584439  471330 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-trust >/dev/null 2>&1 && sudo update-ca-trust extract || true"
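Note: the hex names being linked above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject-hash names: TLS clients find a CA by hashing its subject and looking up <hash>.0 under /etc/ssl/certs. The link the log creates for minikubeCA, spelled out:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"   # h is b5213941 here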
	I1102 14:09:23.588168  471330 ssh_runner.go:195] Run: cat /version.json
	I1102 14:09:23.588250  471330 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1102 14:09:23.592752  471330 ssh_runner.go:195] Run: systemctl --version
	I1102 14:09:23.682154  471330 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1102 14:09:23.718583  471330 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1102 14:09:23.723010  471330 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1102 14:09:23.723145  471330 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1102 14:09:23.753415  471330 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1102 14:09:23.753442  471330 start.go:496] detecting cgroup driver to use...
	I1102 14:09:23.753504  471330 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1102 14:09:23.753569  471330 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1102 14:09:23.772035  471330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1102 14:09:23.785523  471330 docker.go:218] disabling cri-docker service (if available) ...
	I1102 14:09:23.785624  471330 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1102 14:09:23.805016  471330 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1102 14:09:23.824964  471330 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1102 14:09:23.959253  471330 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1102 14:09:24.092659  471330 docker.go:234] disabling docker service ...
	I1102 14:09:24.092769  471330 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1102 14:09:24.115037  471330 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1102 14:09:24.128831  471330 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1102 14:09:24.249396  471330 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1102 14:09:24.382705  471330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1102 14:09:24.396273  471330 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1102 14:09:24.412171  471330 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1102 14:09:24.412281  471330 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 14:09:24.421593  471330 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1102 14:09:24.421700  471330 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 14:09:24.431320  471330 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 14:09:24.443740  471330 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 14:09:24.452801  471330 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1102 14:09:24.461325  471330 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 14:09:24.469936  471330 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 14:09:24.483520  471330 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 14:09:24.492460  471330 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1102 14:09:24.499926  471330 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1102 14:09:24.507500  471330 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 14:09:24.624921  471330 ssh_runner.go:195] Run: sudo systemctl restart crio
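Note: the sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroupfs cgroup manager, conmon_cgroup = "pod", the unprivileged-port sysctl) before cri-o is restarted. A quick check inside the node container that they all landed:

    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf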
	I1102 14:09:24.758649  471330 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1102 14:09:24.758742  471330 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1102 14:09:24.762735  471330 start.go:564] Will wait 60s for crictl version
	I1102 14:09:24.762850  471330 ssh_runner.go:195] Run: which crictl
	I1102 14:09:24.766449  471330 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1102 14:09:24.790333  471330 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1102 14:09:24.790456  471330 ssh_runner.go:195] Run: crio --version
	I1102 14:09:24.819168  471330 ssh_runner.go:195] Run: crio --version
	I1102 14:09:24.854476  471330 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	I1102 14:09:24.857367  471330 cli_runner.go:164] Run: docker network inspect old-k8s-version-873713 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1102 14:09:24.873405  471330 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1102 14:09:24.877234  471330 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1102 14:09:24.887546  471330 kubeadm.go:884] updating cluster {Name:old-k8s-version-873713 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-873713 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1102 14:09:24.887670  471330 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1102 14:09:24.887734  471330 ssh_runner.go:195] Run: sudo crictl images --output json
	I1102 14:09:24.919271  471330 crio.go:514] all images are preloaded for cri-o runtime.
	I1102 14:09:24.919299  471330 crio.go:433] Images already preloaded, skipping extraction
	I1102 14:09:24.919357  471330 ssh_runner.go:195] Run: sudo crictl images --output json
	I1102 14:09:24.950266  471330 crio.go:514] all images are preloaded for cri-o runtime.
	I1102 14:09:24.950288  471330 cache_images.go:86] Images are preloaded, skipping loading
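Note: `sudo crictl images --output json` is the preload verification: if every image kubeadm will need is already in cri-o's store, the tarball load is skipped, as the two crio.go:514 lines above confirm. To list what the preload put there by tag (the .images[].repoTags path is assumed from crictl's JSON shape):

    sudo crictl images --output json | jq -r '.images[].repoTags[]'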
	I1102 14:09:24.950297  471330 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.28.0 crio true true} ...
	I1102 14:09:24.950383  471330 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-873713 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-873713 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1102 14:09:24.950459  471330 ssh_runner.go:195] Run: crio config
	I1102 14:09:25.020471  471330 cni.go:84] Creating CNI manager for ""
	I1102 14:09:25.020502  471330 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1102 14:09:25.020518  471330 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1102 14:09:25.020588  471330 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-873713 NodeName:old-k8s-version-873713 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1102 14:09:25.020830  471330 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-873713"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
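Note: the kubeadm config above (InitConfiguration + ClusterConfiguration + KubeletConfiguration + KubeProxyConfiguration) is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. Assuming the v1.28 binary ships the `config validate` subcommand, it can be sanity-checked in place before init runs:

    sudo /var/lib/minikube/binaries/v1.28.0/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new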
	
	I1102 14:09:25.020958  471330 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1102 14:09:25.030250  471330 binaries.go:44] Found k8s binaries, skipping transfer
	I1102 14:09:25.030339  471330 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1102 14:09:25.038740  471330 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1102 14:09:25.052542  471330 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1102 14:09:25.066130  471330 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2160 bytes)
	I1102 14:09:25.085599  471330 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1102 14:09:25.089614  471330 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1102 14:09:25.100869  471330 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 14:09:25.219797  471330 ssh_runner.go:195] Run: sudo systemctl start kubelet
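Note: the three scp-from-memory writes above installed the kubelet drop-in (10-kubeadm.conf), the unit file, and kubeadm.yaml.new before the daemon-reload/start pair. To confirm what systemd actually picked up, inside the node:

    systemctl cat kubelet        # unit plus the 10-kubeadm.conf drop-in
    systemctl is-active kubelet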
	I1102 14:09:25.240360  471330 certs.go:69] Setting up /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/old-k8s-version-873713 for IP: 192.168.76.2
	I1102 14:09:25.240429  471330 certs.go:195] generating shared ca certs ...
	I1102 14:09:25.240480  471330 certs.go:227] acquiring lock for ca certs: {Name:mkead50075949a3cdc798f9c0149a2bc2638cbbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 14:09:25.240687  471330 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21808-293314/.minikube/ca.key
	I1102 14:09:25.240771  471330 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21808-293314/.minikube/proxy-client-ca.key
	I1102 14:09:25.240799  471330 certs.go:257] generating profile certs ...
	I1102 14:09:25.240888  471330 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/old-k8s-version-873713/client.key
	I1102 14:09:25.240939  471330 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/old-k8s-version-873713/client.crt with IP's: []
	I1102 14:09:26.097917  471330 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/old-k8s-version-873713/client.crt ...
	I1102 14:09:26.097949  471330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/old-k8s-version-873713/client.crt: {Name:mkaaa2bcfd4066b46a74a1be58beffe2cae7c4a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 14:09:26.098185  471330 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/old-k8s-version-873713/client.key ...
	I1102 14:09:26.098203  471330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/old-k8s-version-873713/client.key: {Name:mkebfd09c914f49f56d4aa3d1d5a89625bcb0f12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 14:09:26.098307  471330 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/old-k8s-version-873713/apiserver.key.766c2d0f
	I1102 14:09:26.098329  471330 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/old-k8s-version-873713/apiserver.crt.766c2d0f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1102 14:09:26.443271  471330 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/old-k8s-version-873713/apiserver.crt.766c2d0f ...
	I1102 14:09:26.443301  471330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/old-k8s-version-873713/apiserver.crt.766c2d0f: {Name:mk3fd6b02b46122a00aeba66e3b998e0f0def4b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 14:09:26.443526  471330 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/old-k8s-version-873713/apiserver.key.766c2d0f ...
	I1102 14:09:26.443562  471330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/old-k8s-version-873713/apiserver.key.766c2d0f: {Name:mk47c334937b94b8d4fc464966947c79886af236 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 14:09:26.443694  471330 certs.go:382] copying /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/old-k8s-version-873713/apiserver.crt.766c2d0f -> /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/old-k8s-version-873713/apiserver.crt
	I1102 14:09:26.443801  471330 certs.go:386] copying /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/old-k8s-version-873713/apiserver.key.766c2d0f -> /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/old-k8s-version-873713/apiserver.key
	I1102 14:09:26.443915  471330 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/old-k8s-version-873713/proxy-client.key
	I1102 14:09:26.443936  471330 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/old-k8s-version-873713/proxy-client.crt with IP's: []
	I1102 14:09:26.681547  471330 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/old-k8s-version-873713/proxy-client.crt ...
	I1102 14:09:26.681580  471330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/old-k8s-version-873713/proxy-client.crt: {Name:mkeec76535714e36b6e112b1457dba7f1d188056 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 14:09:26.681773  471330 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/old-k8s-version-873713/proxy-client.key ...
	I1102 14:09:26.681788  471330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/old-k8s-version-873713/proxy-client.key: {Name:mk83c518f36d1769bf79c6f8f8ebbfea1f71886e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 14:09:26.681983  471330 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/295174.pem (1338 bytes)
	W1102 14:09:26.682027  471330 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-293314/.minikube/certs/295174_empty.pem, impossibly tiny 0 bytes
	I1102 14:09:26.682037  471330 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca-key.pem (1679 bytes)
	I1102 14:09:26.682059  471330 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem (1082 bytes)
	I1102 14:09:26.682080  471330 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/cert.pem (1123 bytes)
	I1102 14:09:26.682104  471330 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/key.pem (1675 bytes)
	I1102 14:09:26.682154  471330 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/files/etc/ssl/certs/2951742.pem (1708 bytes)
	I1102 14:09:26.682850  471330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1102 14:09:26.702669  471330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1102 14:09:26.721726  471330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1102 14:09:26.741084  471330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1102 14:09:26.762696  471330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/old-k8s-version-873713/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1102 14:09:26.783058  471330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/old-k8s-version-873713/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1102 14:09:26.801192  471330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/old-k8s-version-873713/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1102 14:09:26.824566  471330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/old-k8s-version-873713/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1102 14:09:26.845546  471330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/files/etc/ssl/certs/2951742.pem --> /usr/share/ca-certificates/2951742.pem (1708 bytes)
	I1102 14:09:26.865648  471330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1102 14:09:26.884769  471330 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/certs/295174.pem --> /usr/share/ca-certificates/295174.pem (1338 bytes)
	I1102 14:09:26.905825  471330 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1102 14:09:26.928943  471330 ssh_runner.go:195] Run: openssl version
	I1102 14:09:26.936388  471330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1102 14:09:26.946742  471330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1102 14:09:26.951206  471330 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  2 13:13 /usr/share/ca-certificates/minikubeCA.pem
	I1102 14:09:26.951298  471330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1102 14:09:27.003671  471330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1102 14:09:27.013487  471330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/295174.pem && ln -fs /usr/share/ca-certificates/295174.pem /etc/ssl/certs/295174.pem"
	I1102 14:09:27.023227  471330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/295174.pem
	I1102 14:09:27.028090  471330 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  2 13:20 /usr/share/ca-certificates/295174.pem
	I1102 14:09:27.028166  471330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/295174.pem
	I1102 14:09:27.071311  471330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/295174.pem /etc/ssl/certs/51391683.0"
	I1102 14:09:27.087857  471330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2951742.pem && ln -fs /usr/share/ca-certificates/2951742.pem /etc/ssl/certs/2951742.pem"
	I1102 14:09:27.096838  471330 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2951742.pem
	I1102 14:09:27.101431  471330 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  2 13:20 /usr/share/ca-certificates/2951742.pem
	I1102 14:09:27.101495  471330 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2951742.pem
	I1102 14:09:27.146774  471330 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2951742.pem /etc/ssl/certs/3ec20f2e.0"
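The hash-and-symlink pairs above implement OpenSSL's CApath convention: each trusted PEM gets a symlink named after its subject hash (e.g. b5213941.0) so openssl can resolve it by directory lookup. A sketch under the assumption that the openssl binary is on PATH; the PEM path is illustrative and writing into /etc/ssl/certs needs root.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem" // illustrative path
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // ln -fs semantics: replace any stale link
	if err := os.Symlink(pem, link); err != nil {
		panic(err)
	}
	fmt.Println("trusted via", link)
}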
	I1102 14:09:27.154957  471330 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1102 14:09:27.158665  471330 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1102 14:09:27.158724  471330 kubeadm.go:401] StartCluster: {Name:old-k8s-version-873713 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-873713 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1102 14:09:27.158814  471330 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1102 14:09:27.158877  471330 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1102 14:09:27.187494  471330 cri.go:89] found id: ""
	I1102 14:09:27.187605  471330 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1102 14:09:27.196773  471330 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1102 14:09:27.205224  471330 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1102 14:09:27.205294  471330 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1102 14:09:27.213822  471330 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1102 14:09:27.213868  471330 kubeadm.go:158] found existing configuration files:
	
	I1102 14:09:27.213917  471330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1102 14:09:27.222270  471330 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1102 14:09:27.222344  471330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1102 14:09:27.230552  471330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1102 14:09:27.238502  471330 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1102 14:09:27.238584  471330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1102 14:09:27.246218  471330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1102 14:09:27.254695  471330 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1102 14:09:27.254866  471330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1102 14:09:27.263655  471330 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1102 14:09:27.272308  471330 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1102 14:09:27.272409  471330 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
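The grep-then-rm sequence above repeats once per kubeconfig: a file is kept only if it already points at https://control-plane.minikube.internal:8443, and is otherwise removed so kubeadm regenerates it. A compact Go sketch of that sweep (paths copied from the log; running it for real needs root):

package main

import (
	"bytes"
	"fmt"
	"os"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err == nil && bytes.Contains(data, []byte(endpoint)) {
			continue // already points at the expected endpoint: keep it
		}
		os.Remove(f) // missing or stale: clear it (errors ignored, like rm -f)
		fmt.Println("cleared", f)
	}
}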
	I1102 14:09:27.280286  471330 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1102 14:09:27.332673  471330 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1102 14:09:27.332926  471330 kubeadm.go:319] [preflight] Running pre-flight checks
	I1102 14:09:27.375743  471330 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1102 14:09:27.375825  471330 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1102 14:09:27.375911  471330 kubeadm.go:319] OS: Linux
	I1102 14:09:27.375962  471330 kubeadm.go:319] CGROUPS_CPU: enabled
	I1102 14:09:27.376014  471330 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1102 14:09:27.376066  471330 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1102 14:09:27.376118  471330 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1102 14:09:27.376170  471330 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1102 14:09:27.376223  471330 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1102 14:09:27.376287  471330 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1102 14:09:27.376340  471330 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1102 14:09:27.376390  471330 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1102 14:09:27.476653  471330 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1102 14:09:27.476772  471330 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1102 14:09:27.476877  471330 kubeadm.go:319] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1102 14:09:27.640955  471330 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1102 14:09:27.644992  471330 out.go:252]   - Generating certificates and keys ...
	I1102 14:09:27.645153  471330 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1102 14:09:27.645269  471330 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1102 14:09:28.501478  471330 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1102 14:09:29.036213  471330 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1102 14:09:29.357242  471330 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1102 14:09:29.648849  471330 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1102 14:09:29.859577  471330 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1102 14:09:29.859912  471330 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-873713] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1102 14:09:30.428127  471330 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1102 14:09:30.428595  471330 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-873713] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1102 14:09:30.742695  471330 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1102 14:09:31.307525  471330 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1102 14:09:31.518003  471330 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1102 14:09:31.518353  471330 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1102 14:09:32.062166  471330 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1102 14:09:32.334144  471330 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1102 14:09:32.981055  471330 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1102 14:09:33.483560  471330 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1102 14:09:33.484529  471330 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1102 14:09:33.488151  471330 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1102 14:09:33.492275  471330 out.go:252]   - Booting up control plane ...
	I1102 14:09:33.492421  471330 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1102 14:09:33.492507  471330 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1102 14:09:33.493661  471330 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1102 14:09:33.514038  471330 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1102 14:09:33.514143  471330 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1102 14:09:33.514192  471330 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1102 14:09:33.668961  471330 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1102 14:09:41.167826  471330 kubeadm.go:319] [apiclient] All control plane components are healthy after 7.502323 seconds
	I1102 14:09:41.167948  471330 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1102 14:09:41.186129  471330 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1102 14:09:41.720129  471330 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1102 14:09:41.720338  471330 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-873713 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1102 14:09:42.238927  471330 kubeadm.go:319] [bootstrap-token] Using token: itozj3.5jttqruly3u2d4tx
	I1102 14:09:42.241980  471330 out.go:252]   - Configuring RBAC rules ...
	I1102 14:09:42.242118  471330 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1102 14:09:42.248428  471330 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1102 14:09:42.262377  471330 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1102 14:09:42.271559  471330 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1102 14:09:42.276517  471330 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1102 14:09:42.281015  471330 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1102 14:09:42.298881  471330 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1102 14:09:42.622087  471330 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1102 14:09:42.682321  471330 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1102 14:09:42.683469  471330 kubeadm.go:319] 
	I1102 14:09:42.683539  471330 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1102 14:09:42.683545  471330 kubeadm.go:319] 
	I1102 14:09:42.683622  471330 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1102 14:09:42.683626  471330 kubeadm.go:319] 
	I1102 14:09:42.683651  471330 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1102 14:09:42.683709  471330 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1102 14:09:42.683759  471330 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1102 14:09:42.683763  471330 kubeadm.go:319] 
	I1102 14:09:42.683816  471330 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1102 14:09:42.683821  471330 kubeadm.go:319] 
	I1102 14:09:42.683868  471330 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1102 14:09:42.683872  471330 kubeadm.go:319] 
	I1102 14:09:42.683923  471330 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1102 14:09:42.683997  471330 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1102 14:09:42.684064  471330 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1102 14:09:42.684069  471330 kubeadm.go:319] 
	I1102 14:09:42.684152  471330 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1102 14:09:42.684228  471330 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1102 14:09:42.684232  471330 kubeadm.go:319] 
	I1102 14:09:42.684314  471330 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token itozj3.5jttqruly3u2d4tx \
	I1102 14:09:42.684430  471330 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:bd4a1f3bddc85f3fc83315ad33165a30aa1cba7ce55898ef9dad8dcc7e8d0eec \
	I1102 14:09:42.684721  471330 kubeadm.go:319] 	--control-plane 
	I1102 14:09:42.684748  471330 kubeadm.go:319] 
	I1102 14:09:42.684870  471330 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1102 14:09:42.684881  471330 kubeadm.go:319] 
	I1102 14:09:42.684968  471330 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token itozj3.5jttqruly3u2d4tx \
	I1102 14:09:42.685081  471330 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:bd4a1f3bddc85f3fc83315ad33165a30aa1cba7ce55898ef9dad8dcc7e8d0eec 
	I1102 14:09:42.689272  471330 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1102 14:09:42.689391  471330 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1102 14:09:42.689407  471330 cni.go:84] Creating CNI manager for ""
	I1102 14:09:42.689415  471330 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1102 14:09:42.692709  471330 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1102 14:09:42.695599  471330 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1102 14:09:42.702472  471330 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1102 14:09:42.702490  471330 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1102 14:09:42.738212  471330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1102 14:09:43.732196  471330 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1102 14:09:43.732345  471330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 14:09:43.732429  471330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-873713 minikube.k8s.io/updated_at=2025_11_02T14_09_43_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=170a9221ec214abbddb4c7cdac340516a92b239a minikube.k8s.io/name=old-k8s-version-873713 minikube.k8s.io/primary=true
	I1102 14:09:43.759432  471330 ops.go:34] apiserver oom_adj: -16
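The oom_adj probe above reads /proc/$(pgrep kube-apiserver)/oom_adj; the -16 result means the kernel is strongly discouraged from OOM-killing the apiserver. A sketch that approximates the logged command (pgrep -n is an assumption, picking the newest match):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// pgrep -n picks the newest matching process, close enough to the
	// logged $(pgrep kube-apiserver) on a single-apiserver node.
	out, err := exec.Command("pgrep", "-n", "kube-apiserver").Output()
	if err != nil {
		panic(err)
	}
	pid := strings.TrimSpace(string(out))
	score, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		panic(err)
	}
	fmt.Printf("kube-apiserver oom_adj: %s", score)
}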
	I1102 14:09:43.874287  471330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 14:09:44.374793  471330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 14:09:44.875162  471330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 14:09:45.374328  471330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 14:09:45.874792  471330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 14:09:46.374355  471330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 14:09:46.875382  471330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 14:09:47.374884  471330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 14:09:47.874996  471330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 14:09:48.374697  471330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 14:09:48.874342  471330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 14:09:49.374495  471330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 14:09:49.874856  471330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 14:09:50.374394  471330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 14:09:50.874451  471330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 14:09:51.374893  471330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 14:09:51.874599  471330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 14:09:52.374341  471330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 14:09:52.874659  471330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 14:09:53.375177  471330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 14:09:53.874555  471330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 14:09:54.375009  471330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 14:09:54.874859  471330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 14:09:55.375304  471330 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 14:09:55.497581  471330 kubeadm.go:1114] duration metric: took 11.765294889s to wait for elevateKubeSystemPrivileges
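The half-second cadence in the timestamps above is a fixed-interval retry on "kubectl get sa default": once the default ServiceAccount exists, the service-account controller is known to be live, which is what elevateKubeSystemPrivileges waits for. A minimal Go sketch of that loop, assuming kubectl is on PATH and already configured:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// Success means the service-account controller has created the
		// "default" ServiceAccount in the default namespace.
		if exec.Command("kubectl", "get", "sa", "default").Run() == nil {
			fmt.Println("default service account exists")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for the default service account")
}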
	I1102 14:09:55.497606  471330 kubeadm.go:403] duration metric: took 28.338885382s to StartCluster
	I1102 14:09:55.497623  471330 settings.go:142] acquiring lock: {Name:mk95f66b3b15e63f58f8c9085c1ffe67cc396dc1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 14:09:55.497688  471330 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21808-293314/kubeconfig
	I1102 14:09:55.498730  471330 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-293314/kubeconfig: {Name:mke5a65554da8fc0fd6a2ea60bed899d5b38ce09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 14:09:55.498942  471330 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1102 14:09:55.499092  471330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1102 14:09:55.499361  471330 config.go:182] Loaded profile config "old-k8s-version-873713": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1102 14:09:55.499398  471330 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1102 14:09:55.499464  471330 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-873713"
	I1102 14:09:55.499478  471330 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-873713"
	I1102 14:09:55.499499  471330 host.go:66] Checking if "old-k8s-version-873713" exists ...
	I1102 14:09:55.500013  471330 cli_runner.go:164] Run: docker container inspect old-k8s-version-873713 --format={{.State.Status}}
	I1102 14:09:55.500243  471330 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-873713"
	I1102 14:09:55.500268  471330 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-873713"
	I1102 14:09:55.500517  471330 cli_runner.go:164] Run: docker container inspect old-k8s-version-873713 --format={{.State.Status}}
	I1102 14:09:55.502194  471330 out.go:179] * Verifying Kubernetes components...
	I1102 14:09:55.505269  471330 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 14:09:55.532755  471330 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1102 14:09:55.535864  471330 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1102 14:09:55.535887  471330 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1102 14:09:55.535971  471330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-873713
	I1102 14:09:55.543709  471330 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-873713"
	I1102 14:09:55.543757  471330 host.go:66] Checking if "old-k8s-version-873713" exists ...
	I1102 14:09:55.544178  471330 cli_runner.go:164] Run: docker container inspect old-k8s-version-873713 --format={{.State.Status}}
	I1102 14:09:55.588798  471330 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1102 14:09:55.588822  471330 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1102 14:09:55.588902  471330 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-873713
	I1102 14:09:55.590412  471330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33421 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/old-k8s-version-873713/id_rsa Username:docker}
	I1102 14:09:55.619628  471330 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33421 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/old-k8s-version-873713/id_rsa Username:docker}
	I1102 14:09:55.748096  471330 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1102 14:09:55.748325  471330 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1102 14:09:55.771858  471330 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-873713" to be "Ready" ...
	I1102 14:09:55.824021  471330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1102 14:09:55.827226  471330 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1102 14:09:56.300260  471330 start.go:1013] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
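The sed pipeline above injects a hosts stanza mapping host.minikube.internal to the gateway IP into the CoreDNS Corefile just before its forward plugin (it also adds a log directive, omitted here). A Go sketch of the same edit over an abbreviated sample Corefile; the real one comes from the coredns ConfigMap:

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Abbreviated sample Corefile.
	corefile := `.:53 {
        errors
        forward . /etc/resolv.conf
}`
	hostsBlock := `        hosts {
           192.168.76.1 host.minikube.internal
           fallthrough
        }
`
	var out strings.Builder
	for _, line := range strings.Split(corefile, "\n") {
		// Mirror the sed address: insert the hosts block just before
		// the "forward . /etc/resolv.conf" plugin line.
		if strings.HasPrefix(strings.TrimSpace(line), "forward .") {
			out.WriteString(hostsBlock)
		}
		out.WriteString(line + "\n")
	}
	fmt.Print(out.String())
}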
	I1102 14:09:56.804636  471330 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-873713" context rescaled to 1 replicas
	I1102 14:09:56.959135  471330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.131831773s)
	I1102 14:09:56.959395  471330 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.135297351s)
	I1102 14:09:56.978930  471330 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1102 14:09:56.982017  471330 addons.go:515] duration metric: took 1.482594071s for enable addons: enabled=[storage-provisioner default-storageclass]
	W1102 14:09:57.774663  471330 node_ready.go:57] node "old-k8s-version-873713" has "Ready":"False" status (will retry)
	W1102 14:09:59.774854  471330 node_ready.go:57] node "old-k8s-version-873713" has "Ready":"False" status (will retry)
	W1102 14:10:01.775464  471330 node_ready.go:57] node "old-k8s-version-873713" has "Ready":"False" status (will retry)
	W1102 14:10:04.274597  471330 node_ready.go:57] node "old-k8s-version-873713" has "Ready":"False" status (will retry)
	W1102 14:10:06.275558  471330 node_ready.go:57] node "old-k8s-version-873713" has "Ready":"False" status (will retry)
	W1102 14:10:08.776105  471330 node_ready.go:57] node "old-k8s-version-873713" has "Ready":"False" status (will retry)
	I1102 14:10:09.775008  471330 node_ready.go:49] node "old-k8s-version-873713" is "Ready"
	I1102 14:10:09.775038  471330 node_ready.go:38] duration metric: took 14.003146628s for node "old-k8s-version-873713" to be "Ready" ...
	I1102 14:10:09.775053  471330 api_server.go:52] waiting for apiserver process to appear ...
	I1102 14:10:09.775116  471330 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1102 14:10:09.803032  471330 api_server.go:72] duration metric: took 14.304059577s to wait for apiserver process to appear ...
	I1102 14:10:09.803060  471330 api_server.go:88] waiting for apiserver healthz status ...
	I1102 14:10:09.803081  471330 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1102 14:10:09.811938  471330 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1102 14:10:09.813292  471330 api_server.go:141] control plane version: v1.28.0
	I1102 14:10:09.813318  471330 api_server.go:131] duration metric: took 10.250879ms to wait for apiserver health ...
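The health wait above polls https://192.168.76.2:8443/healthz until it answers HTTP 200 with body "ok". A minimal sketch of such a probe; TLS verification is skipped here purely for brevity, whereas minikube presumably validates against the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// TLS verification skipped for brevity only.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for i := 0; i < 30; i++ {
		resp, err := client.Get("https://192.168.76.2:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				fmt.Println("apiserver healthy")
				return
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("gave up waiting for healthz")
}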
	I1102 14:10:09.813328  471330 system_pods.go:43] waiting for kube-system pods to appear ...
	I1102 14:10:09.817462  471330 system_pods.go:59] 8 kube-system pods found
	I1102 14:10:09.817500  471330 system_pods.go:61] "coredns-5dd5756b68-hjsnd" [61c215fa-e3e2-491e-81db-7995536566a4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1102 14:10:09.817507  471330 system_pods.go:61] "etcd-old-k8s-version-873713" [467f1715-9bfe-41da-9d5f-9ba1f7629c01] Running
	I1102 14:10:09.817514  471330 system_pods.go:61] "kindnet-d876b" [7f868940-79bf-4df0-bafb-bf8a7810d49c] Running
	I1102 14:10:09.817519  471330 system_pods.go:61] "kube-apiserver-old-k8s-version-873713" [2abe10a7-07a8-4576-94a2-16fc85ce9882] Running
	I1102 14:10:09.817523  471330 system_pods.go:61] "kube-controller-manager-old-k8s-version-873713" [bd70fb33-3889-4f30-b9a3-dac697d4f07b] Running
	I1102 14:10:09.817528  471330 system_pods.go:61] "kube-proxy-ppcp5" [1dcdf645-0c0e-4a04-9869-5e0828d8021e] Running
	I1102 14:10:09.817532  471330 system_pods.go:61] "kube-scheduler-old-k8s-version-873713" [b48e909c-ae0f-4492-beec-7ca5c9269bdf] Running
	I1102 14:10:09.817539  471330 system_pods.go:61] "storage-provisioner" [31f899f9-8c1d-4e07-a986-4ca1dd646947] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1102 14:10:09.817549  471330 system_pods.go:74] duration metric: took 4.215162ms to wait for pod list to return data ...
	I1102 14:10:09.817565  471330 default_sa.go:34] waiting for default service account to be created ...
	I1102 14:10:09.820098  471330 default_sa.go:45] found service account: "default"
	I1102 14:10:09.820120  471330 default_sa.go:55] duration metric: took 2.549733ms for default service account to be created ...
	I1102 14:10:09.820130  471330 system_pods.go:116] waiting for k8s-apps to be running ...
	I1102 14:10:09.824232  471330 system_pods.go:86] 8 kube-system pods found
	I1102 14:10:09.824266  471330 system_pods.go:89] "coredns-5dd5756b68-hjsnd" [61c215fa-e3e2-491e-81db-7995536566a4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1102 14:10:09.824273  471330 system_pods.go:89] "etcd-old-k8s-version-873713" [467f1715-9bfe-41da-9d5f-9ba1f7629c01] Running
	I1102 14:10:09.824286  471330 system_pods.go:89] "kindnet-d876b" [7f868940-79bf-4df0-bafb-bf8a7810d49c] Running
	I1102 14:10:09.824294  471330 system_pods.go:89] "kube-apiserver-old-k8s-version-873713" [2abe10a7-07a8-4576-94a2-16fc85ce9882] Running
	I1102 14:10:09.824300  471330 system_pods.go:89] "kube-controller-manager-old-k8s-version-873713" [bd70fb33-3889-4f30-b9a3-dac697d4f07b] Running
	I1102 14:10:09.824314  471330 system_pods.go:89] "kube-proxy-ppcp5" [1dcdf645-0c0e-4a04-9869-5e0828d8021e] Running
	I1102 14:10:09.824319  471330 system_pods.go:89] "kube-scheduler-old-k8s-version-873713" [b48e909c-ae0f-4492-beec-7ca5c9269bdf] Running
	I1102 14:10:09.824330  471330 system_pods.go:89] "storage-provisioner" [31f899f9-8c1d-4e07-a986-4ca1dd646947] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1102 14:10:09.824355  471330 retry.go:31] will retry after 304.340665ms: missing components: kube-dns
	I1102 14:10:10.132862  471330 system_pods.go:86] 8 kube-system pods found
	I1102 14:10:10.132896  471330 system_pods.go:89] "coredns-5dd5756b68-hjsnd" [61c215fa-e3e2-491e-81db-7995536566a4] Running
	I1102 14:10:10.132904  471330 system_pods.go:89] "etcd-old-k8s-version-873713" [467f1715-9bfe-41da-9d5f-9ba1f7629c01] Running
	I1102 14:10:10.132908  471330 system_pods.go:89] "kindnet-d876b" [7f868940-79bf-4df0-bafb-bf8a7810d49c] Running
	I1102 14:10:10.132913  471330 system_pods.go:89] "kube-apiserver-old-k8s-version-873713" [2abe10a7-07a8-4576-94a2-16fc85ce9882] Running
	I1102 14:10:10.132919  471330 system_pods.go:89] "kube-controller-manager-old-k8s-version-873713" [bd70fb33-3889-4f30-b9a3-dac697d4f07b] Running
	I1102 14:10:10.132923  471330 system_pods.go:89] "kube-proxy-ppcp5" [1dcdf645-0c0e-4a04-9869-5e0828d8021e] Running
	I1102 14:10:10.132929  471330 system_pods.go:89] "kube-scheduler-old-k8s-version-873713" [b48e909c-ae0f-4492-beec-7ca5c9269bdf] Running
	I1102 14:10:10.132933  471330 system_pods.go:89] "storage-provisioner" [31f899f9-8c1d-4e07-a986-4ca1dd646947] Running
	I1102 14:10:10.132942  471330 system_pods.go:126] duration metric: took 312.804046ms to wait for k8s-apps to be running ...
	I1102 14:10:10.132955  471330 system_svc.go:44] waiting for kubelet service to be running ....
	I1102 14:10:10.133014  471330 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1102 14:10:10.152492  471330 system_svc.go:56] duration metric: took 19.516449ms WaitForService to wait for kubelet
	I1102 14:10:10.152535  471330 kubeadm.go:587] duration metric: took 14.653567818s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1102 14:10:10.152581  471330 node_conditions.go:102] verifying NodePressure condition ...
	I1102 14:10:10.156070  471330 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1102 14:10:10.156103  471330 node_conditions.go:123] node cpu capacity is 2
	I1102 14:10:10.156118  471330 node_conditions.go:105] duration metric: took 3.525009ms to run NodePressure ...
	I1102 14:10:10.156131  471330 start.go:242] waiting for startup goroutines ...
	I1102 14:10:10.156138  471330 start.go:247] waiting for cluster config update ...
	I1102 14:10:10.156150  471330 start.go:256] writing updated cluster config ...
	I1102 14:10:10.156448  471330 ssh_runner.go:195] Run: rm -f paused
	I1102 14:10:10.160374  471330 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1102 14:10:10.165147  471330 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-hjsnd" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 14:10:10.172045  471330 pod_ready.go:94] pod "coredns-5dd5756b68-hjsnd" is "Ready"
	I1102 14:10:10.172085  471330 pod_ready.go:86] duration metric: took 6.905991ms for pod "coredns-5dd5756b68-hjsnd" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 14:10:10.175665  471330 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-873713" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 14:10:10.181357  471330 pod_ready.go:94] pod "etcd-old-k8s-version-873713" is "Ready"
	I1102 14:10:10.181389  471330 pod_ready.go:86] duration metric: took 5.694702ms for pod "etcd-old-k8s-version-873713" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 14:10:10.184976  471330 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-873713" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 14:10:10.190986  471330 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-873713" is "Ready"
	I1102 14:10:10.191067  471330 pod_ready.go:86] duration metric: took 6.061138ms for pod "kube-apiserver-old-k8s-version-873713" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 14:10:10.194763  471330 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-873713" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 14:10:10.564269  471330 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-873713" is "Ready"
	I1102 14:10:10.564298  471330 pod_ready.go:86] duration metric: took 369.506427ms for pod "kube-controller-manager-old-k8s-version-873713" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 14:10:10.765220  471330 pod_ready.go:83] waiting for pod "kube-proxy-ppcp5" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 14:10:11.164767  471330 pod_ready.go:94] pod "kube-proxy-ppcp5" is "Ready"
	I1102 14:10:11.164802  471330 pod_ready.go:86] duration metric: took 399.557452ms for pod "kube-proxy-ppcp5" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 14:10:11.365664  471330 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-873713" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 14:10:11.764311  471330 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-873713" is "Ready"
	I1102 14:10:11.764334  471330 pod_ready.go:86] duration metric: took 398.643704ms for pod "kube-scheduler-old-k8s-version-873713" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 14:10:11.764347  471330 pod_ready.go:40] duration metric: took 1.603937083s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
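Each pod_ready wait above amounts to checking the pod's Ready condition. A sketch that does the same through kubectl's jsonpath output rather than minikube's own client-go calls; the pod name is taken from the log and the selector handling is simplified:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// podReady shells out to kubectl and reports whether the pod's Ready
// condition is "True".
func podReady(ns, name string) bool {
	out, err := exec.Command("kubectl", "-n", ns, "get", "pod", name,
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	return err == nil && strings.TrimSpace(string(out)) == "True"
}

func main() {
	for i := 0; i < 60; i++ {
		if podReady("kube-system", "coredns-5dd5756b68-hjsnd") {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for Ready")
}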
	I1102 14:10:11.828486  471330 start.go:628] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1102 14:10:11.831713  471330 out.go:203] 
	W1102 14:10:11.834569  471330 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1102 14:10:11.837587  471330 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1102 14:10:11.841473  471330 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-873713" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 02 14:10:09 old-k8s-version-873713 crio[872]: time="2025-11-02T14:10:09.719959417Z" level=info msg="Created container c6caf1c024f323ce3a1d93c9a0707f6fc6d7d692cdbcc1137f8a40d5ea5ef961: kube-system/coredns-5dd5756b68-hjsnd/coredns" id=30d3381d-a712-418c-bf6a-82d63d1826c4 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 14:10:09 old-k8s-version-873713 crio[872]: time="2025-11-02T14:10:09.722852225Z" level=info msg="Starting container: c6caf1c024f323ce3a1d93c9a0707f6fc6d7d692cdbcc1137f8a40d5ea5ef961" id=31adeab8-f933-4f4f-bded-46c48ed1f293 name=/runtime.v1.RuntimeService/StartContainer
	Nov 02 14:10:09 old-k8s-version-873713 crio[872]: time="2025-11-02T14:10:09.728887319Z" level=info msg="Started container" PID=1954 containerID=c6caf1c024f323ce3a1d93c9a0707f6fc6d7d692cdbcc1137f8a40d5ea5ef961 description=kube-system/coredns-5dd5756b68-hjsnd/coredns id=31adeab8-f933-4f4f-bded-46c48ed1f293 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0dfa943ce0661aa8951aa5c75ab8ad369604028ad20a018aa534c76fbafe2365
	Nov 02 14:10:12 old-k8s-version-873713 crio[872]: time="2025-11-02T14:10:12.353027217Z" level=info msg="Running pod sandbox: default/busybox/POD" id=7203423d-9b31-427b-b91f-6cdf39231910 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 02 14:10:12 old-k8s-version-873713 crio[872]: time="2025-11-02T14:10:12.353103837Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 14:10:12 old-k8s-version-873713 crio[872]: time="2025-11-02T14:10:12.358437422Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:34696a699cfd8f065ac11eba553e90ff7d6192bb0c946442e16e39dfe3d9715b UID:96331336-0f9f-4a4a-aecf-1aac5a7191da NetNS:/var/run/netns/566ccf8a-2dc6-4f4f-aa8c-2fd9099ee07b Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40025ea8c0}] Aliases:map[]}"
	Nov 02 14:10:12 old-k8s-version-873713 crio[872]: time="2025-11-02T14:10:12.359273496Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 02 14:10:12 old-k8s-version-873713 crio[872]: time="2025-11-02T14:10:12.370870295Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:34696a699cfd8f065ac11eba553e90ff7d6192bb0c946442e16e39dfe3d9715b UID:96331336-0f9f-4a4a-aecf-1aac5a7191da NetNS:/var/run/netns/566ccf8a-2dc6-4f4f-aa8c-2fd9099ee07b Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40025ea8c0}] Aliases:map[]}"
	Nov 02 14:10:12 old-k8s-version-873713 crio[872]: time="2025-11-02T14:10:12.37104212Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 02 14:10:12 old-k8s-version-873713 crio[872]: time="2025-11-02T14:10:12.374067893Z" level=info msg="Ran pod sandbox 34696a699cfd8f065ac11eba553e90ff7d6192bb0c946442e16e39dfe3d9715b with infra container: default/busybox/POD" id=7203423d-9b31-427b-b91f-6cdf39231910 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 02 14:10:12 old-k8s-version-873713 crio[872]: time="2025-11-02T14:10:12.375394694Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=8344dc14-6fb7-45d1-96ed-72373df1c0b0 name=/runtime.v1.ImageService/ImageStatus
	Nov 02 14:10:12 old-k8s-version-873713 crio[872]: time="2025-11-02T14:10:12.375557189Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=8344dc14-6fb7-45d1-96ed-72373df1c0b0 name=/runtime.v1.ImageService/ImageStatus
	Nov 02 14:10:12 old-k8s-version-873713 crio[872]: time="2025-11-02T14:10:12.375610941Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=8344dc14-6fb7-45d1-96ed-72373df1c0b0 name=/runtime.v1.ImageService/ImageStatus
	Nov 02 14:10:12 old-k8s-version-873713 crio[872]: time="2025-11-02T14:10:12.378138775Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=0275de07-9964-481a-bf58-a41d803a31e0 name=/runtime.v1.ImageService/PullImage
	Nov 02 14:10:12 old-k8s-version-873713 crio[872]: time="2025-11-02T14:10:12.38102556Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 02 14:10:14 old-k8s-version-873713 crio[872]: time="2025-11-02T14:10:14.55591575Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=0275de07-9964-481a-bf58-a41d803a31e0 name=/runtime.v1.ImageService/PullImage
	Nov 02 14:10:14 old-k8s-version-873713 crio[872]: time="2025-11-02T14:10:14.556900256Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=a737195b-d606-4863-934d-638fe8b87b0b name=/runtime.v1.ImageService/ImageStatus
	Nov 02 14:10:14 old-k8s-version-873713 crio[872]: time="2025-11-02T14:10:14.558599145Z" level=info msg="Creating container: default/busybox/busybox" id=4c0b2110-e6b0-4352-a381-2b4becb4c528 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 14:10:14 old-k8s-version-873713 crio[872]: time="2025-11-02T14:10:14.558779388Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 14:10:14 old-k8s-version-873713 crio[872]: time="2025-11-02T14:10:14.565540304Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 14:10:14 old-k8s-version-873713 crio[872]: time="2025-11-02T14:10:14.566040319Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 14:10:14 old-k8s-version-873713 crio[872]: time="2025-11-02T14:10:14.581241603Z" level=info msg="Created container 420588aebac3119ed238bd2e1721df280a0fcee85f27107b0289a42e83cd313a: default/busybox/busybox" id=4c0b2110-e6b0-4352-a381-2b4becb4c528 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 14:10:14 old-k8s-version-873713 crio[872]: time="2025-11-02T14:10:14.582117735Z" level=info msg="Starting container: 420588aebac3119ed238bd2e1721df280a0fcee85f27107b0289a42e83cd313a" id=33e58be7-91fd-4ccb-b9cf-d8266ab1eeaa name=/runtime.v1.RuntimeService/StartContainer
	Nov 02 14:10:14 old-k8s-version-873713 crio[872]: time="2025-11-02T14:10:14.583930586Z" level=info msg="Started container" PID=2008 containerID=420588aebac3119ed238bd2e1721df280a0fcee85f27107b0289a42e83cd313a description=default/busybox/busybox id=33e58be7-91fd-4ccb-b9cf-d8266ab1eeaa name=/runtime.v1.RuntimeService/StartContainer sandboxID=34696a699cfd8f065ac11eba553e90ff7d6192bb0c946442e16e39dfe3d9715b
	Nov 02 14:10:21 old-k8s-version-873713 crio[872]: time="2025-11-02T14:10:21.232448164Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	420588aebac31       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago       Running             busybox                   0                   34696a699cfd8       busybox                                          default
	c6caf1c024f32       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                      12 seconds ago      Running             coredns                   0                   0dfa943ce0661       coredns-5dd5756b68-hjsnd                         kube-system
	57c5dd749b11f       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      13 seconds ago      Running             storage-provisioner       0                   26c8e4d643fd2       storage-provisioner                              kube-system
	06ec41fa9b97d       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1    23 seconds ago      Running             kindnet-cni               0                   a384007002b3c       kindnet-d876b                                    kube-system
	5076cf52e7f77       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                      26 seconds ago      Running             kube-proxy                0                   5ab0c93e55761       kube-proxy-ppcp5                                 kube-system
	8d947c37368ab       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                      47 seconds ago      Running             etcd                      0                   16abe57f01a89       etcd-old-k8s-version-873713                      kube-system
	7bfce358c5396       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                      47 seconds ago      Running             kube-apiserver            0                   15d48699cf6ff       kube-apiserver-old-k8s-version-873713            kube-system
	d4d22391497a3       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                      47 seconds ago      Running             kube-scheduler            0                   52d88361632ec       kube-scheduler-old-k8s-version-873713            kube-system
	d3a2de04e7b00       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                      47 seconds ago      Running             kube-controller-manager   0                   5cddd94328f16       kube-controller-manager-old-k8s-version-873713   kube-system
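	
	Note: the table above is CRI-level state as crictl reports it. A sketch to reproduce it on the node (crictl ships in the kicbase image; profile name taken from this run):
	
	  # List all CRI containers, running and exited
	  out/minikube-linux-arm64 -p old-k8s-version-873713 ssh -- sudo crictl ps -a -o table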
	
	
	==> coredns [c6caf1c024f323ce3a1d93c9a0707f6fc6d7d692cdbcc1137f8a40d5ea5ef961] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:54870 - 37105 "HINFO IN 4952098071158573940.7008170972539734263. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.034940279s
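	
	Note: the HINFO query for a long random name is CoreDNS's loop-plugin self-probe; NXDOMAIN here means no forwarding loop was detected. A sketch to fetch the same logs, assuming the standard k8s-app=kube-dns label:
	
	  kubectl --context old-k8s-version-873713 -n kube-system logs -l k8s-app=kube-dns --tail=20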
	
	
	==> describe nodes <==
	Name:               old-k8s-version-873713
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-873713
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=170a9221ec214abbddb4c7cdac340516a92b239a
	                    minikube.k8s.io/name=old-k8s-version-873713
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_02T14_09_43_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 02 Nov 2025 14:09:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-873713
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 02 Nov 2025 14:10:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 02 Nov 2025 14:10:13 +0000   Sun, 02 Nov 2025 14:09:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 02 Nov 2025 14:10:13 +0000   Sun, 02 Nov 2025 14:09:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 02 Nov 2025 14:10:13 +0000   Sun, 02 Nov 2025 14:09:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 02 Nov 2025 14:10:13 +0000   Sun, 02 Nov 2025 14:10:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-873713
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                316cad18-282c-471f-8314-7b2e61711c14
	  Boot ID:                    4c303cf4-c0b6-4c0d-aa1a-d7509feb15e7
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-5dd5756b68-hjsnd                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     27s
	  kube-system                 etcd-old-k8s-version-873713                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         42s
	  kube-system                 kindnet-d876b                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-old-k8s-version-873713             250m (12%)    0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 kube-controller-manager-old-k8s-version-873713    200m (10%)    0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-proxy-ppcp5                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-old-k8s-version-873713             100m (5%)     0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 26s   kube-proxy       
	  Normal  Starting                 40s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  40s   kubelet          Node old-k8s-version-873713 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    40s   kubelet          Node old-k8s-version-873713 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     40s   kubelet          Node old-k8s-version-873713 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           28s   node-controller  Node old-k8s-version-873713 event: Registered Node old-k8s-version-873713 in Controller
	  Normal  NodeReady                13s   kubelet          Node old-k8s-version-873713 status is now: NodeReady
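	
	Note: this dump is plain kubectl output and can be re-run against the same cluster (context name from this report):
	
	  kubectl --context old-k8s-version-873713 describe node old-k8s-version-873713
	  # Quick readiness check: prints "True" when the Ready condition holds
	  kubectl --context old-k8s-version-873713 get node old-k8s-version-873713 \
	    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'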
	
	
	==> dmesg <==
	[Nov 2 13:45] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:49] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:50] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:51] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:52] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:54] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:55] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:56] overlayfs: idmapped layers are currently not supported
	[  +3.515963] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:57] overlayfs: idmapped layers are currently not supported
	[ +24.836033] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:58] overlayfs: idmapped layers are currently not supported
	[ +23.362553] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:59] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:01] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:02] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:03] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:04] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:06] overlayfs: idmapped layers are currently not supported
	[ +50.469589] kauditd_printk_skb: 8 callbacks suppressed
	[Nov 2 14:07] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:08] overlayfs: idmapped layers are currently not supported
	[ +11.089512] overlayfs: idmapped layers are currently not supported
	[ +33.821233] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:09] overlayfs: idmapped layers are currently not supported
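	
	Note: the repeated overlayfs lines are kernel warnings that appear as container filesystems are mounted on this 5.15 kernel; they do not indicate a failure. A sketch to read the ring buffer without them:
	
	  out/minikube-linux-arm64 -p old-k8s-version-873713 ssh -- sudo dmesg --ctime | grep -v 'idmapped layers'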
	
	
	==> etcd [8d947c37368ab08f807260ad7919edfa351c47800262b943a9b5914863a812d9] <==
	{"level":"info","ts":"2025-11-02T14:09:35.025678Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-02T14:09:35.025755Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-02T14:09:35.025926Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-02T14:09:35.026004Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-02T14:09:35.026037Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-02T14:09:35.026285Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-11-02T14:09:35.026393Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-11-02T14:09:35.468093Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2025-11-02T14:09:35.468203Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-11-02T14:09:35.468267Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2025-11-02T14:09:35.468306Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2025-11-02T14:09:35.468342Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-11-02T14:09:35.468382Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2025-11-02T14:09:35.468422Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-11-02T14:09:35.470785Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-873713 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-02T14:09:35.471035Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-02T14:09:35.474624Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-02T14:09:35.475728Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-11-02T14:09:35.475825Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-02T14:09:35.476669Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-02T14:09:35.480066Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-02T14:09:35.504793Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-02T14:09:35.504928Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-02T14:09:35.480093Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-02T14:09:35.513054Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 14:10:22 up  2:52,  0 user,  load average: 2.81, 3.52, 2.82
	Linux old-k8s-version-873713 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [06ec41fa9b97df172776dfe20c554c50f2650f816ac504564a6fb7f21f49e797] <==
	I1102 14:09:59.010921       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1102 14:09:59.011324       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1102 14:09:59.011516       1 main.go:148] setting mtu 1500 for CNI 
	I1102 14:09:59.011563       1 main.go:178] kindnetd IP family: "ipv4"
	I1102 14:09:59.011604       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-02T14:09:59Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1102 14:09:59.213064       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1102 14:09:59.213091       1 controller.go:381] "Waiting for informer caches to sync"
	I1102 14:09:59.213100       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1102 14:09:59.213199       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1102 14:09:59.513482       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1102 14:09:59.513623       1 metrics.go:72] Registering metrics
	I1102 14:09:59.513698       1 controller.go:711] "Syncing nftables rules"
	I1102 14:10:09.218705       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1102 14:10:09.218762       1 main.go:301] handling current node
	I1102 14:10:19.216452       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1102 14:10:19.216487       1 main.go:301] handling current node
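	
	Note: with a single node, kindnet only reports "handling current node" on each sync, and the NRI "no such file or directory" line is a non-fatal fallback when the runtime exposes no NRI socket (the controller keeps running above). To confirm the pod CIDR it is programming:
	
	  kubectl --context old-k8s-version-873713 get node old-k8s-version-873713 -o jsonpath='{.spec.podCIDR}'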
	
	
	==> kube-apiserver [7bfce358c5396432ab19ef789a78c38e4ac235a270896e93433e7be03bc94c61] <==
	I1102 14:09:39.189508       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1102 14:09:39.189534       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1102 14:09:39.195078       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1102 14:09:39.204164       1 controller.go:624] quota admission added evaluator for: namespaces
	I1102 14:09:39.253235       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1102 14:09:39.257665       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1102 14:09:39.257699       1 aggregator.go:166] initial CRD sync complete...
	I1102 14:09:39.257706       1 autoregister_controller.go:141] Starting autoregister controller
	I1102 14:09:39.257712       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1102 14:09:39.257718       1 cache.go:39] Caches are synced for autoregister controller
	I1102 14:09:39.960284       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1102 14:09:39.965218       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1102 14:09:39.965246       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1102 14:09:40.681995       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1102 14:09:40.737400       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1102 14:09:40.826111       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1102 14:09:40.835722       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1102 14:09:40.836975       1 controller.go:624] quota admission added evaluator for: endpoints
	I1102 14:09:40.842470       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1102 14:09:41.142243       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1102 14:09:42.605077       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1102 14:09:42.620612       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1102 14:09:42.635564       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1102 14:09:54.464745       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1102 14:09:55.143103       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
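	
	Note: each "quota admission added evaluator" line appears to fire the first time a resource type passes through quota admission, so the sequence above effectively traces cluster bootstrap (namespaces, leases, endpoints, deployments, daemonsets, replicasets, controllerrevisions). A sketch to list the control-plane pods behind these logs, assuming kubeadm's usual labels:
	
	  kubectl --context old-k8s-version-873713 -n kube-system get pods -l tier=control-plane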
	
	
	==> kube-controller-manager [d3a2de04e7b004fec34d7b6ca22c97963b8fd3ddff101a708ccb7350b4dd9aac] <==
	I1102 14:09:54.539195       1 shared_informer.go:318] Caches are synced for cronjob
	I1102 14:09:54.586570       1 shared_informer.go:318] Caches are synced for job
	I1102 14:09:54.587635       1 shared_informer.go:318] Caches are synced for TTL after finished
	I1102 14:09:54.592480       1 shared_informer.go:318] Caches are synced for resource quota
	I1102 14:09:54.935561       1 shared_informer.go:318] Caches are synced for garbage collector
	I1102 14:09:54.935592       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1102 14:09:54.957563       1 shared_informer.go:318] Caches are synced for garbage collector
	I1102 14:09:55.159573       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-d876b"
	I1102 14:09:55.164495       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-ppcp5"
	I1102 14:09:55.367682       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-d9qsh"
	I1102 14:09:55.388482       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-hjsnd"
	I1102 14:09:55.411297       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="936.262237ms"
	I1102 14:09:55.423501       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="12.148011ms"
	I1102 14:09:55.423589       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="48.591µs"
	I1102 14:09:56.349218       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1102 14:09:56.381811       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-d9qsh"
	I1102 14:09:56.401335       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="52.850701ms"
	I1102 14:09:56.429445       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="28.058496ms"
	I1102 14:09:56.440128       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="95.41µs"
	I1102 14:10:09.326714       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="121.715µs"
	I1102 14:10:09.363798       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="87.385µs"
	I1102 14:10:09.386510       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1102 14:10:09.984673       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="84.317µs"
	I1102 14:10:10.039188       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="13.884828ms"
	I1102 14:10:10.039385       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="153.084µs"
	
	
	==> kube-proxy [5076cf52e7f77c0a5ac35dcc940d16218bf5c36b038622b14f532a09bf70ee84] <==
	I1102 14:09:56.470403       1 server_others.go:69] "Using iptables proxy"
	I1102 14:09:56.489894       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1102 14:09:56.536087       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1102 14:09:56.538387       1 server_others.go:152] "Using iptables Proxier"
	I1102 14:09:56.538478       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1102 14:09:56.538511       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1102 14:09:56.538568       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1102 14:09:56.540009       1 server.go:846] "Version info" version="v1.28.0"
	I1102 14:09:56.540251       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1102 14:09:56.541044       1 config.go:188] "Starting service config controller"
	I1102 14:09:56.541110       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1102 14:09:56.541160       1 config.go:97] "Starting endpoint slice config controller"
	I1102 14:09:56.541190       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1102 14:09:56.542536       1 config.go:315] "Starting node config controller"
	I1102 14:09:56.542604       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1102 14:09:56.641928       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1102 14:09:56.641972       1 shared_informer.go:318] Caches are synced for service config
	I1102 14:09:56.643704       1 shared_informer.go:318] Caches are synced for node config
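	
	Note: kube-proxy came up in iptables mode with no IPv6 cluster CIDR, hence the no-op detect-local for that family. Its mode is driven by the kubeadm-managed config map; a sketch to inspect it (config map name is kubeadm's default):
	
	  kubectl --context old-k8s-version-873713 -n kube-system get configmap kube-proxy -o yaml | grep 'mode:'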
	
	
	==> kube-scheduler [d4d22391497a31b07fa62a4b01e3676b47de80da8734a24251ffbc54f893c7ce] <==
	W1102 14:09:39.238987       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1102 14:09:39.239397       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1102 14:09:40.041846       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1102 14:09:40.041982       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1102 14:09:40.107172       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1102 14:09:40.107308       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1102 14:09:40.198642       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1102 14:09:40.198683       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1102 14:09:40.198702       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1102 14:09:40.198713       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1102 14:09:40.251873       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1102 14:09:40.251979       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1102 14:09:40.264130       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1102 14:09:40.264175       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1102 14:09:40.281095       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1102 14:09:40.281134       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1102 14:09:40.365352       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1102 14:09:40.365399       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1102 14:09:40.436391       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1102 14:09:40.436494       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1102 14:09:40.438492       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1102 14:09:40.438584       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1102 14:09:40.674033       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1102 14:09:40.674071       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1102 14:09:42.413958       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
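	
	Note: the burst of "forbidden" list/watch errors is startup ordering: the scheduler's informers race the RBAC bootstrap, and they stop once its caches sync at 14:09:42, as the last line shows. A sketch to confirm they did not recur, assuming the static pod name from this run:
	
	  kubectl --context old-k8s-version-873713 -n kube-system logs kube-scheduler-old-k8s-version-873713 --since=1m | grep -c forbidden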
	
	
	==> kubelet <==
	Nov 02 14:09:55 old-k8s-version-873713 kubelet[1391]: I1102 14:09:55.233161    1391 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1dcdf645-0c0e-4a04-9869-5e0828d8021e-lib-modules\") pod \"kube-proxy-ppcp5\" (UID: \"1dcdf645-0c0e-4a04-9869-5e0828d8021e\") " pod="kube-system/kube-proxy-ppcp5"
	Nov 02 14:09:55 old-k8s-version-873713 kubelet[1391]: I1102 14:09:55.233238    1391 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7f868940-79bf-4df0-bafb-bf8a7810d49c-lib-modules\") pod \"kindnet-d876b\" (UID: \"7f868940-79bf-4df0-bafb-bf8a7810d49c\") " pod="kube-system/kindnet-d876b"
	Nov 02 14:09:55 old-k8s-version-873713 kubelet[1391]: I1102 14:09:55.233323    1391 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1dcdf645-0c0e-4a04-9869-5e0828d8021e-kube-proxy\") pod \"kube-proxy-ppcp5\" (UID: \"1dcdf645-0c0e-4a04-9869-5e0828d8021e\") " pod="kube-system/kube-proxy-ppcp5"
	Nov 02 14:09:55 old-k8s-version-873713 kubelet[1391]: E1102 14:09:55.352482    1391 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 02 14:09:55 old-k8s-version-873713 kubelet[1391]: E1102 14:09:55.352687    1391 projected.go:198] Error preparing data for projected volume kube-api-access-5nqqg for pod kube-system/kube-proxy-ppcp5: configmap "kube-root-ca.crt" not found
	Nov 02 14:09:55 old-k8s-version-873713 kubelet[1391]: E1102 14:09:55.352856    1391 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1dcdf645-0c0e-4a04-9869-5e0828d8021e-kube-api-access-5nqqg podName:1dcdf645-0c0e-4a04-9869-5e0828d8021e nodeName:}" failed. No retries permitted until 2025-11-02 14:09:55.852816344 +0000 UTC m=+13.289304451 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-5nqqg" (UniqueName: "kubernetes.io/projected/1dcdf645-0c0e-4a04-9869-5e0828d8021e-kube-api-access-5nqqg") pod "kube-proxy-ppcp5" (UID: "1dcdf645-0c0e-4a04-9869-5e0828d8021e") : configmap "kube-root-ca.crt" not found
	Nov 02 14:09:55 old-k8s-version-873713 kubelet[1391]: E1102 14:09:55.354660    1391 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 02 14:09:55 old-k8s-version-873713 kubelet[1391]: E1102 14:09:55.354812    1391 projected.go:198] Error preparing data for projected volume kube-api-access-drncv for pod kube-system/kindnet-d876b: configmap "kube-root-ca.crt" not found
	Nov 02 14:09:55 old-k8s-version-873713 kubelet[1391]: E1102 14:09:55.354934    1391 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7f868940-79bf-4df0-bafb-bf8a7810d49c-kube-api-access-drncv podName:7f868940-79bf-4df0-bafb-bf8a7810d49c nodeName:}" failed. No retries permitted until 2025-11-02 14:09:55.854912724 +0000 UTC m=+13.291400832 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-drncv" (UniqueName: "kubernetes.io/projected/7f868940-79bf-4df0-bafb-bf8a7810d49c-kube-api-access-drncv") pod "kindnet-d876b" (UID: "7f868940-79bf-4df0-bafb-bf8a7810d49c") : configmap "kube-root-ca.crt" not found
	Nov 02 14:09:56 old-k8s-version-873713 kubelet[1391]: W1102 14:09:56.095972    1391 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/4ee7404b4a6ac58ace4f7c3005215815a212bd879029eed27a35560a4c57fe56/crio-a384007002b3c5e9062b2a917d3ef392f0f7e7d1a7a3a7245c3833e94f5e18ce WatchSource:0}: Error finding container a384007002b3c5e9062b2a917d3ef392f0f7e7d1a7a3a7245c3833e94f5e18ce: Status 404 returned error can't find the container with id a384007002b3c5e9062b2a917d3ef392f0f7e7d1a7a3a7245c3833e94f5e18ce
	Nov 02 14:09:56 old-k8s-version-873713 kubelet[1391]: W1102 14:09:56.097962    1391 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/4ee7404b4a6ac58ace4f7c3005215815a212bd879029eed27a35560a4c57fe56/crio-5ab0c93e557613808507f00ba42026f5a148334942e1eeffdb543bce31f34801 WatchSource:0}: Error finding container 5ab0c93e557613808507f00ba42026f5a148334942e1eeffdb543bce31f34801: Status 404 returned error can't find the container with id 5ab0c93e557613808507f00ba42026f5a148334942e1eeffdb543bce31f34801
	Nov 02 14:09:56 old-k8s-version-873713 kubelet[1391]: I1102 14:09:56.955565    1391 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-ppcp5" podStartSLOduration=1.955479422 podCreationTimestamp="2025-11-02 14:09:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-02 14:09:56.953976874 +0000 UTC m=+14.390464999" watchObservedRunningTime="2025-11-02 14:09:56.955479422 +0000 UTC m=+14.391967530"
	Nov 02 14:10:02 old-k8s-version-873713 kubelet[1391]: I1102 14:10:02.804262    1391 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-d876b" podStartSLOduration=5.054200741 podCreationTimestamp="2025-11-02 14:09:55 +0000 UTC" firstStartedPulling="2025-11-02 14:09:56.099767568 +0000 UTC m=+13.536255676" lastFinishedPulling="2025-11-02 14:09:58.849784278 +0000 UTC m=+16.286272394" observedRunningTime="2025-11-02 14:09:58.9532252 +0000 UTC m=+16.389713316" watchObservedRunningTime="2025-11-02 14:10:02.804217459 +0000 UTC m=+20.240705575"
	Nov 02 14:10:09 old-k8s-version-873713 kubelet[1391]: I1102 14:10:09.278046    1391 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 02 14:10:09 old-k8s-version-873713 kubelet[1391]: I1102 14:10:09.317517    1391 topology_manager.go:215] "Topology Admit Handler" podUID="31f899f9-8c1d-4e07-a986-4ca1dd646947" podNamespace="kube-system" podName="storage-provisioner"
	Nov 02 14:10:09 old-k8s-version-873713 kubelet[1391]: I1102 14:10:09.324558    1391 topology_manager.go:215] "Topology Admit Handler" podUID="61c215fa-e3e2-491e-81db-7995536566a4" podNamespace="kube-system" podName="coredns-5dd5756b68-hjsnd"
	Nov 02 14:10:09 old-k8s-version-873713 kubelet[1391]: I1102 14:10:09.385933    1391 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/31f899f9-8c1d-4e07-a986-4ca1dd646947-tmp\") pod \"storage-provisioner\" (UID: \"31f899f9-8c1d-4e07-a986-4ca1dd646947\") " pod="kube-system/storage-provisioner"
	Nov 02 14:10:09 old-k8s-version-873713 kubelet[1391]: I1102 14:10:09.386171    1391 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x248f\" (UniqueName: \"kubernetes.io/projected/31f899f9-8c1d-4e07-a986-4ca1dd646947-kube-api-access-x248f\") pod \"storage-provisioner\" (UID: \"31f899f9-8c1d-4e07-a986-4ca1dd646947\") " pod="kube-system/storage-provisioner"
	Nov 02 14:10:09 old-k8s-version-873713 kubelet[1391]: I1102 14:10:09.386284    1391 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sbl2\" (UniqueName: \"kubernetes.io/projected/61c215fa-e3e2-491e-81db-7995536566a4-kube-api-access-5sbl2\") pod \"coredns-5dd5756b68-hjsnd\" (UID: \"61c215fa-e3e2-491e-81db-7995536566a4\") " pod="kube-system/coredns-5dd5756b68-hjsnd"
	Nov 02 14:10:09 old-k8s-version-873713 kubelet[1391]: I1102 14:10:09.386429    1391 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/61c215fa-e3e2-491e-81db-7995536566a4-config-volume\") pod \"coredns-5dd5756b68-hjsnd\" (UID: \"61c215fa-e3e2-491e-81db-7995536566a4\") " pod="kube-system/coredns-5dd5756b68-hjsnd"
	Nov 02 14:10:09 old-k8s-version-873713 kubelet[1391]: W1102 14:10:09.630351    1391 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/4ee7404b4a6ac58ace4f7c3005215815a212bd879029eed27a35560a4c57fe56/crio-26c8e4d643fd2da221e16c0264a633b9e858c38bcf6114b265807a2bc7900bb5 WatchSource:0}: Error finding container 26c8e4d643fd2da221e16c0264a633b9e858c38bcf6114b265807a2bc7900bb5: Status 404 returned error can't find the container with id 26c8e4d643fd2da221e16c0264a633b9e858c38bcf6114b265807a2bc7900bb5
	Nov 02 14:10:09 old-k8s-version-873713 kubelet[1391]: I1102 14:10:09.981523    1391 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-hjsnd" podStartSLOduration=14.981481372 podCreationTimestamp="2025-11-02 14:09:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-02 14:10:09.980726053 +0000 UTC m=+27.417214177" watchObservedRunningTime="2025-11-02 14:10:09.981481372 +0000 UTC m=+27.417969488"
	Nov 02 14:10:10 old-k8s-version-873713 kubelet[1391]: I1102 14:10:10.024684    1391 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.02453533 podCreationTimestamp="2025-11-02 14:09:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-02 14:10:10.004467567 +0000 UTC m=+27.440955691" watchObservedRunningTime="2025-11-02 14:10:10.02453533 +0000 UTC m=+27.461023446"
	Nov 02 14:10:12 old-k8s-version-873713 kubelet[1391]: I1102 14:10:12.050307    1391 topology_manager.go:215] "Topology Admit Handler" podUID="96331336-0f9f-4a4a-aecf-1aac5a7191da" podNamespace="default" podName="busybox"
	Nov 02 14:10:12 old-k8s-version-873713 kubelet[1391]: I1102 14:10:12.102951    1391 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rx4z8\" (UniqueName: \"kubernetes.io/projected/96331336-0f9f-4a4a-aecf-1aac5a7191da-kube-api-access-rx4z8\") pod \"busybox\" (UID: \"96331336-0f9f-4a4a-aecf-1aac5a7191da\") " pod="default/busybox"
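	
	Note: the "kube-root-ca.crt not found" errors near the top of this block are the same bootstrap race seen elsewhere in this log: projected service-account volumes retry every 500ms until the ConfigMap is published to the namespace, and the pods evidently start seconds later. To verify it now exists:
	
	  kubectl --context old-k8s-version-873713 -n kube-system get configmap kube-root-ca.crt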
	
	
	==> storage-provisioner [57c5dd749b11f230cd5ac5464de0a64559fe6384b4e7c7cd54cca650c6987810] <==
	I1102 14:10:09.697131       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1102 14:10:09.716412       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1102 14:10:09.716478       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1102 14:10:09.736197       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1102 14:10:09.738138       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-873713_9d09b93d-7966-4dc8-9f02-65ab4caffee9!
	I1102 14:10:09.739442       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"bed908b3-2f1a-4e3a-8d32-4d7ab52fe965", APIVersion:"v1", ResourceVersion:"411", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-873713_9d09b93d-7966-4dc8-9f02-65ab4caffee9 became leader
	I1102 14:10:09.838606       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-873713_9d09b93d-7966-4dc8-9f02-65ab4caffee9!
	
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-873713 -n old-k8s-version-873713
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-873713 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.52s)

x
+
TestStartStop/group/old-k8s-version/serial/Pause (6.8s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-873713 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p old-k8s-version-873713 --alsologtostderr -v=1: exit status 80 (1.839294406s)

-- stdout --
	* Pausing node old-k8s-version-873713 ... 
	
	
-- /stdout --
** stderr ** 
	I1102 14:11:40.721849  477405 out.go:360] Setting OutFile to fd 1 ...
	I1102 14:11:40.721981  477405 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 14:11:40.721993  477405 out.go:374] Setting ErrFile to fd 2...
	I1102 14:11:40.721998  477405 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 14:11:40.722267  477405 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-293314/.minikube/bin
	I1102 14:11:40.722543  477405 out.go:368] Setting JSON to false
	I1102 14:11:40.722581  477405 mustload.go:66] Loading cluster: old-k8s-version-873713
	I1102 14:11:40.723018  477405 config.go:182] Loaded profile config "old-k8s-version-873713": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1102 14:11:40.723536  477405 cli_runner.go:164] Run: docker container inspect old-k8s-version-873713 --format={{.State.Status}}
	I1102 14:11:40.740571  477405 host.go:66] Checking if "old-k8s-version-873713" exists ...
	I1102 14:11:40.740906  477405 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 14:11:40.798942  477405 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:63 SystemTime:2025-11-02 14:11:40.788986179 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1102 14:11:40.799593  477405 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1761658712-21800/minikube-v1.37.0-1761658712-21800-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1761658712-21800-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-873713 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1102 14:11:40.803094  477405 out.go:179] * Pausing node old-k8s-version-873713 ... 
	I1102 14:11:40.806022  477405 host.go:66] Checking if "old-k8s-version-873713" exists ...
	I1102 14:11:40.806378  477405 ssh_runner.go:195] Run: systemctl --version
	I1102 14:11:40.806494  477405 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-873713
	I1102 14:11:40.825314  477405 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33426 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/old-k8s-version-873713/id_rsa Username:docker}
	I1102 14:11:40.937656  477405 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1102 14:11:40.955772  477405 pause.go:52] kubelet running: true
	I1102 14:11:40.955851  477405 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1102 14:11:41.234683  477405 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1102 14:11:41.234786  477405 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1102 14:11:41.303326  477405 cri.go:89] found id: "6cf1293ad607139235172951e0f6dd2839ab7dfdb284bac7be5c819c3ae0637b"
	I1102 14:11:41.303354  477405 cri.go:89] found id: "0568c5f06769d347955fc5d3451f1eac160d343349b9500cbd4c89a18f6916ca"
	I1102 14:11:41.303360  477405 cri.go:89] found id: "e206f6558a61d96d9da578d9c25b7fe05996f143a18ece7b8dadbb7f9822b039"
	I1102 14:11:41.303363  477405 cri.go:89] found id: "05ea50ffb17a2c17ae3167c803ef030a793314febf71bcbb6e67e304616db8d1"
	I1102 14:11:41.303366  477405 cri.go:89] found id: "b4d30d38fb366868b37ff296a262960cad053ff1896f9efd34edc961fdef64cc"
	I1102 14:11:41.303370  477405 cri.go:89] found id: "e3009eb696e3a3f606e23a07d321eac52a28e08956e2979dcb8e66d449bc6d55"
	I1102 14:11:41.303374  477405 cri.go:89] found id: "6e8cd97cdf9ad0306b693f6b2aff921d70b038a810142c363024afddee477af3"
	I1102 14:11:41.303377  477405 cri.go:89] found id: "7cbe729897f5594064f2041cb37d3a309e248b10de54a0077bb7ab8a192cbf98"
	I1102 14:11:41.303380  477405 cri.go:89] found id: "318072e43c5b6485705d146666c86c912d3dc14570e9880f1b2c467090c6391b"
	I1102 14:11:41.303387  477405 cri.go:89] found id: "0715d806febbf6bc068ca3b10c928c40a83c034a3efe46db9df4e13ebd053192"
	I1102 14:11:41.303393  477405 cri.go:89] found id: "4e56dc5991fa15eba86cd1282a7f01dec8198efa8716d7d7b2cdc4ef02f81353"
	I1102 14:11:41.303396  477405 cri.go:89] found id: ""
	I1102 14:11:41.303446  477405 ssh_runner.go:195] Run: sudo runc list -f json
	I1102 14:11:41.322152  477405 retry.go:31] will retry after 340.983168ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T14:11:41Z" level=error msg="open /run/runc: no such file or directory"
	I1102 14:11:41.663781  477405 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1102 14:11:41.676855  477405 pause.go:52] kubelet running: false
	I1102 14:11:41.676922  477405 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1102 14:11:41.842221  477405 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1102 14:11:41.842321  477405 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1102 14:11:41.921892  477405 cri.go:89] found id: "6cf1293ad607139235172951e0f6dd2839ab7dfdb284bac7be5c819c3ae0637b"
	I1102 14:11:41.921913  477405 cri.go:89] found id: "0568c5f06769d347955fc5d3451f1eac160d343349b9500cbd4c89a18f6916ca"
	I1102 14:11:41.921919  477405 cri.go:89] found id: "e206f6558a61d96d9da578d9c25b7fe05996f143a18ece7b8dadbb7f9822b039"
	I1102 14:11:41.921923  477405 cri.go:89] found id: "05ea50ffb17a2c17ae3167c803ef030a793314febf71bcbb6e67e304616db8d1"
	I1102 14:11:41.921927  477405 cri.go:89] found id: "b4d30d38fb366868b37ff296a262960cad053ff1896f9efd34edc961fdef64cc"
	I1102 14:11:41.921931  477405 cri.go:89] found id: "e3009eb696e3a3f606e23a07d321eac52a28e08956e2979dcb8e66d449bc6d55"
	I1102 14:11:41.921934  477405 cri.go:89] found id: "6e8cd97cdf9ad0306b693f6b2aff921d70b038a810142c363024afddee477af3"
	I1102 14:11:41.921938  477405 cri.go:89] found id: "7cbe729897f5594064f2041cb37d3a309e248b10de54a0077bb7ab8a192cbf98"
	I1102 14:11:41.921941  477405 cri.go:89] found id: "318072e43c5b6485705d146666c86c912d3dc14570e9880f1b2c467090c6391b"
	I1102 14:11:41.921948  477405 cri.go:89] found id: "0715d806febbf6bc068ca3b10c928c40a83c034a3efe46db9df4e13ebd053192"
	I1102 14:11:41.921952  477405 cri.go:89] found id: "4e56dc5991fa15eba86cd1282a7f01dec8198efa8716d7d7b2cdc4ef02f81353"
	I1102 14:11:41.921955  477405 cri.go:89] found id: ""
	I1102 14:11:41.922005  477405 ssh_runner.go:195] Run: sudo runc list -f json
	I1102 14:11:41.933038  477405 retry.go:31] will retry after 281.894672ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T14:11:41Z" level=error msg="open /run/runc: no such file or directory"
	I1102 14:11:42.215459  477405 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1102 14:11:42.231476  477405 pause.go:52] kubelet running: false
	I1102 14:11:42.231552  477405 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1102 14:11:42.406359  477405 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1102 14:11:42.406515  477405 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1102 14:11:42.478169  477405 cri.go:89] found id: "6cf1293ad607139235172951e0f6dd2839ab7dfdb284bac7be5c819c3ae0637b"
	I1102 14:11:42.478194  477405 cri.go:89] found id: "0568c5f06769d347955fc5d3451f1eac160d343349b9500cbd4c89a18f6916ca"
	I1102 14:11:42.478200  477405 cri.go:89] found id: "e206f6558a61d96d9da578d9c25b7fe05996f143a18ece7b8dadbb7f9822b039"
	I1102 14:11:42.478204  477405 cri.go:89] found id: "05ea50ffb17a2c17ae3167c803ef030a793314febf71bcbb6e67e304616db8d1"
	I1102 14:11:42.478208  477405 cri.go:89] found id: "b4d30d38fb366868b37ff296a262960cad053ff1896f9efd34edc961fdef64cc"
	I1102 14:11:42.478212  477405 cri.go:89] found id: "e3009eb696e3a3f606e23a07d321eac52a28e08956e2979dcb8e66d449bc6d55"
	I1102 14:11:42.478216  477405 cri.go:89] found id: "6e8cd97cdf9ad0306b693f6b2aff921d70b038a810142c363024afddee477af3"
	I1102 14:11:42.478244  477405 cri.go:89] found id: "7cbe729897f5594064f2041cb37d3a309e248b10de54a0077bb7ab8a192cbf98"
	I1102 14:11:42.478249  477405 cri.go:89] found id: "318072e43c5b6485705d146666c86c912d3dc14570e9880f1b2c467090c6391b"
	I1102 14:11:42.478255  477405 cri.go:89] found id: "0715d806febbf6bc068ca3b10c928c40a83c034a3efe46db9df4e13ebd053192"
	I1102 14:11:42.478273  477405 cri.go:89] found id: "4e56dc5991fa15eba86cd1282a7f01dec8198efa8716d7d7b2cdc4ef02f81353"
	I1102 14:11:42.478280  477405 cri.go:89] found id: ""
	I1102 14:11:42.478341  477405 ssh_runner.go:195] Run: sudo runc list -f json
	I1102 14:11:42.493177  477405 out.go:203] 
	W1102 14:11:42.496142  477405 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T14:11:42Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T14:11:42Z" level=error msg="open /run/runc: no such file or directory"
	
	W1102 14:11:42.496171  477405 out.go:285] * 
	* 
	W1102 14:11:42.503521  477405 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1102 14:11:42.506409  477405 out.go:203] 

                                                
                                                
** /stderr **
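The failing step above is narrow: after disabling the kubelet and listing CRI containers by namespace label, the pause path shells out to `sudo runc list -f json`, and runc exits 1 because its default state directory /run/runc is missing inside the guest (the docker inspect below shows /run mounted as tmpfs, so the directory presumably did not survive the stop/start cycle). A minimal sketch of that list step, assuming runc's documented JSON output fields (`id`, `status`); the helper names are ours, not minikube's:

	// Sketch: reproduce the "runc list -f json" step from the pause path
	// and decode its JSON output. Field names are taken from runc's
	// documented list format and should be treated as assumptions.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type runcContainer struct {
		ID     string `json:"id"`
		Status string `json:"status"` // e.g. "running", "paused"
	}

	func listRunc() ([]runcContainer, error) {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
		if err != nil {
			// This is the failure captured above: exit status 1 with
			// "open /run/runc: no such file or directory" on stderr.
			return nil, fmt.Errorf("runc list: %w", err)
		}
		var cs []runcContainer
		// An empty "null" reply decodes to a nil slice.
		if err := json.Unmarshal(out, &cs); err != nil {
			return nil, err
		}
		return cs, nil
	}

	func main() {
		fmt.Println(listRunc())
	}

Note that the retry loop above re-runs the identical command, so it cannot converge while the state directory never reappears; all three attempts fail the same way before the test gives up with GUEST_PAUSE.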
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p old-k8s-version-873713 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-873713
helpers_test.go:243: (dbg) docker inspect old-k8s-version-873713:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4ee7404b4a6ac58ace4f7c3005215815a212bd879029eed27a35560a4c57fe56",
	        "Created": "2025-11-02T14:09:16.892897675Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 475055,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-02T14:10:36.422155539Z",
	            "FinishedAt": "2025-11-02T14:10:35.574926166Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/4ee7404b4a6ac58ace4f7c3005215815a212bd879029eed27a35560a4c57fe56/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4ee7404b4a6ac58ace4f7c3005215815a212bd879029eed27a35560a4c57fe56/hostname",
	        "HostsPath": "/var/lib/docker/containers/4ee7404b4a6ac58ace4f7c3005215815a212bd879029eed27a35560a4c57fe56/hosts",
	        "LogPath": "/var/lib/docker/containers/4ee7404b4a6ac58ace4f7c3005215815a212bd879029eed27a35560a4c57fe56/4ee7404b4a6ac58ace4f7c3005215815a212bd879029eed27a35560a4c57fe56-json.log",
	        "Name": "/old-k8s-version-873713",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-873713:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-873713",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4ee7404b4a6ac58ace4f7c3005215815a212bd879029eed27a35560a4c57fe56",
	                "LowerDir": "/var/lib/docker/overlay2/02fe55438eff7f4b248d251e0fd41254d206cd6322b4309218b237305b27175b-init/diff:/var/lib/docker/overlay2/53058afe6acd7391f639f551e07f446f6a39a991b699aea18507cb21a1652e97/diff",
	                "MergedDir": "/var/lib/docker/overlay2/02fe55438eff7f4b248d251e0fd41254d206cd6322b4309218b237305b27175b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/02fe55438eff7f4b248d251e0fd41254d206cd6322b4309218b237305b27175b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/02fe55438eff7f4b248d251e0fd41254d206cd6322b4309218b237305b27175b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-873713",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-873713/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-873713",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-873713",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-873713",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3e4e1d32b42ecf0f5521ef60ba7e5a1f4aef5d6aa9f6642bc0d3fc476421e0bd",
	            "SandboxKey": "/var/run/docker/netns/3e4e1d32b42e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33426"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33427"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33430"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33428"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33429"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-873713": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2e:2c:cc:f1:06:0c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "174273846e47bcb425298d38a31d82d3ed621bb4662ffd28cfa6393ea0333640",
	                    "EndpointID": "3160745528b011c6615aaab139e2d0ef3c8631063ef0a430a354b21021ff72a7",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-873713",
	                        "4ee7404b4a6a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
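Everything the pause post-mortem actually consumes from this dump lives under NetworkSettings.Ports: the SSH session at the top of the log reached the guest through the 22/tcp mapping (host port 33426). A minimal sketch of pulling that mapping out of `docker container inspect`, using the same Go template the cli_runner invocations in the "Last Start" log below run (the helper name is ours):

	// Sketch: read the host port that docker mapped to the guest's SSH
	// port (22/tcp), mirroring minikube's inspect template.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func sshHostPort(container string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect",
			"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
			container).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		// Per the inspect dump above this should print "33426".
		fmt.Println(sshHostPort("old-k8s-version-873713"))
	}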
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-873713 -n old-k8s-version-873713
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-873713 -n old-k8s-version-873713: exit status 2 (344.006507ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-873713 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-873713 logs -n 25: (1.322142627s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-143736 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-143736            │ jenkins │ v1.37.0 │ 02 Nov 25 14:07 UTC │                     │
	│ ssh     │ -p cilium-143736 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-143736            │ jenkins │ v1.37.0 │ 02 Nov 25 14:07 UTC │                     │
	│ ssh     │ -p cilium-143736 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-143736            │ jenkins │ v1.37.0 │ 02 Nov 25 14:07 UTC │                     │
	│ ssh     │ -p cilium-143736 sudo containerd config dump                                                                                                                                                                                                  │ cilium-143736            │ jenkins │ v1.37.0 │ 02 Nov 25 14:07 UTC │                     │
	│ ssh     │ -p cilium-143736 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-143736            │ jenkins │ v1.37.0 │ 02 Nov 25 14:07 UTC │                     │
	│ ssh     │ -p cilium-143736 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-143736            │ jenkins │ v1.37.0 │ 02 Nov 25 14:07 UTC │                     │
	│ ssh     │ -p cilium-143736 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-143736            │ jenkins │ v1.37.0 │ 02 Nov 25 14:07 UTC │                     │
	│ ssh     │ -p cilium-143736 sudo crio config                                                                                                                                                                                                             │ cilium-143736            │ jenkins │ v1.37.0 │ 02 Nov 25 14:07 UTC │                     │
	│ delete  │ -p cilium-143736                                                                                                                                                                                                                              │ cilium-143736            │ jenkins │ v1.37.0 │ 02 Nov 25 14:07 UTC │ 02 Nov 25 14:07 UTC │
	│ start   │ -p force-systemd-env-263133 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-263133 │ jenkins │ v1.37.0 │ 02 Nov 25 14:07 UTC │ 02 Nov 25 14:08 UTC │
	│ delete  │ -p pause-061518                                                                                                                                                                                                                               │ pause-061518             │ jenkins │ v1.37.0 │ 02 Nov 25 14:07 UTC │ 02 Nov 25 14:07 UTC │
	│ start   │ -p cert-expiration-114321 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-114321   │ jenkins │ v1.37.0 │ 02 Nov 25 14:07 UTC │ 02 Nov 25 14:08 UTC │
	│ delete  │ -p force-systemd-env-263133                                                                                                                                                                                                                   │ force-systemd-env-263133 │ jenkins │ v1.37.0 │ 02 Nov 25 14:08 UTC │ 02 Nov 25 14:08 UTC │
	│ start   │ -p cert-options-935084 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-935084      │ jenkins │ v1.37.0 │ 02 Nov 25 14:08 UTC │ 02 Nov 25 14:09 UTC │
	│ ssh     │ cert-options-935084 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-935084      │ jenkins │ v1.37.0 │ 02 Nov 25 14:09 UTC │ 02 Nov 25 14:09 UTC │
	│ ssh     │ -p cert-options-935084 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-935084      │ jenkins │ v1.37.0 │ 02 Nov 25 14:09 UTC │ 02 Nov 25 14:09 UTC │
	│ delete  │ -p cert-options-935084                                                                                                                                                                                                                        │ cert-options-935084      │ jenkins │ v1.37.0 │ 02 Nov 25 14:09 UTC │ 02 Nov 25 14:09 UTC │
	│ start   │ -p old-k8s-version-873713 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-873713   │ jenkins │ v1.37.0 │ 02 Nov 25 14:09 UTC │ 02 Nov 25 14:10 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-873713 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-873713   │ jenkins │ v1.37.0 │ 02 Nov 25 14:10 UTC │                     │
	│ stop    │ -p old-k8s-version-873713 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-873713   │ jenkins │ v1.37.0 │ 02 Nov 25 14:10 UTC │ 02 Nov 25 14:10 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-873713 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-873713   │ jenkins │ v1.37.0 │ 02 Nov 25 14:10 UTC │ 02 Nov 25 14:10 UTC │
	│ start   │ -p old-k8s-version-873713 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-873713   │ jenkins │ v1.37.0 │ 02 Nov 25 14:10 UTC │ 02 Nov 25 14:11 UTC │
	│ start   │ -p cert-expiration-114321 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-114321   │ jenkins │ v1.37.0 │ 02 Nov 25 14:11 UTC │                     │
	│ image   │ old-k8s-version-873713 image list --format=json                                                                                                                                                                                               │ old-k8s-version-873713   │ jenkins │ v1.37.0 │ 02 Nov 25 14:11 UTC │ 02 Nov 25 14:11 UTC │
	│ pause   │ -p old-k8s-version-873713 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-873713   │ jenkins │ v1.37.0 │ 02 Nov 25 14:11 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/02 14:11:36
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1102 14:11:36.882087  477134 out.go:360] Setting OutFile to fd 1 ...
	I1102 14:11:36.882177  477134 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 14:11:36.882181  477134 out.go:374] Setting ErrFile to fd 2...
	I1102 14:11:36.882184  477134 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 14:11:36.882550  477134 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-293314/.minikube/bin
	I1102 14:11:36.883026  477134 out.go:368] Setting JSON to false
	I1102 14:11:36.884196  477134 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10449,"bootTime":1762082248,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1102 14:11:36.884255  477134 start.go:143] virtualization:  
	I1102 14:11:36.892377  477134 out.go:179] * [cert-expiration-114321] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1102 14:11:36.899059  477134 out.go:179]   - MINIKUBE_LOCATION=21808
	I1102 14:11:36.899244  477134 notify.go:221] Checking for updates...
	I1102 14:11:36.906600  477134 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1102 14:11:36.910061  477134 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-293314/kubeconfig
	I1102 14:11:36.913471  477134 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-293314/.minikube
	I1102 14:11:36.917093  477134 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1102 14:11:36.920235  477134 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1102 14:11:36.923986  477134 config.go:182] Loaded profile config "cert-expiration-114321": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 14:11:36.924533  477134 driver.go:422] Setting default libvirt URI to qemu:///system
	I1102 14:11:36.956312  477134 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1102 14:11:36.956414  477134 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 14:11:37.027127  477134 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-02 14:11:37.015128137 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1102 14:11:37.027263  477134 docker.go:319] overlay module found
	I1102 14:11:37.031137  477134 out.go:179] * Using the docker driver based on existing profile
	I1102 14:11:37.034244  477134 start.go:309] selected driver: docker
	I1102 14:11:37.034257  477134 start.go:930] validating driver "docker" against &{Name:cert-expiration-114321 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-114321 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1102 14:11:37.034355  477134 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1102 14:11:37.035245  477134 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 14:11:37.115575  477134 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-02 14:11:37.105624 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1102 14:11:37.115881  477134 cni.go:84] Creating CNI manager for ""
	I1102 14:11:37.115945  477134 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1102 14:11:37.115987  477134 start.go:353] cluster config:
	{Name:cert-expiration-114321 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-114321 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1102 14:11:37.119456  477134 out.go:179] * Starting "cert-expiration-114321" primary control-plane node in "cert-expiration-114321" cluster
	I1102 14:11:37.122408  477134 cache.go:124] Beginning downloading kic base image for docker with crio
	I1102 14:11:37.125390  477134 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1102 14:11:37.128318  477134 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1102 14:11:37.128401  477134 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21808-293314/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1102 14:11:37.128410  477134 cache.go:59] Caching tarball of preloaded images
	I1102 14:11:37.128515  477134 preload.go:233] Found /home/jenkins/minikube-integration/21808-293314/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1102 14:11:37.128524  477134 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1102 14:11:37.128634  477134 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/cert-expiration-114321/config.json ...
	I1102 14:11:37.128897  477134 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1102 14:11:37.149625  477134 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1102 14:11:37.149637  477134 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1102 14:11:37.149664  477134 cache.go:233] Successfully downloaded all kic artifacts
	I1102 14:11:37.149694  477134 start.go:360] acquireMachinesLock for cert-expiration-114321: {Name:mk8d21a5e709ca899ae97668803a3f23795bfc71 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1102 14:11:37.149759  477134 start.go:364] duration metric: took 49.411µs to acquireMachinesLock for "cert-expiration-114321"
	I1102 14:11:37.149787  477134 start.go:96] Skipping create...Using existing machine configuration
	I1102 14:11:37.149792  477134 fix.go:54] fixHost starting: 
	I1102 14:11:37.150049  477134 cli_runner.go:164] Run: docker container inspect cert-expiration-114321 --format={{.State.Status}}
	I1102 14:11:37.168663  477134 fix.go:112] recreateIfNeeded on cert-expiration-114321: state=Running err=<nil>
	W1102 14:11:37.168684  477134 fix.go:138] unexpected machine state, will restart: <nil>
	I1102 14:11:37.171831  477134 out.go:252] * Updating the running docker "cert-expiration-114321" container ...
	I1102 14:11:37.171860  477134 machine.go:94] provisionDockerMachine start ...
	I1102 14:11:37.171948  477134 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-114321
	I1102 14:11:37.189065  477134 main.go:143] libmachine: Using SSH client type: native
	I1102 14:11:37.189395  477134 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33409 <nil> <nil>}
	I1102 14:11:37.189402  477134 main.go:143] libmachine: About to run SSH command:
	hostname
	I1102 14:11:37.342812  477134 main.go:143] libmachine: SSH cmd err, output: <nil>: cert-expiration-114321
	
	I1102 14:11:37.342826  477134 ubuntu.go:182] provisioning hostname "cert-expiration-114321"
	I1102 14:11:37.342906  477134 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-114321
	I1102 14:11:37.361507  477134 main.go:143] libmachine: Using SSH client type: native
	I1102 14:11:37.361839  477134 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33409 <nil> <nil>}
	I1102 14:11:37.361849  477134 main.go:143] libmachine: About to run SSH command:
	sudo hostname cert-expiration-114321 && echo "cert-expiration-114321" | sudo tee /etc/hostname
	I1102 14:11:37.529777  477134 main.go:143] libmachine: SSH cmd err, output: <nil>: cert-expiration-114321
	
	I1102 14:11:37.529844  477134 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-114321
	I1102 14:11:37.547921  477134 main.go:143] libmachine: Using SSH client type: native
	I1102 14:11:37.548248  477134 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33409 <nil> <nil>}
	I1102 14:11:37.548266  477134 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-114321' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-114321/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-114321' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1102 14:11:37.699150  477134 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1102 14:11:37.699166  477134 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21808-293314/.minikube CaCertPath:/home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21808-293314/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21808-293314/.minikube}
	I1102 14:11:37.699184  477134 ubuntu.go:190] setting up certificates
	I1102 14:11:37.699192  477134 provision.go:84] configureAuth start
	I1102 14:11:37.699265  477134 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-114321
	I1102 14:11:37.717209  477134 provision.go:143] copyHostCerts
	I1102 14:11:37.717268  477134 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-293314/.minikube/ca.pem, removing ...
	I1102 14:11:37.717276  477134 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-293314/.minikube/ca.pem
	I1102 14:11:37.717356  477134 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21808-293314/.minikube/ca.pem (1082 bytes)
	I1102 14:11:37.717451  477134 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-293314/.minikube/cert.pem, removing ...
	I1102 14:11:37.717455  477134 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-293314/.minikube/cert.pem
	I1102 14:11:37.717495  477134 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21808-293314/.minikube/cert.pem (1123 bytes)
	I1102 14:11:37.717551  477134 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-293314/.minikube/key.pem, removing ...
	I1102 14:11:37.717554  477134 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-293314/.minikube/key.pem
	I1102 14:11:37.717578  477134 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21808-293314/.minikube/key.pem (1675 bytes)
	I1102 14:11:37.717623  477134 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21808-293314/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-114321 san=[127.0.0.1 192.168.85.2 cert-expiration-114321 localhost minikube]
	I1102 14:11:39.000573  477134 provision.go:177] copyRemoteCerts
	I1102 14:11:39.000626  477134 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1102 14:11:39.000674  477134 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-114321
	I1102 14:11:39.019867  477134 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33409 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/cert-expiration-114321/id_rsa Username:docker}
	I1102 14:11:39.130725  477134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1102 14:11:39.151082  477134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1102 14:11:39.169387  477134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1102 14:11:39.188777  477134 provision.go:87] duration metric: took 1.489560559s to configureAuth
	I1102 14:11:39.188795  477134 ubuntu.go:206] setting minikube options for container-runtime
	I1102 14:11:39.189000  477134 config.go:182] Loaded profile config "cert-expiration-114321": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 14:11:39.189119  477134 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-114321
	I1102 14:11:39.207692  477134 main.go:143] libmachine: Using SSH client type: native
	I1102 14:11:39.208025  477134 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33409 <nil> <nil>}
	I1102 14:11:39.208037  477134 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	
	
	==> CRI-O <==
	Nov 02 14:11:26 old-k8s-version-873713 crio[682]: time="2025-11-02T14:11:26.209060529Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=c08152c1-c4a7-4ea9-bf67-9400588090f6 name=/runtime.v1.ImageService/ImageStatus
	Nov 02 14:11:26 old-k8s-version-873713 crio[682]: time="2025-11-02T14:11:26.210309145Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=9a18fe9d-eb53-4717-9594-0ec1c341a15c name=/runtime.v1.ImageService/ImageStatus
	Nov 02 14:11:26 old-k8s-version-873713 crio[682]: time="2025-11-02T14:11:26.211551968Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-q5mbz/dashboard-metrics-scraper" id=06c2682e-18c9-48fa-befc-cd54ae294fed name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 14:11:26 old-k8s-version-873713 crio[682]: time="2025-11-02T14:11:26.211646951Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 14:11:26 old-k8s-version-873713 crio[682]: time="2025-11-02T14:11:26.218423309Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 14:11:26 old-k8s-version-873713 crio[682]: time="2025-11-02T14:11:26.218992592Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 14:11:26 old-k8s-version-873713 crio[682]: time="2025-11-02T14:11:26.233662739Z" level=info msg="Created container 0715d806febbf6bc068ca3b10c928c40a83c034a3efe46db9df4e13ebd053192: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-q5mbz/dashboard-metrics-scraper" id=06c2682e-18c9-48fa-befc-cd54ae294fed name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 14:11:26 old-k8s-version-873713 crio[682]: time="2025-11-02T14:11:26.234943937Z" level=info msg="Starting container: 0715d806febbf6bc068ca3b10c928c40a83c034a3efe46db9df4e13ebd053192" id=4399767f-853e-40b6-835c-335bc99293d8 name=/runtime.v1.RuntimeService/StartContainer
	Nov 02 14:11:26 old-k8s-version-873713 crio[682]: time="2025-11-02T14:11:26.236744507Z" level=info msg="Started container" PID=1677 containerID=0715d806febbf6bc068ca3b10c928c40a83c034a3efe46db9df4e13ebd053192 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-q5mbz/dashboard-metrics-scraper id=4399767f-853e-40b6-835c-335bc99293d8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=9a0de88277eb9b4c9ccfaebce0a6fa44b6032df83f3fb80dda4f2ff6903044d2
	Nov 02 14:11:26 old-k8s-version-873713 conmon[1675]: conmon 0715d806febbf6bc068c <ninfo>: container 1677 exited with status 1
	Nov 02 14:11:26 old-k8s-version-873713 crio[682]: time="2025-11-02T14:11:26.396448593Z" level=info msg="Removing container: 1f8025db902c9f740662dbe0ba7159079793a6df6b55ec500a4119a1d3ffcec1" id=f6f2a646-9377-4a62-bb10-63bfe0684a09 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 02 14:11:26 old-k8s-version-873713 crio[682]: time="2025-11-02T14:11:26.408960744Z" level=info msg="Error loading conmon cgroup of container 1f8025db902c9f740662dbe0ba7159079793a6df6b55ec500a4119a1d3ffcec1: cgroup deleted" id=f6f2a646-9377-4a62-bb10-63bfe0684a09 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 02 14:11:26 old-k8s-version-873713 crio[682]: time="2025-11-02T14:11:26.412248085Z" level=info msg="Removed container 1f8025db902c9f740662dbe0ba7159079793a6df6b55ec500a4119a1d3ffcec1: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-q5mbz/dashboard-metrics-scraper" id=f6f2a646-9377-4a62-bb10-63bfe0684a09 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 02 14:11:31 old-k8s-version-873713 crio[682]: time="2025-11-02T14:11:31.916757367Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 02 14:11:31 old-k8s-version-873713 crio[682]: time="2025-11-02T14:11:31.92420797Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 02 14:11:31 old-k8s-version-873713 crio[682]: time="2025-11-02T14:11:31.924249537Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 02 14:11:31 old-k8s-version-873713 crio[682]: time="2025-11-02T14:11:31.924273734Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 02 14:11:31 old-k8s-version-873713 crio[682]: time="2025-11-02T14:11:31.927697832Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 02 14:11:31 old-k8s-version-873713 crio[682]: time="2025-11-02T14:11:31.927729454Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 02 14:11:31 old-k8s-version-873713 crio[682]: time="2025-11-02T14:11:31.927756187Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 02 14:11:31 old-k8s-version-873713 crio[682]: time="2025-11-02T14:11:31.931188071Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 02 14:11:31 old-k8s-version-873713 crio[682]: time="2025-11-02T14:11:31.931269671Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 02 14:11:31 old-k8s-version-873713 crio[682]: time="2025-11-02T14:11:31.931294952Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 02 14:11:31 old-k8s-version-873713 crio[682]: time="2025-11-02T14:11:31.935070542Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 02 14:11:31 old-k8s-version-873713 crio[682]: time="2025-11-02T14:11:31.935105652Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	0715d806febbf       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           17 seconds ago      Exited              dashboard-metrics-scraper   2                   9a0de88277eb9       dashboard-metrics-scraper-5f989dc9cf-q5mbz       kubernetes-dashboard
	6cf1293ad6071       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           21 seconds ago      Running             storage-provisioner         2                   add8af5045fc6       storage-provisioner                              kube-system
	4e56dc5991fa1       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   31 seconds ago      Running             kubernetes-dashboard        0                   bee2b678c62e3       kubernetes-dashboard-8694d4445c-7nd7h            kubernetes-dashboard
	6ce84e7ad4fc6       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           51 seconds ago      Running             busybox                     1                   bfe30e9786dc5       busybox                                          default
	0568c5f06769d       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           51 seconds ago      Running             coredns                     1                   09dda0eb13480       coredns-5dd5756b68-hjsnd                         kube-system
	e206f6558a61d       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           51 seconds ago      Running             kube-proxy                  1                   b60a59c67e592       kube-proxy-ppcp5                                 kube-system
	05ea50ffb17a2       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           51 seconds ago      Exited              storage-provisioner         1                   add8af5045fc6       storage-provisioner                              kube-system
	b4d30d38fb366       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           52 seconds ago      Running             kindnet-cni                 1                   295e37993186c       kindnet-d876b                                    kube-system
	e3009eb696e3a       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           58 seconds ago      Running             etcd                        1                   f8eec2dda1024       etcd-old-k8s-version-873713                      kube-system
	6e8cd97cdf9ad       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           58 seconds ago      Running             kube-apiserver              1                   a2b9a6245d5f1       kube-apiserver-old-k8s-version-873713            kube-system
	7cbe729897f55       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           58 seconds ago      Running             kube-scheduler              1                   cb24b6567f7d4       kube-scheduler-old-k8s-version-873713            kube-system
	318072e43c5b6       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           58 seconds ago      Running             kube-controller-manager     1                   bc3d19fda648a       kube-controller-manager-old-k8s-version-873713   kube-system
	
	
	==> coredns [0568c5f06769d347955fc5d3451f1eac160d343349b9500cbd4c89a18f6916ca] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:55161 - 65431 "HINFO IN 8995529865261232737.7691018843427380614. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.004797995s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               old-k8s-version-873713
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-873713
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=170a9221ec214abbddb4c7cdac340516a92b239a
	                    minikube.k8s.io/name=old-k8s-version-873713
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_02T14_09_43_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 02 Nov 2025 14:09:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-873713
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 02 Nov 2025 14:11:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 02 Nov 2025 14:11:21 +0000   Sun, 02 Nov 2025 14:09:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 02 Nov 2025 14:11:21 +0000   Sun, 02 Nov 2025 14:09:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 02 Nov 2025 14:11:21 +0000   Sun, 02 Nov 2025 14:09:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 02 Nov 2025 14:11:21 +0000   Sun, 02 Nov 2025 14:10:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-873713
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                316cad18-282c-471f-8314-7b2e61711c14
	  Boot ID:                    4c303cf4-c0b6-4c0d-aa1a-d7509feb15e7
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 coredns-5dd5756b68-hjsnd                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     108s
	  kube-system                 etcd-old-k8s-version-873713                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m3s
	  kube-system                 kindnet-d876b                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      108s
	  kube-system                 kube-apiserver-old-k8s-version-873713             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 kube-controller-manager-old-k8s-version-873713    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-proxy-ppcp5                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-scheduler-old-k8s-version-873713             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-q5mbz        0 (0%)        0 (0%)      0 (0%)           0 (0%)         41s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-7nd7h             0 (0%)        0 (0%)      0 (0%)           0 (0%)         41s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 107s               kube-proxy       
	  Normal  Starting                 51s                kube-proxy       
	  Normal  NodeHasSufficientMemory  2m1s               kubelet          Node old-k8s-version-873713 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m1s               kubelet          Node old-k8s-version-873713 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m1s               kubelet          Node old-k8s-version-873713 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m1s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           109s               node-controller  Node old-k8s-version-873713 event: Registered Node old-k8s-version-873713 in Controller
	  Normal  NodeReady                94s                kubelet          Node old-k8s-version-873713 status is now: NodeReady
	  Normal  Starting                 59s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  59s (x8 over 59s)  kubelet          Node old-k8s-version-873713 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    59s (x8 over 59s)  kubelet          Node old-k8s-version-873713 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     59s (x8 over 59s)  kubelet          Node old-k8s-version-873713 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           41s                node-controller  Node old-k8s-version-873713 event: Registered Node old-k8s-version-873713 in Controller
	
	
	==> dmesg <==
	[Nov 2 13:49] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:50] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:51] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:52] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:54] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:55] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:56] overlayfs: idmapped layers are currently not supported
	[  +3.515963] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:57] overlayfs: idmapped layers are currently not supported
	[ +24.836033] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:58] overlayfs: idmapped layers are currently not supported
	[ +23.362553] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:59] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:01] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:02] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:03] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:04] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:06] overlayfs: idmapped layers are currently not supported
	[ +50.469589] kauditd_printk_skb: 8 callbacks suppressed
	[Nov 2 14:07] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:08] overlayfs: idmapped layers are currently not supported
	[ +11.089512] overlayfs: idmapped layers are currently not supported
	[ +33.821233] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:09] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:10] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [e3009eb696e3a3f606e23a07d321eac52a28e08956e2979dcb8e66d449bc6d55] <==
	{"level":"info","ts":"2025-11-02T14:10:45.471308Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-02T14:10:45.471358Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-02T14:10:45.471607Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-11-02T14:10:45.471699Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-11-02T14:10:45.471856Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-02T14:10:45.471917Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-02T14:10:45.478366Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-02T14:10:45.480872Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-02T14:10:45.482135Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-02T14:10:45.481314Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-02T14:10:45.48246Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-02T14:10:46.520717Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-11-02T14:10:46.52084Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-11-02T14:10:46.520895Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-11-02T14:10:46.52095Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-11-02T14:10:46.520981Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-11-02T14:10:46.521019Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-11-02T14:10:46.521052Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-11-02T14:10:46.524444Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-873713 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-02T14:10:46.524526Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-02T14:10:46.525496Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-02T14:10:46.525742Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-02T14:10:46.526577Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-11-02T14:10:46.527061Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-02T14:10:46.527111Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 14:11:43 up  2:54,  0 user,  load average: 2.04, 3.20, 2.77
	Linux old-k8s-version-873713 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b4d30d38fb366868b37ff296a262960cad053ff1896f9efd34edc961fdef64cc] <==
	I1102 14:10:51.631939       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1102 14:10:51.710859       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1102 14:10:51.711042       1 main.go:148] setting mtu 1500 for CNI 
	I1102 14:10:51.711056       1 main.go:178] kindnetd IP family: "ipv4"
	I1102 14:10:51.711069       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-02T14:10:51Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1102 14:10:51.912949       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1102 14:10:51.913032       1 controller.go:381] "Waiting for informer caches to sync"
	I1102 14:10:51.913079       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1102 14:10:51.913805       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1102 14:11:21.917523       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1102 14:11:21.917667       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1102 14:11:21.917783       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1102 14:11:21.917941       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1102 14:11:23.514091       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1102 14:11:23.514126       1 metrics.go:72] Registering metrics
	I1102 14:11:23.514207       1 controller.go:711] "Syncing nftables rules"
	I1102 14:11:31.916376       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1102 14:11:31.916466       1 main.go:301] handling current node
	I1102 14:11:41.919923       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1102 14:11:41.919953       1 main.go:301] handling current node
	
	
	==> kube-apiserver [6e8cd97cdf9ad0306b693f6b2aff921d70b038a810142c363024afddee477af3] <==
	I1102 14:10:50.211194       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1102 14:10:50.227648       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1102 14:10:50.229551       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1102 14:10:50.229574       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1102 14:10:50.229764       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1102 14:10:50.236208       1 shared_informer.go:318] Caches are synced for configmaps
	I1102 14:10:50.236275       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1102 14:10:50.242172       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1102 14:10:50.248928       1 aggregator.go:166] initial CRD sync complete...
	I1102 14:10:50.248958       1 autoregister_controller.go:141] Starting autoregister controller
	I1102 14:10:50.248974       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1102 14:10:50.248981       1 cache.go:39] Caches are synced for autoregister controller
	I1102 14:10:50.272375       1 shared_informer.go:318] Caches are synced for node_authorizer
	E1102 14:10:50.367521       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1102 14:10:50.931654       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1102 14:10:52.030361       1 controller.go:624] quota admission added evaluator for: namespaces
	I1102 14:10:52.085213       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1102 14:10:52.117206       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1102 14:10:52.130398       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1102 14:10:52.140994       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1102 14:10:52.242651       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.110.231.184"}
	I1102 14:10:52.313802       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.154.18"}
	I1102 14:11:02.626825       1 controller.go:624] quota admission added evaluator for: endpoints
	I1102 14:11:02.656827       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1102 14:11:02.743044       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [318072e43c5b6485705d146666c86c912d3dc14570e9880f1b2c467090c6391b] <==
	I1102 14:11:02.752771       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8694d4445c to 1"
	I1102 14:11:02.757272       1 shared_informer.go:318] Caches are synced for service account
	I1102 14:11:02.769935       1 shared_informer.go:318] Caches are synced for resource quota
	I1102 14:11:02.770503       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-q5mbz"
	I1102 14:11:02.779838       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-7nd7h"
	I1102 14:11:02.791610       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="44.921206ms"
	I1102 14:11:02.795258       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="42.687329ms"
	I1102 14:11:02.814676       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="22.211351ms"
	I1102 14:11:02.817208       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="69.76µs"
	I1102 14:11:02.824483       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="100.374µs"
	I1102 14:11:02.843707       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="48.237773ms"
	I1102 14:11:02.843810       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="65.855µs"
	I1102 14:11:02.848851       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="99.521µs"
	I1102 14:11:03.182054       1 shared_informer.go:318] Caches are synced for garbage collector
	I1102 14:11:03.224609       1 shared_informer.go:318] Caches are synced for garbage collector
	I1102 14:11:03.224650       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1102 14:11:08.351733       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="78.54µs"
	I1102 14:11:09.366344       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="53.802µs"
	I1102 14:11:12.383223       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="16.931967ms"
	I1102 14:11:12.383676       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="74.552µs"
	I1102 14:11:13.107188       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="98.085µs"
	I1102 14:11:26.413589       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="54.819µs"
	I1102 14:11:26.967204       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="13.119965ms"
	I1102 14:11:26.967388       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="52.226µs"
	I1102 14:11:33.118098       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="55.968µs"
	
	
	==> kube-proxy [e206f6558a61d96d9da578d9c25b7fe05996f143a18ece7b8dadbb7f9822b039] <==
	I1102 14:10:51.805627       1 server_others.go:69] "Using iptables proxy"
	I1102 14:10:51.834857       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1102 14:10:51.874909       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1102 14:10:51.876587       1 server_others.go:152] "Using iptables Proxier"
	I1102 14:10:51.876618       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1102 14:10:51.876632       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1102 14:10:51.876659       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1102 14:10:51.876883       1 server.go:846] "Version info" version="v1.28.0"
	I1102 14:10:51.876892       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1102 14:10:51.880888       1 config.go:188] "Starting service config controller"
	I1102 14:10:51.880904       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1102 14:10:51.880921       1 config.go:97] "Starting endpoint slice config controller"
	I1102 14:10:51.880925       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1102 14:10:51.881286       1 config.go:315] "Starting node config controller"
	I1102 14:10:51.881292       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1102 14:10:51.981038       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1102 14:10:51.981056       1 shared_informer.go:318] Caches are synced for service config
	I1102 14:10:51.981367       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [7cbe729897f5594064f2041cb37d3a309e248b10de54a0077bb7ab8a192cbf98] <==
	I1102 14:10:48.050634       1 serving.go:348] Generated self-signed cert in-memory
	W1102 14:10:50.242710       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1102 14:10:50.242785       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1102 14:10:50.242819       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1102 14:10:50.242849       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1102 14:10:50.305196       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1102 14:10:50.305353       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1102 14:10:50.306969       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1102 14:10:50.307065       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1102 14:10:50.308376       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1102 14:10:50.308464       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1102 14:10:50.408454       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 02 14:11:02 old-k8s-version-873713 kubelet[808]: I1102 14:11:02.796350     808 topology_manager.go:215] "Topology Admit Handler" podUID="176cf84b-bc2d-4f64-9bd0-b6375d4daaa5" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-7nd7h"
	Nov 02 14:11:02 old-k8s-version-873713 kubelet[808]: I1102 14:11:02.949240     808 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mp8pd\" (UniqueName: \"kubernetes.io/projected/55262db6-6c5b-467f-b999-b64511792d4d-kube-api-access-mp8pd\") pod \"dashboard-metrics-scraper-5f989dc9cf-q5mbz\" (UID: \"55262db6-6c5b-467f-b999-b64511792d4d\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-q5mbz"
	Nov 02 14:11:02 old-k8s-version-873713 kubelet[808]: I1102 14:11:02.949304     808 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/176cf84b-bc2d-4f64-9bd0-b6375d4daaa5-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-7nd7h\" (UID: \"176cf84b-bc2d-4f64-9bd0-b6375d4daaa5\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-7nd7h"
	Nov 02 14:11:02 old-k8s-version-873713 kubelet[808]: I1102 14:11:02.949336     808 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w25cd\" (UniqueName: \"kubernetes.io/projected/176cf84b-bc2d-4f64-9bd0-b6375d4daaa5-kube-api-access-w25cd\") pod \"kubernetes-dashboard-8694d4445c-7nd7h\" (UID: \"176cf84b-bc2d-4f64-9bd0-b6375d4daaa5\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-7nd7h"
	Nov 02 14:11:02 old-k8s-version-873713 kubelet[808]: I1102 14:11:02.949363     808 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/55262db6-6c5b-467f-b999-b64511792d4d-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-q5mbz\" (UID: \"55262db6-6c5b-467f-b999-b64511792d4d\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-q5mbz"
	Nov 02 14:11:03 old-k8s-version-873713 kubelet[808]: W1102 14:11:03.115412     808 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/4ee7404b4a6ac58ace4f7c3005215815a212bd879029eed27a35560a4c57fe56/crio-9a0de88277eb9b4c9ccfaebce0a6fa44b6032df83f3fb80dda4f2ff6903044d2 WatchSource:0}: Error finding container 9a0de88277eb9b4c9ccfaebce0a6fa44b6032df83f3fb80dda4f2ff6903044d2: Status 404 returned error can't find the container with id 9a0de88277eb9b4c9ccfaebce0a6fa44b6032df83f3fb80dda4f2ff6903044d2
	Nov 02 14:11:03 old-k8s-version-873713 kubelet[808]: W1102 14:11:03.135185     808 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/4ee7404b4a6ac58ace4f7c3005215815a212bd879029eed27a35560a4c57fe56/crio-bee2b678c62e310e65447227f479e583a2a9bd6d3c38b1b728f65575d51230e0 WatchSource:0}: Error finding container bee2b678c62e310e65447227f479e583a2a9bd6d3c38b1b728f65575d51230e0: Status 404 returned error can't find the container with id bee2b678c62e310e65447227f479e583a2a9bd6d3c38b1b728f65575d51230e0
	Nov 02 14:11:08 old-k8s-version-873713 kubelet[808]: I1102 14:11:08.338318     808 scope.go:117] "RemoveContainer" containerID="1fa080268a3b1c567ca1e56bc4fdc6320572ba34379312b8723d90bd5921fc0e"
	Nov 02 14:11:09 old-k8s-version-873713 kubelet[808]: I1102 14:11:09.342564     808 scope.go:117] "RemoveContainer" containerID="1f8025db902c9f740662dbe0ba7159079793a6df6b55ec500a4119a1d3ffcec1"
	Nov 02 14:11:09 old-k8s-version-873713 kubelet[808]: E1102 14:11:09.342964     808 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-q5mbz_kubernetes-dashboard(55262db6-6c5b-467f-b999-b64511792d4d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-q5mbz" podUID="55262db6-6c5b-467f-b999-b64511792d4d"
	Nov 02 14:11:09 old-k8s-version-873713 kubelet[808]: I1102 14:11:09.344307     808 scope.go:117] "RemoveContainer" containerID="1fa080268a3b1c567ca1e56bc4fdc6320572ba34379312b8723d90bd5921fc0e"
	Nov 02 14:11:13 old-k8s-version-873713 kubelet[808]: I1102 14:11:13.092266     808 scope.go:117] "RemoveContainer" containerID="1f8025db902c9f740662dbe0ba7159079793a6df6b55ec500a4119a1d3ffcec1"
	Nov 02 14:11:13 old-k8s-version-873713 kubelet[808]: E1102 14:11:13.093079     808 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-q5mbz_kubernetes-dashboard(55262db6-6c5b-467f-b999-b64511792d4d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-q5mbz" podUID="55262db6-6c5b-467f-b999-b64511792d4d"
	Nov 02 14:11:13 old-k8s-version-873713 kubelet[808]: I1102 14:11:13.109785     808 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-7nd7h" podStartSLOduration=2.383890499 podCreationTimestamp="2025-11-02 14:11:02 +0000 UTC" firstStartedPulling="2025-11-02 14:11:03.143717523 +0000 UTC m=+19.216347886" lastFinishedPulling="2025-11-02 14:11:11.866249977 +0000 UTC m=+27.938880340" observedRunningTime="2025-11-02 14:11:12.367387172 +0000 UTC m=+28.440017526" watchObservedRunningTime="2025-11-02 14:11:13.106422953 +0000 UTC m=+29.179053316"
	Nov 02 14:11:22 old-k8s-version-873713 kubelet[808]: I1102 14:11:22.379136     808 scope.go:117] "RemoveContainer" containerID="05ea50ffb17a2c17ae3167c803ef030a793314febf71bcbb6e67e304616db8d1"
	Nov 02 14:11:26 old-k8s-version-873713 kubelet[808]: I1102 14:11:26.207858     808 scope.go:117] "RemoveContainer" containerID="1f8025db902c9f740662dbe0ba7159079793a6df6b55ec500a4119a1d3ffcec1"
	Nov 02 14:11:26 old-k8s-version-873713 kubelet[808]: I1102 14:11:26.392504     808 scope.go:117] "RemoveContainer" containerID="1f8025db902c9f740662dbe0ba7159079793a6df6b55ec500a4119a1d3ffcec1"
	Nov 02 14:11:26 old-k8s-version-873713 kubelet[808]: I1102 14:11:26.392719     808 scope.go:117] "RemoveContainer" containerID="0715d806febbf6bc068ca3b10c928c40a83c034a3efe46db9df4e13ebd053192"
	Nov 02 14:11:26 old-k8s-version-873713 kubelet[808]: E1102 14:11:26.393051     808 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-q5mbz_kubernetes-dashboard(55262db6-6c5b-467f-b999-b64511792d4d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-q5mbz" podUID="55262db6-6c5b-467f-b999-b64511792d4d"
	Nov 02 14:11:33 old-k8s-version-873713 kubelet[808]: I1102 14:11:33.092486     808 scope.go:117] "RemoveContainer" containerID="0715d806febbf6bc068ca3b10c928c40a83c034a3efe46db9df4e13ebd053192"
	Nov 02 14:11:33 old-k8s-version-873713 kubelet[808]: E1102 14:11:33.093309     808 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-q5mbz_kubernetes-dashboard(55262db6-6c5b-467f-b999-b64511792d4d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-q5mbz" podUID="55262db6-6c5b-467f-b999-b64511792d4d"
	Nov 02 14:11:41 old-k8s-version-873713 kubelet[808]: I1102 14:11:41.179100     808 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 02 14:11:41 old-k8s-version-873713 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 02 14:11:41 old-k8s-version-873713 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 02 14:11:41 old-k8s-version-873713 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [4e56dc5991fa15eba86cd1282a7f01dec8198efa8716d7d7b2cdc4ef02f81353] <==
	2025/11/02 14:11:11 Starting overwatch
	2025/11/02 14:11:11 Using namespace: kubernetes-dashboard
	2025/11/02 14:11:11 Using in-cluster config to connect to apiserver
	2025/11/02 14:11:11 Using secret token for csrf signing
	2025/11/02 14:11:11 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/02 14:11:11 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/02 14:11:11 Successful initial request to the apiserver, version: v1.28.0
	2025/11/02 14:11:11 Generating JWE encryption key
	2025/11/02 14:11:11 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/02 14:11:11 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/02 14:11:12 Initializing JWE encryption key from synchronized object
	2025/11/02 14:11:12 Creating in-cluster Sidecar client
	2025/11/02 14:11:12 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/02 14:11:12 Serving insecurely on HTTP port: 9090
	2025/11/02 14:11:42 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [05ea50ffb17a2c17ae3167c803ef030a793314febf71bcbb6e67e304616db8d1] <==
	I1102 14:10:51.712009       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1102 14:11:21.714295       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [6cf1293ad607139235172951e0f6dd2839ab7dfdb284bac7be5c819c3ae0637b] <==
	I1102 14:11:22.430524       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1102 14:11:22.446879       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1102 14:11:22.447013       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1102 14:11:39.845192       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1102 14:11:39.845356       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-873713_3c8ee00c-08cb-44df-9291-4bdbe3eba8a9!
	I1102 14:11:39.852015       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"bed908b3-2f1a-4e3a-8d32-4d7ab52fe965", APIVersion:"v1", ResourceVersion:"635", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-873713_3c8ee00c-08cb-44df-9291-4bdbe3eba8a9 became leader
	I1102 14:11:39.946020       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-873713_3c8ee00c-08cb-44df-9291-4bdbe3eba8a9!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-873713 -n old-k8s-version-873713
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-873713 -n old-k8s-version-873713: exit status 2 (408.458907ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-873713 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-873713
helpers_test.go:243: (dbg) docker inspect old-k8s-version-873713:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4ee7404b4a6ac58ace4f7c3005215815a212bd879029eed27a35560a4c57fe56",
	        "Created": "2025-11-02T14:09:16.892897675Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 475055,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-02T14:10:36.422155539Z",
	            "FinishedAt": "2025-11-02T14:10:35.574926166Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/4ee7404b4a6ac58ace4f7c3005215815a212bd879029eed27a35560a4c57fe56/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4ee7404b4a6ac58ace4f7c3005215815a212bd879029eed27a35560a4c57fe56/hostname",
	        "HostsPath": "/var/lib/docker/containers/4ee7404b4a6ac58ace4f7c3005215815a212bd879029eed27a35560a4c57fe56/hosts",
	        "LogPath": "/var/lib/docker/containers/4ee7404b4a6ac58ace4f7c3005215815a212bd879029eed27a35560a4c57fe56/4ee7404b4a6ac58ace4f7c3005215815a212bd879029eed27a35560a4c57fe56-json.log",
	        "Name": "/old-k8s-version-873713",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-873713:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-873713",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4ee7404b4a6ac58ace4f7c3005215815a212bd879029eed27a35560a4c57fe56",
	                "LowerDir": "/var/lib/docker/overlay2/02fe55438eff7f4b248d251e0fd41254d206cd6322b4309218b237305b27175b-init/diff:/var/lib/docker/overlay2/53058afe6acd7391f639f551e07f446f6a39a991b699aea18507cb21a1652e97/diff",
	                "MergedDir": "/var/lib/docker/overlay2/02fe55438eff7f4b248d251e0fd41254d206cd6322b4309218b237305b27175b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/02fe55438eff7f4b248d251e0fd41254d206cd6322b4309218b237305b27175b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/02fe55438eff7f4b248d251e0fd41254d206cd6322b4309218b237305b27175b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-873713",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-873713/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-873713",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-873713",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-873713",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3e4e1d32b42ecf0f5521ef60ba7e5a1f4aef5d6aa9f6642bc0d3fc476421e0bd",
	            "SandboxKey": "/var/run/docker/netns/3e4e1d32b42e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33426"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33427"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33430"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33428"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33429"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-873713": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2e:2c:cc:f1:06:0c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "174273846e47bcb425298d38a31d82d3ed621bb4662ffd28cfa6393ea0333640",
	                    "EndpointID": "3160745528b011c6615aaab139e2d0ef3c8631063ef0a430a354b21021ff72a7",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-873713",
	                        "4ee7404b4a6a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
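For reference: individual fields in the inspect dump above can be read back with a Go template rather than scanning the full JSON. A minimal sketch, using the same `docker container inspect -f` form that appears in the provisioner log further down, against this report's profile container (illustrative only, not part of the harness output):

	docker container inspect \
	  -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
	  old-k8s-version-873713
	# per the Ports block above, this prints the SSH host port: 33426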
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-873713 -n old-k8s-version-873713
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-873713 -n old-k8s-version-873713: exit status 2 (528.202809ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
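A minimal sketch of re-running this check by hand, assuming the same binary and profile as above (minikube status encodes component state in its non-zero exit codes, which is why the harness treats exit status 2 as possibly ok):

	out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-873713 -n old-k8s-version-873713
	echo "status exit code: $?"   # 2 in this run, while stdout still printed "Running"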
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-873713 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-873713 logs -n 25: (1.625267121s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-143736 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-143736            │ jenkins │ v1.37.0 │ 02 Nov 25 14:07 UTC │                     │
	│ ssh     │ -p cilium-143736 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-143736            │ jenkins │ v1.37.0 │ 02 Nov 25 14:07 UTC │                     │
	│ ssh     │ -p cilium-143736 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-143736            │ jenkins │ v1.37.0 │ 02 Nov 25 14:07 UTC │                     │
	│ ssh     │ -p cilium-143736 sudo containerd config dump                                                                                                                                                                                                  │ cilium-143736            │ jenkins │ v1.37.0 │ 02 Nov 25 14:07 UTC │                     │
	│ ssh     │ -p cilium-143736 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-143736            │ jenkins │ v1.37.0 │ 02 Nov 25 14:07 UTC │                     │
	│ ssh     │ -p cilium-143736 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-143736            │ jenkins │ v1.37.0 │ 02 Nov 25 14:07 UTC │                     │
	│ ssh     │ -p cilium-143736 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-143736            │ jenkins │ v1.37.0 │ 02 Nov 25 14:07 UTC │                     │
	│ ssh     │ -p cilium-143736 sudo crio config                                                                                                                                                                                                             │ cilium-143736            │ jenkins │ v1.37.0 │ 02 Nov 25 14:07 UTC │                     │
	│ delete  │ -p cilium-143736                                                                                                                                                                                                                              │ cilium-143736            │ jenkins │ v1.37.0 │ 02 Nov 25 14:07 UTC │ 02 Nov 25 14:07 UTC │
	│ start   │ -p force-systemd-env-263133 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-263133 │ jenkins │ v1.37.0 │ 02 Nov 25 14:07 UTC │ 02 Nov 25 14:08 UTC │
	│ delete  │ -p pause-061518                                                                                                                                                                                                                               │ pause-061518             │ jenkins │ v1.37.0 │ 02 Nov 25 14:07 UTC │ 02 Nov 25 14:07 UTC │
	│ start   │ -p cert-expiration-114321 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-114321   │ jenkins │ v1.37.0 │ 02 Nov 25 14:07 UTC │ 02 Nov 25 14:08 UTC │
	│ delete  │ -p force-systemd-env-263133                                                                                                                                                                                                                   │ force-systemd-env-263133 │ jenkins │ v1.37.0 │ 02 Nov 25 14:08 UTC │ 02 Nov 25 14:08 UTC │
	│ start   │ -p cert-options-935084 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-935084      │ jenkins │ v1.37.0 │ 02 Nov 25 14:08 UTC │ 02 Nov 25 14:09 UTC │
	│ ssh     │ cert-options-935084 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-935084      │ jenkins │ v1.37.0 │ 02 Nov 25 14:09 UTC │ 02 Nov 25 14:09 UTC │
	│ ssh     │ -p cert-options-935084 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-935084      │ jenkins │ v1.37.0 │ 02 Nov 25 14:09 UTC │ 02 Nov 25 14:09 UTC │
	│ delete  │ -p cert-options-935084                                                                                                                                                                                                                        │ cert-options-935084      │ jenkins │ v1.37.0 │ 02 Nov 25 14:09 UTC │ 02 Nov 25 14:09 UTC │
	│ start   │ -p old-k8s-version-873713 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-873713   │ jenkins │ v1.37.0 │ 02 Nov 25 14:09 UTC │ 02 Nov 25 14:10 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-873713 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-873713   │ jenkins │ v1.37.0 │ 02 Nov 25 14:10 UTC │                     │
	│ stop    │ -p old-k8s-version-873713 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-873713   │ jenkins │ v1.37.0 │ 02 Nov 25 14:10 UTC │ 02 Nov 25 14:10 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-873713 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-873713   │ jenkins │ v1.37.0 │ 02 Nov 25 14:10 UTC │ 02 Nov 25 14:10 UTC │
	│ start   │ -p old-k8s-version-873713 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-873713   │ jenkins │ v1.37.0 │ 02 Nov 25 14:10 UTC │ 02 Nov 25 14:11 UTC │
	│ start   │ -p cert-expiration-114321 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-114321   │ jenkins │ v1.37.0 │ 02 Nov 25 14:11 UTC │                     │
	│ image   │ old-k8s-version-873713 image list --format=json                                                                                                                                                                                               │ old-k8s-version-873713   │ jenkins │ v1.37.0 │ 02 Nov 25 14:11 UTC │ 02 Nov 25 14:11 UTC │
	│ pause   │ -p old-k8s-version-873713 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-873713   │ jenkins │ v1.37.0 │ 02 Nov 25 14:11 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/02 14:11:36
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1102 14:11:36.882087  477134 out.go:360] Setting OutFile to fd 1 ...
	I1102 14:11:36.882177  477134 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 14:11:36.882181  477134 out.go:374] Setting ErrFile to fd 2...
	I1102 14:11:36.882184  477134 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 14:11:36.882550  477134 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-293314/.minikube/bin
	I1102 14:11:36.883026  477134 out.go:368] Setting JSON to false
	I1102 14:11:36.884196  477134 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10449,"bootTime":1762082248,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1102 14:11:36.884255  477134 start.go:143] virtualization:  
	I1102 14:11:36.892377  477134 out.go:179] * [cert-expiration-114321] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1102 14:11:36.899059  477134 out.go:179]   - MINIKUBE_LOCATION=21808
	I1102 14:11:36.899244  477134 notify.go:221] Checking for updates...
	I1102 14:11:36.906600  477134 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1102 14:11:36.910061  477134 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-293314/kubeconfig
	I1102 14:11:36.913471  477134 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-293314/.minikube
	I1102 14:11:36.917093  477134 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1102 14:11:36.920235  477134 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1102 14:11:36.923986  477134 config.go:182] Loaded profile config "cert-expiration-114321": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 14:11:36.924533  477134 driver.go:422] Setting default libvirt URI to qemu:///system
	I1102 14:11:36.956312  477134 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1102 14:11:36.956414  477134 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 14:11:37.027127  477134 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-02 14:11:37.015128137 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1102 14:11:37.027263  477134 docker.go:319] overlay module found
	I1102 14:11:37.031137  477134 out.go:179] * Using the docker driver based on existing profile
	I1102 14:11:37.034244  477134 start.go:309] selected driver: docker
	I1102 14:11:37.034257  477134 start.go:930] validating driver "docker" against &{Name:cert-expiration-114321 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-114321 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1102 14:11:37.034355  477134 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1102 14:11:37.035245  477134 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 14:11:37.115575  477134 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-02 14:11:37.105624 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1102 14:11:37.115881  477134 cni.go:84] Creating CNI manager for ""
	I1102 14:11:37.115945  477134 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1102 14:11:37.115987  477134 start.go:353] cluster config:
	{Name:cert-expiration-114321 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:cert-expiration-114321 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:8760h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1102 14:11:37.119456  477134 out.go:179] * Starting "cert-expiration-114321" primary control-plane node in "cert-expiration-114321" cluster
	I1102 14:11:37.122408  477134 cache.go:124] Beginning downloading kic base image for docker with crio
	I1102 14:11:37.125390  477134 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1102 14:11:37.128318  477134 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1102 14:11:37.128401  477134 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21808-293314/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1102 14:11:37.128410  477134 cache.go:59] Caching tarball of preloaded images
	I1102 14:11:37.128515  477134 preload.go:233] Found /home/jenkins/minikube-integration/21808-293314/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1102 14:11:37.128524  477134 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1102 14:11:37.128634  477134 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/cert-expiration-114321/config.json ...
	I1102 14:11:37.128897  477134 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1102 14:11:37.149625  477134 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1102 14:11:37.149637  477134 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1102 14:11:37.149664  477134 cache.go:233] Successfully downloaded all kic artifacts
	I1102 14:11:37.149694  477134 start.go:360] acquireMachinesLock for cert-expiration-114321: {Name:mk8d21a5e709ca899ae97668803a3f23795bfc71 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1102 14:11:37.149759  477134 start.go:364] duration metric: took 49.411µs to acquireMachinesLock for "cert-expiration-114321"
	I1102 14:11:37.149787  477134 start.go:96] Skipping create...Using existing machine configuration
	I1102 14:11:37.149792  477134 fix.go:54] fixHost starting: 
	I1102 14:11:37.150049  477134 cli_runner.go:164] Run: docker container inspect cert-expiration-114321 --format={{.State.Status}}
	I1102 14:11:37.168663  477134 fix.go:112] recreateIfNeeded on cert-expiration-114321: state=Running err=<nil>
	W1102 14:11:37.168684  477134 fix.go:138] unexpected machine state, will restart: <nil>
	I1102 14:11:37.171831  477134 out.go:252] * Updating the running docker "cert-expiration-114321" container ...
	I1102 14:11:37.171860  477134 machine.go:94] provisionDockerMachine start ...
	I1102 14:11:37.171948  477134 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-114321
	I1102 14:11:37.189065  477134 main.go:143] libmachine: Using SSH client type: native
	I1102 14:11:37.189395  477134 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33409 <nil> <nil>}
	I1102 14:11:37.189402  477134 main.go:143] libmachine: About to run SSH command:
	hostname
	I1102 14:11:37.342812  477134 main.go:143] libmachine: SSH cmd err, output: <nil>: cert-expiration-114321
	
	I1102 14:11:37.342826  477134 ubuntu.go:182] provisioning hostname "cert-expiration-114321"
	I1102 14:11:37.342906  477134 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-114321
	I1102 14:11:37.361507  477134 main.go:143] libmachine: Using SSH client type: native
	I1102 14:11:37.361839  477134 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33409 <nil> <nil>}
	I1102 14:11:37.361849  477134 main.go:143] libmachine: About to run SSH command:
	sudo hostname cert-expiration-114321 && echo "cert-expiration-114321" | sudo tee /etc/hostname
	I1102 14:11:37.529777  477134 main.go:143] libmachine: SSH cmd err, output: <nil>: cert-expiration-114321
	
	I1102 14:11:37.529844  477134 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-114321
	I1102 14:11:37.547921  477134 main.go:143] libmachine: Using SSH client type: native
	I1102 14:11:37.548248  477134 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33409 <nil> <nil>}
	I1102 14:11:37.548266  477134 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-114321' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-114321/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-114321' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1102 14:11:37.699150  477134 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1102 14:11:37.699166  477134 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21808-293314/.minikube CaCertPath:/home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21808-293314/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21808-293314/.minikube}
	I1102 14:11:37.699184  477134 ubuntu.go:190] setting up certificates
	I1102 14:11:37.699192  477134 provision.go:84] configureAuth start
	I1102 14:11:37.699265  477134 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-114321
	I1102 14:11:37.717209  477134 provision.go:143] copyHostCerts
	I1102 14:11:37.717268  477134 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-293314/.minikube/ca.pem, removing ...
	I1102 14:11:37.717276  477134 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-293314/.minikube/ca.pem
	I1102 14:11:37.717356  477134 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21808-293314/.minikube/ca.pem (1082 bytes)
	I1102 14:11:37.717451  477134 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-293314/.minikube/cert.pem, removing ...
	I1102 14:11:37.717455  477134 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-293314/.minikube/cert.pem
	I1102 14:11:37.717495  477134 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21808-293314/.minikube/cert.pem (1123 bytes)
	I1102 14:11:37.717551  477134 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-293314/.minikube/key.pem, removing ...
	I1102 14:11:37.717554  477134 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-293314/.minikube/key.pem
	I1102 14:11:37.717578  477134 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21808-293314/.minikube/key.pem (1675 bytes)
	I1102 14:11:37.717623  477134 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21808-293314/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca-key.pem org=jenkins.cert-expiration-114321 san=[127.0.0.1 192.168.85.2 cert-expiration-114321 localhost minikube]
	I1102 14:11:39.000573  477134 provision.go:177] copyRemoteCerts
	I1102 14:11:39.000626  477134 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1102 14:11:39.000674  477134 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-114321
	I1102 14:11:39.019867  477134 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33409 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/cert-expiration-114321/id_rsa Username:docker}
	I1102 14:11:39.130725  477134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1102 14:11:39.151082  477134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1102 14:11:39.169387  477134 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1102 14:11:39.188777  477134 provision.go:87] duration metric: took 1.489560559s to configureAuth
	I1102 14:11:39.188795  477134 ubuntu.go:206] setting minikube options for container-runtime
	I1102 14:11:39.189000  477134 config.go:182] Loaded profile config "cert-expiration-114321": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 14:11:39.189119  477134 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-114321
	I1102 14:11:39.207692  477134 main.go:143] libmachine: Using SSH client type: native
	I1102 14:11:39.208025  477134 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33409 <nil> <nil>}
	I1102 14:11:39.208037  477134 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	
	
	==> CRI-O <==
	Nov 02 14:11:26 old-k8s-version-873713 crio[682]: time="2025-11-02T14:11:26.209060529Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=c08152c1-c4a7-4ea9-bf67-9400588090f6 name=/runtime.v1.ImageService/ImageStatus
	Nov 02 14:11:26 old-k8s-version-873713 crio[682]: time="2025-11-02T14:11:26.210309145Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=9a18fe9d-eb53-4717-9594-0ec1c341a15c name=/runtime.v1.ImageService/ImageStatus
	Nov 02 14:11:26 old-k8s-version-873713 crio[682]: time="2025-11-02T14:11:26.211551968Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-q5mbz/dashboard-metrics-scraper" id=06c2682e-18c9-48fa-befc-cd54ae294fed name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 14:11:26 old-k8s-version-873713 crio[682]: time="2025-11-02T14:11:26.211646951Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 14:11:26 old-k8s-version-873713 crio[682]: time="2025-11-02T14:11:26.218423309Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 14:11:26 old-k8s-version-873713 crio[682]: time="2025-11-02T14:11:26.218992592Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 14:11:26 old-k8s-version-873713 crio[682]: time="2025-11-02T14:11:26.233662739Z" level=info msg="Created container 0715d806febbf6bc068ca3b10c928c40a83c034a3efe46db9df4e13ebd053192: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-q5mbz/dashboard-metrics-scraper" id=06c2682e-18c9-48fa-befc-cd54ae294fed name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 14:11:26 old-k8s-version-873713 crio[682]: time="2025-11-02T14:11:26.234943937Z" level=info msg="Starting container: 0715d806febbf6bc068ca3b10c928c40a83c034a3efe46db9df4e13ebd053192" id=4399767f-853e-40b6-835c-335bc99293d8 name=/runtime.v1.RuntimeService/StartContainer
	Nov 02 14:11:26 old-k8s-version-873713 crio[682]: time="2025-11-02T14:11:26.236744507Z" level=info msg="Started container" PID=1677 containerID=0715d806febbf6bc068ca3b10c928c40a83c034a3efe46db9df4e13ebd053192 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-q5mbz/dashboard-metrics-scraper id=4399767f-853e-40b6-835c-335bc99293d8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=9a0de88277eb9b4c9ccfaebce0a6fa44b6032df83f3fb80dda4f2ff6903044d2
	Nov 02 14:11:26 old-k8s-version-873713 conmon[1675]: conmon 0715d806febbf6bc068c <ninfo>: container 1677 exited with status 1
	Nov 02 14:11:26 old-k8s-version-873713 crio[682]: time="2025-11-02T14:11:26.396448593Z" level=info msg="Removing container: 1f8025db902c9f740662dbe0ba7159079793a6df6b55ec500a4119a1d3ffcec1" id=f6f2a646-9377-4a62-bb10-63bfe0684a09 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 02 14:11:26 old-k8s-version-873713 crio[682]: time="2025-11-02T14:11:26.408960744Z" level=info msg="Error loading conmon cgroup of container 1f8025db902c9f740662dbe0ba7159079793a6df6b55ec500a4119a1d3ffcec1: cgroup deleted" id=f6f2a646-9377-4a62-bb10-63bfe0684a09 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 02 14:11:26 old-k8s-version-873713 crio[682]: time="2025-11-02T14:11:26.412248085Z" level=info msg="Removed container 1f8025db902c9f740662dbe0ba7159079793a6df6b55ec500a4119a1d3ffcec1: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-q5mbz/dashboard-metrics-scraper" id=f6f2a646-9377-4a62-bb10-63bfe0684a09 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 02 14:11:31 old-k8s-version-873713 crio[682]: time="2025-11-02T14:11:31.916757367Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 02 14:11:31 old-k8s-version-873713 crio[682]: time="2025-11-02T14:11:31.92420797Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 02 14:11:31 old-k8s-version-873713 crio[682]: time="2025-11-02T14:11:31.924249537Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 02 14:11:31 old-k8s-version-873713 crio[682]: time="2025-11-02T14:11:31.924273734Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 02 14:11:31 old-k8s-version-873713 crio[682]: time="2025-11-02T14:11:31.927697832Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 02 14:11:31 old-k8s-version-873713 crio[682]: time="2025-11-02T14:11:31.927729454Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 02 14:11:31 old-k8s-version-873713 crio[682]: time="2025-11-02T14:11:31.927756187Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 02 14:11:31 old-k8s-version-873713 crio[682]: time="2025-11-02T14:11:31.931188071Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 02 14:11:31 old-k8s-version-873713 crio[682]: time="2025-11-02T14:11:31.931269671Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 02 14:11:31 old-k8s-version-873713 crio[682]: time="2025-11-02T14:11:31.931294952Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 02 14:11:31 old-k8s-version-873713 crio[682]: time="2025-11-02T14:11:31.935070542Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 02 14:11:31 old-k8s-version-873713 crio[682]: time="2025-11-02T14:11:31.935105652Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	0715d806febbf       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           19 seconds ago       Exited              dashboard-metrics-scraper   2                   9a0de88277eb9       dashboard-metrics-scraper-5f989dc9cf-q5mbz       kubernetes-dashboard
	6cf1293ad6071       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           23 seconds ago       Running             storage-provisioner         2                   add8af5045fc6       storage-provisioner                              kube-system
	4e56dc5991fa1       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   34 seconds ago       Running             kubernetes-dashboard        0                   bee2b678c62e3       kubernetes-dashboard-8694d4445c-7nd7h            kubernetes-dashboard
	6ce84e7ad4fc6       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           54 seconds ago       Running             busybox                     1                   bfe30e9786dc5       busybox                                          default
	0568c5f06769d       97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108                                           54 seconds ago       Running             coredns                     1                   09dda0eb13480       coredns-5dd5756b68-hjsnd                         kube-system
	e206f6558a61d       940f54a5bcae9dd4c97844fa36d12cc5d9078cffd5e677ad0df1528c12f3240d                                           54 seconds ago       Running             kube-proxy                  1                   b60a59c67e592       kube-proxy-ppcp5                                 kube-system
	05ea50ffb17a2       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           54 seconds ago       Exited              storage-provisioner         1                   add8af5045fc6       storage-provisioner                              kube-system
	b4d30d38fb366       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           54 seconds ago       Running             kindnet-cni                 1                   295e37993186c       kindnet-d876b                                    kube-system
	e3009eb696e3a       9cdd6470f48c8b127530b7ce6ea4b3524137984481e48bcde619735890840ace                                           About a minute ago   Running             etcd                        1                   f8eec2dda1024       etcd-old-k8s-version-873713                      kube-system
	6e8cd97cdf9ad       00543d2fe5d71095984891a0609ee504b81f9d72a69a0ad02039d4e135213766                                           About a minute ago   Running             kube-apiserver              1                   a2b9a6245d5f1       kube-apiserver-old-k8s-version-873713            kube-system
	7cbe729897f55       762dce4090c5f4789bb5dbb933d5b50bc1a2357d7739bbce30d949820e5a38ee                                           About a minute ago   Running             kube-scheduler              1                   cb24b6567f7d4       kube-scheduler-old-k8s-version-873713            kube-system
	318072e43c5b6       46cc66ccc7c19b4b30625b0aa4e178792add2385659205d7c6fcbd05d78c23e5                                           About a minute ago   Running             kube-controller-manager     1                   bc3d19fda648a       kube-controller-manager-old-k8s-version-873713   kube-system
	
	
	==> coredns [0568c5f06769d347955fc5d3451f1eac160d343349b9500cbd4c89a18f6916ca] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:55161 - 65431 "HINFO IN 8995529865261232737.7691018843427380614. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.004797995s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               old-k8s-version-873713
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-873713
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=170a9221ec214abbddb4c7cdac340516a92b239a
	                    minikube.k8s.io/name=old-k8s-version-873713
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_02T14_09_43_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 02 Nov 2025 14:09:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-873713
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 02 Nov 2025 14:11:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 02 Nov 2025 14:11:21 +0000   Sun, 02 Nov 2025 14:09:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 02 Nov 2025 14:11:21 +0000   Sun, 02 Nov 2025 14:09:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 02 Nov 2025 14:11:21 +0000   Sun, 02 Nov 2025 14:09:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 02 Nov 2025 14:11:21 +0000   Sun, 02 Nov 2025 14:10:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-873713
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                316cad18-282c-471f-8314-7b2e61711c14
	  Boot ID:                    4c303cf4-c0b6-4c0d-aa1a-d7509feb15e7
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 coredns-5dd5756b68-hjsnd                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     111s
	  kube-system                 etcd-old-k8s-version-873713                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m6s
	  kube-system                 kindnet-d876b                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      111s
	  kube-system                 kube-apiserver-old-k8s-version-873713             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 kube-controller-manager-old-k8s-version-873713    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 kube-proxy-ppcp5                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-scheduler-old-k8s-version-873713             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-q5mbz        0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-7nd7h             0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 109s               kube-proxy       
	  Normal  Starting                 54s                kube-proxy       
	  Normal  NodeHasSufficientMemory  2m4s               kubelet          Node old-k8s-version-873713 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m4s               kubelet          Node old-k8s-version-873713 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m4s               kubelet          Node old-k8s-version-873713 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m4s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           112s               node-controller  Node old-k8s-version-873713 event: Registered Node old-k8s-version-873713 in Controller
	  Normal  NodeReady                97s                kubelet          Node old-k8s-version-873713 status is now: NodeReady
	  Normal  Starting                 62s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  62s (x8 over 62s)  kubelet          Node old-k8s-version-873713 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    62s (x8 over 62s)  kubelet          Node old-k8s-version-873713 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     62s (x8 over 62s)  kubelet          Node old-k8s-version-873713 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           44s                node-controller  Node old-k8s-version-873713 event: Registered Node old-k8s-version-873713 in Controller
	
	
	==> dmesg <==
	[Nov 2 13:49] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:50] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:51] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:52] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:54] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:55] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:56] overlayfs: idmapped layers are currently not supported
	[  +3.515963] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:57] overlayfs: idmapped layers are currently not supported
	[ +24.836033] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:58] overlayfs: idmapped layers are currently not supported
	[ +23.362553] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:59] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:01] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:02] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:03] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:04] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:06] overlayfs: idmapped layers are currently not supported
	[ +50.469589] kauditd_printk_skb: 8 callbacks suppressed
	[Nov 2 14:07] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:08] overlayfs: idmapped layers are currently not supported
	[ +11.089512] overlayfs: idmapped layers are currently not supported
	[ +33.821233] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:09] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:10] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [e3009eb696e3a3f606e23a07d321eac52a28e08956e2979dcb8e66d449bc6d55] <==
	{"level":"info","ts":"2025-11-02T14:10:45.471308Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-02T14:10:45.471358Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-02T14:10:45.471607Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-11-02T14:10:45.471699Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-11-02T14:10:45.471856Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-02T14:10:45.471917Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-02T14:10:45.478366Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-02T14:10:45.480872Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-02T14:10:45.482135Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-02T14:10:45.481314Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-02T14:10:45.48246Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-02T14:10:46.520717Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-11-02T14:10:46.52084Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-11-02T14:10:46.520895Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-11-02T14:10:46.52095Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-11-02T14:10:46.520981Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-11-02T14:10:46.521019Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-11-02T14:10:46.521052Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-11-02T14:10:46.524444Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-873713 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-02T14:10:46.524526Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-02T14:10:46.525496Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-02T14:10:46.525742Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-02T14:10:46.526577Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-11-02T14:10:46.527061Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-02T14:10:46.527111Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 14:11:46 up  2:54,  0 user,  load average: 2.04, 3.20, 2.77
	Linux old-k8s-version-873713 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b4d30d38fb366868b37ff296a262960cad053ff1896f9efd34edc961fdef64cc] <==
	I1102 14:10:51.631939       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1102 14:10:51.710859       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1102 14:10:51.711042       1 main.go:148] setting mtu 1500 for CNI 
	I1102 14:10:51.711056       1 main.go:178] kindnetd IP family: "ipv4"
	I1102 14:10:51.711069       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-02T14:10:51Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1102 14:10:51.912949       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1102 14:10:51.913032       1 controller.go:381] "Waiting for informer caches to sync"
	I1102 14:10:51.913079       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1102 14:10:51.913805       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1102 14:11:21.917523       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1102 14:11:21.917667       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1102 14:11:21.917783       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1102 14:11:21.917941       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1102 14:11:23.514091       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1102 14:11:23.514126       1 metrics.go:72] Registering metrics
	I1102 14:11:23.514207       1 controller.go:711] "Syncing nftables rules"
	I1102 14:11:31.916376       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1102 14:11:31.916466       1 main.go:301] handling current node
	I1102 14:11:41.919923       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1102 14:11:41.919953       1 main.go:301] handling current node
	
	
	==> kube-apiserver [6e8cd97cdf9ad0306b693f6b2aff921d70b038a810142c363024afddee477af3] <==
	I1102 14:10:50.211194       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1102 14:10:50.227648       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1102 14:10:50.229551       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1102 14:10:50.229574       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1102 14:10:50.229764       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1102 14:10:50.236208       1 shared_informer.go:318] Caches are synced for configmaps
	I1102 14:10:50.236275       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1102 14:10:50.242172       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1102 14:10:50.248928       1 aggregator.go:166] initial CRD sync complete...
	I1102 14:10:50.248958       1 autoregister_controller.go:141] Starting autoregister controller
	I1102 14:10:50.248974       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1102 14:10:50.248981       1 cache.go:39] Caches are synced for autoregister controller
	I1102 14:10:50.272375       1 shared_informer.go:318] Caches are synced for node_authorizer
	E1102 14:10:50.367521       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1102 14:10:50.931654       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1102 14:10:52.030361       1 controller.go:624] quota admission added evaluator for: namespaces
	I1102 14:10:52.085213       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1102 14:10:52.117206       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1102 14:10:52.130398       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1102 14:10:52.140994       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1102 14:10:52.242651       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.110.231.184"}
	I1102 14:10:52.313802       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.154.18"}
	I1102 14:11:02.626825       1 controller.go:624] quota admission added evaluator for: endpoints
	I1102 14:11:02.656827       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1102 14:11:02.743044       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [318072e43c5b6485705d146666c86c912d3dc14570e9880f1b2c467090c6391b] <==
	I1102 14:11:02.752771       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8694d4445c to 1"
	I1102 14:11:02.757272       1 shared_informer.go:318] Caches are synced for service account
	I1102 14:11:02.769935       1 shared_informer.go:318] Caches are synced for resource quota
	I1102 14:11:02.770503       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-q5mbz"
	I1102 14:11:02.779838       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-7nd7h"
	I1102 14:11:02.791610       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="44.921206ms"
	I1102 14:11:02.795258       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="42.687329ms"
	I1102 14:11:02.814676       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="22.211351ms"
	I1102 14:11:02.817208       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="69.76µs"
	I1102 14:11:02.824483       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="100.374µs"
	I1102 14:11:02.843707       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="48.237773ms"
	I1102 14:11:02.843810       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="65.855µs"
	I1102 14:11:02.848851       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="99.521µs"
	I1102 14:11:03.182054       1 shared_informer.go:318] Caches are synced for garbage collector
	I1102 14:11:03.224609       1 shared_informer.go:318] Caches are synced for garbage collector
	I1102 14:11:03.224650       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1102 14:11:08.351733       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="78.54µs"
	I1102 14:11:09.366344       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="53.802µs"
	I1102 14:11:12.383223       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="16.931967ms"
	I1102 14:11:12.383676       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="74.552µs"
	I1102 14:11:13.107188       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="98.085µs"
	I1102 14:11:26.413589       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="54.819µs"
	I1102 14:11:26.967204       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="13.119965ms"
	I1102 14:11:26.967388       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="52.226µs"
	I1102 14:11:33.118098       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="55.968µs"
	
	
	==> kube-proxy [e206f6558a61d96d9da578d9c25b7fe05996f143a18ece7b8dadbb7f9822b039] <==
	I1102 14:10:51.805627       1 server_others.go:69] "Using iptables proxy"
	I1102 14:10:51.834857       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1102 14:10:51.874909       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1102 14:10:51.876587       1 server_others.go:152] "Using iptables Proxier"
	I1102 14:10:51.876618       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1102 14:10:51.876632       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1102 14:10:51.876659       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1102 14:10:51.876883       1 server.go:846] "Version info" version="v1.28.0"
	I1102 14:10:51.876892       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1102 14:10:51.880888       1 config.go:188] "Starting service config controller"
	I1102 14:10:51.880904       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1102 14:10:51.880921       1 config.go:97] "Starting endpoint slice config controller"
	I1102 14:10:51.880925       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1102 14:10:51.881286       1 config.go:315] "Starting node config controller"
	I1102 14:10:51.881292       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1102 14:10:51.981038       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1102 14:10:51.981056       1 shared_informer.go:318] Caches are synced for service config
	I1102 14:10:51.981367       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [7cbe729897f5594064f2041cb37d3a309e248b10de54a0077bb7ab8a192cbf98] <==
	I1102 14:10:48.050634       1 serving.go:348] Generated self-signed cert in-memory
	W1102 14:10:50.242710       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1102 14:10:50.242785       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1102 14:10:50.242819       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1102 14:10:50.242849       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1102 14:10:50.305196       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1102 14:10:50.305353       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1102 14:10:50.306969       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1102 14:10:50.307065       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1102 14:10:50.308376       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1102 14:10:50.308464       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1102 14:10:50.408454       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 02 14:11:02 old-k8s-version-873713 kubelet[808]: I1102 14:11:02.796350     808 topology_manager.go:215] "Topology Admit Handler" podUID="176cf84b-bc2d-4f64-9bd0-b6375d4daaa5" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-7nd7h"
	Nov 02 14:11:02 old-k8s-version-873713 kubelet[808]: I1102 14:11:02.949240     808 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mp8pd\" (UniqueName: \"kubernetes.io/projected/55262db6-6c5b-467f-b999-b64511792d4d-kube-api-access-mp8pd\") pod \"dashboard-metrics-scraper-5f989dc9cf-q5mbz\" (UID: \"55262db6-6c5b-467f-b999-b64511792d4d\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-q5mbz"
	Nov 02 14:11:02 old-k8s-version-873713 kubelet[808]: I1102 14:11:02.949304     808 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/176cf84b-bc2d-4f64-9bd0-b6375d4daaa5-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-7nd7h\" (UID: \"176cf84b-bc2d-4f64-9bd0-b6375d4daaa5\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-7nd7h"
	Nov 02 14:11:02 old-k8s-version-873713 kubelet[808]: I1102 14:11:02.949336     808 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w25cd\" (UniqueName: \"kubernetes.io/projected/176cf84b-bc2d-4f64-9bd0-b6375d4daaa5-kube-api-access-w25cd\") pod \"kubernetes-dashboard-8694d4445c-7nd7h\" (UID: \"176cf84b-bc2d-4f64-9bd0-b6375d4daaa5\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-7nd7h"
	Nov 02 14:11:02 old-k8s-version-873713 kubelet[808]: I1102 14:11:02.949363     808 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/55262db6-6c5b-467f-b999-b64511792d4d-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-q5mbz\" (UID: \"55262db6-6c5b-467f-b999-b64511792d4d\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-q5mbz"
	Nov 02 14:11:03 old-k8s-version-873713 kubelet[808]: W1102 14:11:03.115412     808 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/4ee7404b4a6ac58ace4f7c3005215815a212bd879029eed27a35560a4c57fe56/crio-9a0de88277eb9b4c9ccfaebce0a6fa44b6032df83f3fb80dda4f2ff6903044d2 WatchSource:0}: Error finding container 9a0de88277eb9b4c9ccfaebce0a6fa44b6032df83f3fb80dda4f2ff6903044d2: Status 404 returned error can't find the container with id 9a0de88277eb9b4c9ccfaebce0a6fa44b6032df83f3fb80dda4f2ff6903044d2
	Nov 02 14:11:03 old-k8s-version-873713 kubelet[808]: W1102 14:11:03.135185     808 manager.go:1159] Failed to process watch event {EventType:0 Name:/docker/4ee7404b4a6ac58ace4f7c3005215815a212bd879029eed27a35560a4c57fe56/crio-bee2b678c62e310e65447227f479e583a2a9bd6d3c38b1b728f65575d51230e0 WatchSource:0}: Error finding container bee2b678c62e310e65447227f479e583a2a9bd6d3c38b1b728f65575d51230e0: Status 404 returned error can't find the container with id bee2b678c62e310e65447227f479e583a2a9bd6d3c38b1b728f65575d51230e0
	Nov 02 14:11:08 old-k8s-version-873713 kubelet[808]: I1102 14:11:08.338318     808 scope.go:117] "RemoveContainer" containerID="1fa080268a3b1c567ca1e56bc4fdc6320572ba34379312b8723d90bd5921fc0e"
	Nov 02 14:11:09 old-k8s-version-873713 kubelet[808]: I1102 14:11:09.342564     808 scope.go:117] "RemoveContainer" containerID="1f8025db902c9f740662dbe0ba7159079793a6df6b55ec500a4119a1d3ffcec1"
	Nov 02 14:11:09 old-k8s-version-873713 kubelet[808]: E1102 14:11:09.342964     808 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-q5mbz_kubernetes-dashboard(55262db6-6c5b-467f-b999-b64511792d4d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-q5mbz" podUID="55262db6-6c5b-467f-b999-b64511792d4d"
	Nov 02 14:11:09 old-k8s-version-873713 kubelet[808]: I1102 14:11:09.344307     808 scope.go:117] "RemoveContainer" containerID="1fa080268a3b1c567ca1e56bc4fdc6320572ba34379312b8723d90bd5921fc0e"
	Nov 02 14:11:13 old-k8s-version-873713 kubelet[808]: I1102 14:11:13.092266     808 scope.go:117] "RemoveContainer" containerID="1f8025db902c9f740662dbe0ba7159079793a6df6b55ec500a4119a1d3ffcec1"
	Nov 02 14:11:13 old-k8s-version-873713 kubelet[808]: E1102 14:11:13.093079     808 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-q5mbz_kubernetes-dashboard(55262db6-6c5b-467f-b999-b64511792d4d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-q5mbz" podUID="55262db6-6c5b-467f-b999-b64511792d4d"
	Nov 02 14:11:13 old-k8s-version-873713 kubelet[808]: I1102 14:11:13.109785     808 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-7nd7h" podStartSLOduration=2.383890499 podCreationTimestamp="2025-11-02 14:11:02 +0000 UTC" firstStartedPulling="2025-11-02 14:11:03.143717523 +0000 UTC m=+19.216347886" lastFinishedPulling="2025-11-02 14:11:11.866249977 +0000 UTC m=+27.938880340" observedRunningTime="2025-11-02 14:11:12.367387172 +0000 UTC m=+28.440017526" watchObservedRunningTime="2025-11-02 14:11:13.106422953 +0000 UTC m=+29.179053316"
	Nov 02 14:11:22 old-k8s-version-873713 kubelet[808]: I1102 14:11:22.379136     808 scope.go:117] "RemoveContainer" containerID="05ea50ffb17a2c17ae3167c803ef030a793314febf71bcbb6e67e304616db8d1"
	Nov 02 14:11:26 old-k8s-version-873713 kubelet[808]: I1102 14:11:26.207858     808 scope.go:117] "RemoveContainer" containerID="1f8025db902c9f740662dbe0ba7159079793a6df6b55ec500a4119a1d3ffcec1"
	Nov 02 14:11:26 old-k8s-version-873713 kubelet[808]: I1102 14:11:26.392504     808 scope.go:117] "RemoveContainer" containerID="1f8025db902c9f740662dbe0ba7159079793a6df6b55ec500a4119a1d3ffcec1"
	Nov 02 14:11:26 old-k8s-version-873713 kubelet[808]: I1102 14:11:26.392719     808 scope.go:117] "RemoveContainer" containerID="0715d806febbf6bc068ca3b10c928c40a83c034a3efe46db9df4e13ebd053192"
	Nov 02 14:11:26 old-k8s-version-873713 kubelet[808]: E1102 14:11:26.393051     808 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-q5mbz_kubernetes-dashboard(55262db6-6c5b-467f-b999-b64511792d4d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-q5mbz" podUID="55262db6-6c5b-467f-b999-b64511792d4d"
	Nov 02 14:11:33 old-k8s-version-873713 kubelet[808]: I1102 14:11:33.092486     808 scope.go:117] "RemoveContainer" containerID="0715d806febbf6bc068ca3b10c928c40a83c034a3efe46db9df4e13ebd053192"
	Nov 02 14:11:33 old-k8s-version-873713 kubelet[808]: E1102 14:11:33.093309     808 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-q5mbz_kubernetes-dashboard(55262db6-6c5b-467f-b999-b64511792d4d)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-q5mbz" podUID="55262db6-6c5b-467f-b999-b64511792d4d"
	Nov 02 14:11:41 old-k8s-version-873713 kubelet[808]: I1102 14:11:41.179100     808 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 02 14:11:41 old-k8s-version-873713 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 02 14:11:41 old-k8s-version-873713 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 02 14:11:41 old-k8s-version-873713 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [4e56dc5991fa15eba86cd1282a7f01dec8198efa8716d7d7b2cdc4ef02f81353] <==
	2025/11/02 14:11:11 Using namespace: kubernetes-dashboard
	2025/11/02 14:11:11 Using in-cluster config to connect to apiserver
	2025/11/02 14:11:11 Using secret token for csrf signing
	2025/11/02 14:11:11 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/02 14:11:11 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/02 14:11:11 Successful initial request to the apiserver, version: v1.28.0
	2025/11/02 14:11:11 Generating JWE encryption key
	2025/11/02 14:11:11 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/02 14:11:11 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/02 14:11:12 Initializing JWE encryption key from synchronized object
	2025/11/02 14:11:12 Creating in-cluster Sidecar client
	2025/11/02 14:11:12 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/02 14:11:12 Serving insecurely on HTTP port: 9090
	2025/11/02 14:11:42 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/02 14:11:11 Starting overwatch
	
	
	==> storage-provisioner [05ea50ffb17a2c17ae3167c803ef030a793314febf71bcbb6e67e304616db8d1] <==
	I1102 14:10:51.712009       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1102 14:11:21.714295       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [6cf1293ad607139235172951e0f6dd2839ab7dfdb284bac7be5c819c3ae0637b] <==
	I1102 14:11:22.430524       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1102 14:11:22.446879       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1102 14:11:22.447013       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1102 14:11:39.845192       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1102 14:11:39.845356       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-873713_3c8ee00c-08cb-44df-9291-4bdbe3eba8a9!
	I1102 14:11:39.852015       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"bed908b3-2f1a-4e3a-8d32-4d7ab52fe965", APIVersion:"v1", ResourceVersion:"635", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-873713_3c8ee00c-08cb-44df-9291-4bdbe3eba8a9 became leader
	I1102 14:11:39.946020       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-873713_3c8ee00c-08cb-44df-9291-4bdbe3eba8a9!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-873713 -n old-k8s-version-873713
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-873713 -n old-k8s-version-873713: exit status 2 (449.901243ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
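Aside: the --format={{.APIServer}} flag used above is a Go text/template rendered against minikube's status struct, so several fields can be read in one call. A minimal sketch, assuming the profile is still up; combining {{.Host}} and {{.APIServer}} (both queried separately by the harness in this report) is an illustration, not harness behavior:

	# Print the host and apiserver states together in one status call.
	out/minikube-linux-arm64 status --format='{{.Host}} {{.APIServer}}' -p old-k8s-version-873713 -n old-k8s-version-873713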
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-873713 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (6.80s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.4s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-150469 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-150469 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (270.296877ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T14:13:04Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p no-preload-150469 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
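Aside: the MK_ADDON_ENABLE_PAUSED exit above comes from minikube's paused-state probe, which shells into the node and runs exactly the command quoted in the stderr. A minimal manual reproduction, assuming the profile is still running (the ssh form is standard minikube; why /run/runc is missing is not shown in the log, one reading being that no runc container state had been written under that root yet):

	# Re-run the paused probe by hand inside the no-preload-150469 node;
	# on this failure it prints the same "open /run/runc" error.
	out/minikube-linux-arm64 -p no-preload-150469 ssh -- sudo runc list -f json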
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-150469 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-150469 describe deploy/metrics-server -n kube-system: exit status 1 (88.892353ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-150469 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
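Aside: the image assertion above parses kubectl describe output; a shorter way to read the injected image directly is a jsonpath query (the expression below is an illustration, not what the harness runs). On this run it fails the same way, since the metrics-server deployment was never created:

	# Print just the container image of the metrics-server deployment.
	kubectl --context no-preload-150469 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'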
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-150469
helpers_test.go:243: (dbg) docker inspect no-preload-150469:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "aa4ae44e602159b250d81a350b59577c08056d4e598bda926fd0010cba92fa48",
	        "Created": "2025-11-02T14:11:51.659937726Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 479421,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-02T14:11:51.742995047Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/aa4ae44e602159b250d81a350b59577c08056d4e598bda926fd0010cba92fa48/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/aa4ae44e602159b250d81a350b59577c08056d4e598bda926fd0010cba92fa48/hostname",
	        "HostsPath": "/var/lib/docker/containers/aa4ae44e602159b250d81a350b59577c08056d4e598bda926fd0010cba92fa48/hosts",
	        "LogPath": "/var/lib/docker/containers/aa4ae44e602159b250d81a350b59577c08056d4e598bda926fd0010cba92fa48/aa4ae44e602159b250d81a350b59577c08056d4e598bda926fd0010cba92fa48-json.log",
	        "Name": "/no-preload-150469",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-150469:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-150469",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "aa4ae44e602159b250d81a350b59577c08056d4e598bda926fd0010cba92fa48",
	                "LowerDir": "/var/lib/docker/overlay2/8a6aaf28eb401f956308bc06ae686510e116c66e0d46b46263a0d8a79fbe08f8-init/diff:/var/lib/docker/overlay2/53058afe6acd7391f639f551e07f446f6a39a991b699aea18507cb21a1652e97/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8a6aaf28eb401f956308bc06ae686510e116c66e0d46b46263a0d8a79fbe08f8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8a6aaf28eb401f956308bc06ae686510e116c66e0d46b46263a0d8a79fbe08f8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8a6aaf28eb401f956308bc06ae686510e116c66e0d46b46263a0d8a79fbe08f8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-150469",
	                "Source": "/var/lib/docker/volumes/no-preload-150469/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-150469",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-150469",
	                "name.minikube.sigs.k8s.io": "no-preload-150469",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7521bf582d114b01d78a0f9bcef9456bbdb0df698c79ba81107d67f33f3ecb5e",
	            "SandboxKey": "/var/run/docker/netns/7521bf582d11",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33431"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33432"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33435"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33433"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33434"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-150469": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "96:db:d3:f1:c0:f9",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "04b125ad348b31edf412b7cd44a1ba32814c5e6b6c1a080d912d4d879cabcf90",
	                    "EndpointID": "eecb968a95cb609724a0ae778f6d22c3af222c46107200cb78a992d33b297de8",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-150469",
	                        "aa4ae44e6021"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-150469 -n no-preload-150469
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-150469 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-150469 logs -n 25: (1.167314431s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-143736 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-143736            │ jenkins │ v1.37.0 │ 02 Nov 25 14:07 UTC │                     │
	│ ssh     │ -p cilium-143736 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-143736            │ jenkins │ v1.37.0 │ 02 Nov 25 14:07 UTC │                     │
	│ ssh     │ -p cilium-143736 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-143736            │ jenkins │ v1.37.0 │ 02 Nov 25 14:07 UTC │                     │
	│ ssh     │ -p cilium-143736 sudo crio config                                                                                                                                                                                                             │ cilium-143736            │ jenkins │ v1.37.0 │ 02 Nov 25 14:07 UTC │                     │
	│ delete  │ -p cilium-143736                                                                                                                                                                                                                              │ cilium-143736            │ jenkins │ v1.37.0 │ 02 Nov 25 14:07 UTC │ 02 Nov 25 14:07 UTC │
	│ start   │ -p force-systemd-env-263133 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                                    │ force-systemd-env-263133 │ jenkins │ v1.37.0 │ 02 Nov 25 14:07 UTC │ 02 Nov 25 14:08 UTC │
	│ delete  │ -p pause-061518                                                                                                                                                                                                                               │ pause-061518             │ jenkins │ v1.37.0 │ 02 Nov 25 14:07 UTC │ 02 Nov 25 14:07 UTC │
	│ start   │ -p cert-expiration-114321 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-114321   │ jenkins │ v1.37.0 │ 02 Nov 25 14:07 UTC │ 02 Nov 25 14:08 UTC │
	│ delete  │ -p force-systemd-env-263133                                                                                                                                                                                                                   │ force-systemd-env-263133 │ jenkins │ v1.37.0 │ 02 Nov 25 14:08 UTC │ 02 Nov 25 14:08 UTC │
	│ start   │ -p cert-options-935084 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-935084      │ jenkins │ v1.37.0 │ 02 Nov 25 14:08 UTC │ 02 Nov 25 14:09 UTC │
	│ ssh     │ cert-options-935084 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-935084      │ jenkins │ v1.37.0 │ 02 Nov 25 14:09 UTC │ 02 Nov 25 14:09 UTC │
	│ ssh     │ -p cert-options-935084 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-935084      │ jenkins │ v1.37.0 │ 02 Nov 25 14:09 UTC │ 02 Nov 25 14:09 UTC │
	│ delete  │ -p cert-options-935084                                                                                                                                                                                                                        │ cert-options-935084      │ jenkins │ v1.37.0 │ 02 Nov 25 14:09 UTC │ 02 Nov 25 14:09 UTC │
	│ start   │ -p old-k8s-version-873713 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-873713   │ jenkins │ v1.37.0 │ 02 Nov 25 14:09 UTC │ 02 Nov 25 14:10 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-873713 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-873713   │ jenkins │ v1.37.0 │ 02 Nov 25 14:10 UTC │                     │
	│ stop    │ -p old-k8s-version-873713 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-873713   │ jenkins │ v1.37.0 │ 02 Nov 25 14:10 UTC │ 02 Nov 25 14:10 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-873713 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-873713   │ jenkins │ v1.37.0 │ 02 Nov 25 14:10 UTC │ 02 Nov 25 14:10 UTC │
	│ start   │ -p old-k8s-version-873713 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-873713   │ jenkins │ v1.37.0 │ 02 Nov 25 14:10 UTC │ 02 Nov 25 14:11 UTC │
	│ start   │ -p cert-expiration-114321 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-114321   │ jenkins │ v1.37.0 │ 02 Nov 25 14:11 UTC │                     │
	│ image   │ old-k8s-version-873713 image list --format=json                                                                                                                                                                                               │ old-k8s-version-873713   │ jenkins │ v1.37.0 │ 02 Nov 25 14:11 UTC │ 02 Nov 25 14:11 UTC │
	│ pause   │ -p old-k8s-version-873713 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-873713   │ jenkins │ v1.37.0 │ 02 Nov 25 14:11 UTC │                     │
	│ delete  │ -p old-k8s-version-873713                                                                                                                                                                                                                     │ old-k8s-version-873713   │ jenkins │ v1.37.0 │ 02 Nov 25 14:11 UTC │ 02 Nov 25 14:11 UTC │
	│ delete  │ -p old-k8s-version-873713                                                                                                                                                                                                                     │ old-k8s-version-873713   │ jenkins │ v1.37.0 │ 02 Nov 25 14:11 UTC │ 02 Nov 25 14:11 UTC │
	│ start   │ -p no-preload-150469 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-150469        │ jenkins │ v1.37.0 │ 02 Nov 25 14:11 UTC │ 02 Nov 25 14:12 UTC │
	│ addons  │ enable metrics-server -p no-preload-150469 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-150469        │ jenkins │ v1.37.0 │ 02 Nov 25 14:13 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/02 14:11:50
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1102 14:11:50.520139  479115 out.go:360] Setting OutFile to fd 1 ...
	I1102 14:11:50.520376  479115 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 14:11:50.520408  479115 out.go:374] Setting ErrFile to fd 2...
	I1102 14:11:50.520428  479115 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 14:11:50.520726  479115 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-293314/.minikube/bin
	I1102 14:11:50.521191  479115 out.go:368] Setting JSON to false
	I1102 14:11:50.522161  479115 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10463,"bootTime":1762082248,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1102 14:11:50.522317  479115 start.go:143] virtualization:  
	I1102 14:11:50.526330  479115 out.go:179] * [no-preload-150469] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1102 14:11:50.530830  479115 out.go:179]   - MINIKUBE_LOCATION=21808
	I1102 14:11:50.530890  479115 notify.go:221] Checking for updates...
	I1102 14:11:50.534786  479115 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1102 14:11:50.538111  479115 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-293314/kubeconfig
	I1102 14:11:50.541380  479115 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-293314/.minikube
	I1102 14:11:50.544524  479115 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1102 14:11:50.547477  479115 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1102 14:11:50.551099  479115 config.go:182] Loaded profile config "cert-expiration-114321": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 14:11:50.551245  479115 driver.go:422] Setting default libvirt URI to qemu:///system
	I1102 14:11:50.572620  479115 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1102 14:11:50.572743  479115 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 14:11:50.632418  479115 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-02 14:11:50.623368213 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1102 14:11:50.632526  479115 docker.go:319] overlay module found
	I1102 14:11:50.637620  479115 out.go:179] * Using the docker driver based on user configuration
	I1102 14:11:50.640525  479115 start.go:309] selected driver: docker
	I1102 14:11:50.640545  479115 start.go:930] validating driver "docker" against <nil>
	I1102 14:11:50.640559  479115 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1102 14:11:50.641284  479115 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 14:11:50.701215  479115 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-02 14:11:50.692239923 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1102 14:11:50.701381  479115 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1102 14:11:50.701618  479115 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1102 14:11:50.704641  479115 out.go:179] * Using Docker driver with root privileges
	I1102 14:11:50.707512  479115 cni.go:84] Creating CNI manager for ""
	I1102 14:11:50.707576  479115 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1102 14:11:50.707590  479115 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1102 14:11:50.707666  479115 start.go:353] cluster config:
	{Name:no-preload-150469 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-150469 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1102 14:11:50.710868  479115 out.go:179] * Starting "no-preload-150469" primary control-plane node in "no-preload-150469" cluster
	I1102 14:11:50.713697  479115 cache.go:124] Beginning downloading kic base image for docker with crio
	I1102 14:11:50.716601  479115 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1102 14:11:50.719481  479115 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1102 14:11:50.719568  479115 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1102 14:11:50.719604  479115 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/no-preload-150469/config.json ...
	I1102 14:11:50.719635  479115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/no-preload-150469/config.json: {Name:mka8ec2925a2c027200a753c9715f743ceb3dcd2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 14:11:50.719901  479115 cache.go:107] acquiring lock: {Name:mk0d530038ef870bf58d0363da9b31652bc1ae14 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1102 14:11:50.719963  479115 cache.go:115] /home/jenkins/minikube-integration/21808-293314/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1102 14:11:50.719976  479115 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21808-293314/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 84.915µs
	I1102 14:11:50.719990  479115 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21808-293314/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1102 14:11:50.720001  479115 cache.go:107] acquiring lock: {Name:mkd6260a7653c8ae81944696f59074ca9daf4496 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1102 14:11:50.720081  479115 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1102 14:11:50.720449  479115 cache.go:107] acquiring lock: {Name:mk116eeda5090a9bf257baa304e319dd3d9cf8b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1102 14:11:50.720564  479115 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1102 14:11:50.720820  479115 cache.go:107] acquiring lock: {Name:mk9ef60d35c5b49e869c6203a9147a3219b4e126 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1102 14:11:50.720952  479115 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1102 14:11:50.721220  479115 cache.go:107] acquiring lock: {Name:mke6221cf85cbade96e9579d3f9dbb6cf440ab99 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1102 14:11:50.721356  479115 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1102 14:11:50.721631  479115 cache.go:107] acquiring lock: {Name:mkaba104a6722ee8e853a6d14d0da3961a192835 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1102 14:11:50.721755  479115 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1102 14:11:50.721996  479115 cache.go:107] acquiring lock: {Name:mk92a42e51bf17c175d4fb90830e996d213a6765 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1102 14:11:50.722124  479115 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1102 14:11:50.722359  479115 cache.go:107] acquiring lock: {Name:mk498d2d373bae8d5599913577ef44eaf6a69150 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1102 14:11:50.722509  479115 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1102 14:11:50.724737  479115 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1102 14:11:50.725082  479115 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1102 14:11:50.725371  479115 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1102 14:11:50.725522  479115 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1102 14:11:50.725627  479115 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1102 14:11:50.726144  479115 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1102 14:11:50.727124  479115 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1102 14:11:50.746007  479115 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1102 14:11:50.746029  479115 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1102 14:11:50.746047  479115 cache.go:233] Successfully downloaded all kic artifacts
	I1102 14:11:50.746070  479115 start.go:360] acquireMachinesLock for no-preload-150469: {Name:mkd14c3163b545133d2b0afdace0f1473474f9aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1102 14:11:50.746171  479115 start.go:364] duration metric: took 81.535µs to acquireMachinesLock for "no-preload-150469"
	I1102 14:11:50.746202  479115 start.go:93] Provisioning new machine with config: &{Name:no-preload-150469 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-150469 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1102 14:11:50.746276  479115 start.go:125] createHost starting for "" (driver="docker")
	I1102 14:11:47.068104  477134 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1102 14:11:50.749843  479115 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1102 14:11:50.750088  479115 start.go:159] libmachine.API.Create for "no-preload-150469" (driver="docker")
	I1102 14:11:50.750127  479115 client.go:173] LocalClient.Create starting
	I1102 14:11:50.750200  479115 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem
	I1102 14:11:50.750237  479115 main.go:143] libmachine: Decoding PEM data...
	I1102 14:11:50.750255  479115 main.go:143] libmachine: Parsing certificate...
	I1102 14:11:50.750318  479115 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21808-293314/.minikube/certs/cert.pem
	I1102 14:11:50.750339  479115 main.go:143] libmachine: Decoding PEM data...
	I1102 14:11:50.750363  479115 main.go:143] libmachine: Parsing certificate...
	I1102 14:11:50.750797  479115 cli_runner.go:164] Run: docker network inspect no-preload-150469 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1102 14:11:50.776165  479115 cli_runner.go:211] docker network inspect no-preload-150469 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1102 14:11:50.776236  479115 network_create.go:284] running [docker network inspect no-preload-150469] to gather additional debugging logs...
	I1102 14:11:50.776270  479115 cli_runner.go:164] Run: docker network inspect no-preload-150469
	W1102 14:11:50.792479  479115 cli_runner.go:211] docker network inspect no-preload-150469 returned with exit code 1
	I1102 14:11:50.792507  479115 network_create.go:287] error running [docker network inspect no-preload-150469]: docker network inspect no-preload-150469: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-150469 not found
	I1102 14:11:50.792520  479115 network_create.go:289] output of [docker network inspect no-preload-150469]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-150469 not found
	
	** /stderr **
	I1102 14:11:50.792623  479115 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1102 14:11:50.808993  479115 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-ddf319108ac9 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:66:f7:2d:49:67:ff} reservation:<nil>}
	I1102 14:11:50.809369  479115 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-30b945568040 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:b6:b2:b0:cb:49:d7} reservation:<nil>}
	I1102 14:11:50.809602  479115 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-d23a3a2e266d IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:fa:42:95:8e:ae:52} reservation:<nil>}
	I1102 14:11:50.809995  479115 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a43a10}
	I1102 14:11:50.810020  479115 network_create.go:124] attempt to create docker network no-preload-150469 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1102 14:11:50.810077  479115 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-150469 no-preload-150469
	I1102 14:11:50.878220  479115 network_create.go:108] docker network no-preload-150469 192.168.76.0/24 created
	I1102 14:11:50.878249  479115 kic.go:121] calculated static IP "192.168.76.2" for the "no-preload-150469" container
	I1102 14:11:50.878323  479115 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1102 14:11:50.894967  479115 cli_runner.go:164] Run: docker volume create no-preload-150469 --label name.minikube.sigs.k8s.io=no-preload-150469 --label created_by.minikube.sigs.k8s.io=true
	I1102 14:11:50.912862  479115 oci.go:103] Successfully created a docker volume no-preload-150469
	I1102 14:11:50.912971  479115 cli_runner.go:164] Run: docker run --rm --name no-preload-150469-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-150469 --entrypoint /usr/bin/test -v no-preload-150469:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1102 14:11:51.047332  479115 cache.go:162] opening:  /home/jenkins/minikube-integration/21808-293314/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1102 14:11:51.067998  479115 cache.go:162] opening:  /home/jenkins/minikube-integration/21808-293314/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1102 14:11:51.070414  479115 cache.go:162] opening:  /home/jenkins/minikube-integration/21808-293314/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1102 14:11:51.077554  479115 cache.go:162] opening:  /home/jenkins/minikube-integration/21808-293314/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1102 14:11:51.097970  479115 cache.go:162] opening:  /home/jenkins/minikube-integration/21808-293314/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1102 14:11:51.099248  479115 cache.go:162] opening:  /home/jenkins/minikube-integration/21808-293314/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1102 14:11:51.106902  479115 cache.go:162] opening:  /home/jenkins/minikube-integration/21808-293314/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1102 14:11:51.158792  479115 cache.go:157] /home/jenkins/minikube-integration/21808-293314/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1102 14:11:51.158824  479115 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21808-293314/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 437.196445ms
	I1102 14:11:51.158842  479115 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21808-293314/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1102 14:11:51.569332  479115 oci.go:107] Successfully prepared a docker volume no-preload-150469
	I1102 14:11:51.569364  479115 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	W1102 14:11:51.569486  479115 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1102 14:11:51.569594  479115 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1102 14:11:51.600075  479115 cache.go:157] /home/jenkins/minikube-integration/21808-293314/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1102 14:11:51.600097  479115 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21808-293314/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 878.88031ms
	I1102 14:11:51.600109  479115 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21808-293314/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1102 14:11:51.644427  479115 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-150469 --name no-preload-150469 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-150469 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-150469 --network no-preload-150469 --ip 192.168.76.2 --volume no-preload-150469:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1102 14:11:51.987896  479115 cache.go:157] /home/jenkins/minikube-integration/21808-293314/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1102 14:11:51.987964  479115 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21808-293314/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 1.267147812s
	I1102 14:11:51.987991  479115 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21808-293314/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1102 14:11:52.071359  479115 cli_runner.go:164] Run: docker container inspect no-preload-150469 --format={{.State.Running}}
	I1102 14:11:52.099451  479115 cache.go:157] /home/jenkins/minikube-integration/21808-293314/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1102 14:11:52.099475  479115 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21808-293314/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 1.379030251s
	I1102 14:11:52.099487  479115 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21808-293314/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1102 14:11:52.127459  479115 cli_runner.go:164] Run: docker container inspect no-preload-150469 --format={{.State.Status}}
	I1102 14:11:52.153962  479115 cli_runner.go:164] Run: docker exec no-preload-150469 stat /var/lib/dpkg/alternatives/iptables
	I1102 14:11:52.223339  479115 oci.go:144] the created container "no-preload-150469" has a running status.
	I1102 14:11:52.223374  479115 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21808-293314/.minikube/machines/no-preload-150469/id_rsa...
	I1102 14:11:52.250398  479115 cache.go:157] /home/jenkins/minikube-integration/21808-293314/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1102 14:11:52.250467  479115 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21808-293314/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 1.530464347s
	I1102 14:11:52.250494  479115 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21808-293314/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1102 14:11:52.305007  479115 cache.go:157] /home/jenkins/minikube-integration/21808-293314/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1102 14:11:52.305163  479115 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21808-293314/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 1.582806358s
	I1102 14:11:52.305194  479115 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21808-293314/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1102 14:11:52.536303  479115 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21808-293314/.minikube/machines/no-preload-150469/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1102 14:11:52.557894  479115 cli_runner.go:164] Run: docker container inspect no-preload-150469 --format={{.State.Status}}
	I1102 14:11:52.582549  479115 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1102 14:11:52.582703  479115 kic_runner.go:114] Args: [docker exec --privileged no-preload-150469 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1102 14:11:52.693820  479115 cli_runner.go:164] Run: docker container inspect no-preload-150469 --format={{.State.Status}}
	I1102 14:11:52.731209  479115 machine.go:94] provisionDockerMachine start ...
	I1102 14:11:52.731306  479115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-150469
	I1102 14:11:52.754035  479115 main.go:143] libmachine: Using SSH client type: native
	I1102 14:11:52.754364  479115 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33431 <nil> <nil>}
	I1102 14:11:52.754374  479115 main.go:143] libmachine: About to run SSH command:
	hostname
	I1102 14:11:52.755111  479115 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1102 14:11:53.405119  479115 cache.go:157] /home/jenkins/minikube-integration/21808-293314/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1102 14:11:53.405149  479115 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21808-293314/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 2.683155658s
	I1102 14:11:53.405162  479115 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21808-293314/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1102 14:11:53.405200  479115 cache.go:87] Successfully saved all images to host disk.
	I1102 14:11:55.906085  479115 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-150469
	
	I1102 14:11:55.906115  479115 ubuntu.go:182] provisioning hostname "no-preload-150469"
	I1102 14:11:55.906180  479115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-150469
	I1102 14:11:55.925531  479115 main.go:143] libmachine: Using SSH client type: native
	I1102 14:11:55.925839  479115 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33431 <nil> <nil>}
	I1102 14:11:55.925850  479115 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-150469 && echo "no-preload-150469" | sudo tee /etc/hostname
	I1102 14:11:56.088795  479115 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-150469
	
	I1102 14:11:56.088898  479115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-150469
	I1102 14:11:56.107498  479115 main.go:143] libmachine: Using SSH client type: native
	I1102 14:11:56.107815  479115 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33431 <nil> <nil>}
	I1102 14:11:56.107837  479115 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-150469' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-150469/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-150469' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1102 14:11:56.254695  479115 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1102 14:11:56.254721  479115 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21808-293314/.minikube CaCertPath:/home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21808-293314/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21808-293314/.minikube}
	I1102 14:11:56.254796  479115 ubuntu.go:190] setting up certificates
	I1102 14:11:56.254807  479115 provision.go:84] configureAuth start
	I1102 14:11:56.254886  479115 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-150469
	I1102 14:11:56.271326  479115 provision.go:143] copyHostCerts
	I1102 14:11:56.271397  479115 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-293314/.minikube/ca.pem, removing ...
	I1102 14:11:56.271410  479115 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-293314/.minikube/ca.pem
	I1102 14:11:56.271489  479115 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21808-293314/.minikube/ca.pem (1082 bytes)
	I1102 14:11:56.271585  479115 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-293314/.minikube/cert.pem, removing ...
	I1102 14:11:56.271596  479115 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-293314/.minikube/cert.pem
	I1102 14:11:56.271624  479115 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21808-293314/.minikube/cert.pem (1123 bytes)
	I1102 14:11:56.271682  479115 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-293314/.minikube/key.pem, removing ...
	I1102 14:11:56.271691  479115 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-293314/.minikube/key.pem
	I1102 14:11:56.271717  479115 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21808-293314/.minikube/key.pem (1675 bytes)
	I1102 14:11:56.271769  479115 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21808-293314/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca-key.pem org=jenkins.no-preload-150469 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-150469]
	I1102 14:11:56.472215  479115 provision.go:177] copyRemoteCerts
	I1102 14:11:56.472309  479115 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1102 14:11:56.472360  479115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-150469
	I1102 14:11:56.493609  479115 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33431 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/no-preload-150469/id_rsa Username:docker}
	I1102 14:11:56.598292  479115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1102 14:11:56.615403  479115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1102 14:11:56.633372  479115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1102 14:11:56.651337  479115 provision.go:87] duration metric: took 396.502304ms to configureAuth
	I1102 14:11:56.651366  479115 ubuntu.go:206] setting minikube options for container-runtime
	I1102 14:11:56.651545  479115 config.go:182] Loaded profile config "no-preload-150469": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 14:11:56.651658  479115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-150469
	I1102 14:11:56.668576  479115 main.go:143] libmachine: Using SSH client type: native
	I1102 14:11:56.668894  479115 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33431 <nil> <nil>}
	I1102 14:11:56.668915  479115 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1102 14:11:57.002384  479115 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1102 14:11:57.002409  479115 machine.go:97] duration metric: took 4.271177213s to provisionDockerMachine
	I1102 14:11:57.002419  479115 client.go:176] duration metric: took 6.252281017s to LocalClient.Create
	I1102 14:11:57.002433  479115 start.go:167] duration metric: took 6.252347873s to libmachine.API.Create "no-preload-150469"
	I1102 14:11:57.002441  479115 start.go:293] postStartSetup for "no-preload-150469" (driver="docker")
	I1102 14:11:57.002454  479115 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1102 14:11:57.002520  479115 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1102 14:11:57.002565  479115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-150469
	I1102 14:11:57.021222  479115 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33431 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/no-preload-150469/id_rsa Username:docker}
	I1102 14:11:57.126477  479115 ssh_runner.go:195] Run: cat /etc/os-release
	I1102 14:11:57.129760  479115 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1102 14:11:57.129796  479115 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1102 14:11:57.129807  479115 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-293314/.minikube/addons for local assets ...
	I1102 14:11:57.129866  479115 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-293314/.minikube/files for local assets ...
	I1102 14:11:57.129954  479115 filesync.go:149] local asset: /home/jenkins/minikube-integration/21808-293314/.minikube/files/etc/ssl/certs/2951742.pem -> 2951742.pem in /etc/ssl/certs
	I1102 14:11:57.130060  479115 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1102 14:11:57.137244  479115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/files/etc/ssl/certs/2951742.pem --> /etc/ssl/certs/2951742.pem (1708 bytes)
	I1102 14:11:57.154087  479115 start.go:296] duration metric: took 151.628324ms for postStartSetup
	I1102 14:11:57.154440  479115 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-150469
	I1102 14:11:57.171072  479115 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/no-preload-150469/config.json ...
	I1102 14:11:57.171348  479115 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1102 14:11:57.171399  479115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-150469
	I1102 14:11:57.189010  479115 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33431 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/no-preload-150469/id_rsa Username:docker}
	I1102 14:11:57.291487  479115 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1102 14:11:57.295735  479115 start.go:128] duration metric: took 6.549443714s to createHost
	I1102 14:11:57.295763  479115 start.go:83] releasing machines lock for "no-preload-150469", held for 6.549578238s
	I1102 14:11:57.295849  479115 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-150469
	I1102 14:11:57.312023  479115 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/295174.pem (1338 bytes)
	W1102 14:11:57.312088  479115 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-293314/.minikube/certs/295174_empty.pem, impossibly tiny 0 bytes
	I1102 14:11:57.312102  479115 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca-key.pem (1679 bytes)
	I1102 14:11:57.312126  479115 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem (1082 bytes)
	I1102 14:11:57.312155  479115 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/cert.pem (1123 bytes)
	I1102 14:11:57.312190  479115 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/key.pem (1675 bytes)
	I1102 14:11:57.312234  479115 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/files/etc/ssl/certs/2951742.pem (1708 bytes)
	I1102 14:11:57.312298  479115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1102 14:11:57.312352  479115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-150469
	I1102 14:11:57.328557  479115 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33431 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/no-preload-150469/id_rsa Username:docker}
	I1102 14:11:57.444128  479115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/certs/295174.pem --> /usr/share/ca-certificates/295174.pem (1338 bytes)
	I1102 14:11:57.461675  479115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/files/etc/ssl/certs/2951742.pem --> /usr/share/ca-certificates/2951742.pem (1708 bytes)
	I1102 14:11:57.479027  479115 ssh_runner.go:195] Run: openssl version
	I1102 14:11:57.485225  479115 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1102 14:11:57.493952  479115 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1102 14:11:57.497508  479115 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  2 13:13 /usr/share/ca-certificates/minikubeCA.pem
	I1102 14:11:57.497576  479115 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1102 14:11:57.538375  479115 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1102 14:11:57.546677  479115 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/295174.pem && ln -fs /usr/share/ca-certificates/295174.pem /etc/ssl/certs/295174.pem"
	I1102 14:11:57.554742  479115 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/295174.pem
	I1102 14:11:57.558119  479115 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  2 13:20 /usr/share/ca-certificates/295174.pem
	I1102 14:11:57.558180  479115 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/295174.pem
	I1102 14:11:57.598582  479115 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/295174.pem /etc/ssl/certs/51391683.0"
	I1102 14:11:57.606879  479115 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2951742.pem && ln -fs /usr/share/ca-certificates/2951742.pem /etc/ssl/certs/2951742.pem"
	I1102 14:11:57.615144  479115 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2951742.pem
	I1102 14:11:57.618672  479115 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  2 13:20 /usr/share/ca-certificates/2951742.pem
	I1102 14:11:57.618784  479115 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2951742.pem
	I1102 14:11:57.659538  479115 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2951742.pem /etc/ssl/certs/3ec20f2e.0"
	I1102 14:11:57.667777  479115 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-certificates >/dev/null 2>&1 && sudo update-ca-certificates || true"
	I1102 14:11:57.671476  479115 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-trust >/dev/null 2>&1 && sudo update-ca-trust extract || true"
	I1102 14:11:57.674859  479115 ssh_runner.go:195] Run: cat /version.json
	I1102 14:11:57.674969  479115 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1102 14:11:57.764204  479115 ssh_runner.go:195] Run: systemctl --version
	I1102 14:11:57.770491  479115 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1102 14:11:57.805257  479115 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1102 14:11:57.809522  479115 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1102 14:11:57.809612  479115 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1102 14:11:57.838588  479115 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1102 14:11:57.838684  479115 start.go:496] detecting cgroup driver to use...
	I1102 14:11:57.838719  479115 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1102 14:11:57.838790  479115 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1102 14:11:57.856106  479115 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1102 14:11:57.868703  479115 docker.go:218] disabling cri-docker service (if available) ...
	I1102 14:11:57.868825  479115 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1102 14:11:57.886247  479115 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1102 14:11:57.905255  479115 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1102 14:11:58.025091  479115 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1102 14:11:58.146799  479115 docker.go:234] disabling docker service ...
	I1102 14:11:58.146910  479115 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1102 14:11:58.169309  479115 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1102 14:11:58.182236  479115 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1102 14:11:58.294844  479115 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1102 14:11:58.424636  479115 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1102 14:11:58.438315  479115 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1102 14:11:58.452691  479115 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1102 14:11:58.452771  479115 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 14:11:58.462298  479115 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1102 14:11:58.462442  479115 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 14:11:58.472150  479115 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 14:11:58.480865  479115 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 14:11:58.489939  479115 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1102 14:11:58.498274  479115 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 14:11:58.507482  479115 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 14:11:58.520880  479115 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 14:11:58.530106  479115 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1102 14:11:58.537764  479115 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1102 14:11:58.545530  479115 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 14:11:58.659542  479115 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1102 14:11:58.788654  479115 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1102 14:11:58.788726  479115 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1102 14:11:58.792706  479115 start.go:564] Will wait 60s for crictl version
	I1102 14:11:58.792774  479115 ssh_runner.go:195] Run: which crictl
	I1102 14:11:58.796380  479115 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1102 14:11:58.821267  479115 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1102 14:11:58.821350  479115 ssh_runner.go:195] Run: crio --version
	I1102 14:11:58.854183  479115 ssh_runner.go:195] Run: crio --version
	I1102 14:11:58.886719  479115 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1102 14:11:58.889445  479115 cli_runner.go:164] Run: docker network inspect no-preload-150469 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1102 14:11:58.905472  479115 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1102 14:11:58.909335  479115 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1102 14:11:58.918870  479115 kubeadm.go:884] updating cluster {Name:no-preload-150469 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-150469 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1102 14:11:58.918982  479115 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1102 14:11:58.919039  479115 ssh_runner.go:195] Run: sudo crictl images --output json
	I1102 14:11:58.948275  479115 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1102 14:11:58.948302  479115 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1102 14:11:58.948384  479115 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1102 14:11:58.948606  479115 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1102 14:11:58.948698  479115 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1102 14:11:58.948812  479115 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1102 14:11:58.948922  479115 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1102 14:11:58.949018  479115 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1102 14:11:58.949112  479115 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1102 14:11:58.949246  479115 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1102 14:11:58.951008  479115 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1102 14:11:58.951037  479115 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1102 14:11:58.951108  479115 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1102 14:11:58.951230  479115 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1102 14:11:58.951290  479115 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1102 14:11:58.951449  479115 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1102 14:11:58.951517  479115 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1102 14:11:58.951592  479115 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1102 14:11:59.164334  479115 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1102 14:11:59.190416  479115 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.34.1
	I1102 14:11:59.197174  479115 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.12.1
	I1102 14:11:59.198446  479115 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.34.1
	I1102 14:11:59.199020  479115 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.34.1
	I1102 14:11:59.207107  479115 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.34.1
	I1102 14:11:59.208961  479115 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd" in container runtime
	I1102 14:11:59.209021  479115 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1102 14:11:59.209341  479115 ssh_runner.go:195] Run: which crictl
	I1102 14:11:59.209522  479115 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.4-0
	I1102 14:11:59.253165  479115 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9" in container runtime
	I1102 14:11:59.253226  479115 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1102 14:11:59.253288  479115 ssh_runner.go:195] Run: which crictl
	I1102 14:11:59.323416  479115 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc" in container runtime
	I1102 14:11:59.323484  479115 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1102 14:11:59.323548  479115 ssh_runner.go:195] Run: which crictl
	I1102 14:11:59.326371  479115 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a" in container runtime
	I1102 14:11:59.326412  479115 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1102 14:11:59.326471  479115 ssh_runner.go:195] Run: which crictl
	I1102 14:11:59.349127  479115 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0" in container runtime
	I1102 14:11:59.349473  479115 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1102 14:11:59.349501  479115 ssh_runner.go:195] Run: which crictl
	I1102 14:11:59.349239  479115 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196" in container runtime
	I1102 14:11:59.349569  479115 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1102 14:11:59.349615  479115 ssh_runner.go:195] Run: which crictl
	I1102 14:11:59.349300  479115 cache_images.go:118] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e" in container runtime
	I1102 14:11:59.349682  479115 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1102 14:11:59.349709  479115 ssh_runner.go:195] Run: which crictl
	I1102 14:11:59.349369  479115 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1102 14:11:59.349405  479115 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1102 14:11:59.349423  479115 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1102 14:11:59.349434  479115 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1102 14:11:59.366056  479115 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1102 14:11:59.366169  479115 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1102 14:11:59.419164  479115 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1102 14:11:59.419274  479115 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1102 14:11:59.419463  479115 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1102 14:11:59.443291  479115 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1102 14:11:59.443449  479115 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1102 14:11:59.479656  479115 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1102 14:11:59.479739  479115 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1102 14:11:59.520410  479115 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1102 14:11:59.520607  479115 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1102 14:11:59.520705  479115 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1102 14:11:59.554291  479115 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1102 14:11:59.554411  479115 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1102 14:11:59.573565  479115 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1102 14:11:59.573666  479115 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
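
Note: the inspect/rmi churn above is the "needs transfer" check: each expected image is looked up in the node's store by ID (`podman image inspect --format {{.Id}}`), and when the stored ID does not match the pinned hash, or the image is absent, the stale tag is removed with `crictl rmi` before the cached tarball is loaded. A sketch of that comparison (names illustrative, not minikube's API):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// needsTransfer reports whether the node's copy of image differs from the
// expected ID; a failed inspect means the image is not present at all.
func needsTransfer(image, wantID string) bool {
	out, err := exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", image).Output()
	if err != nil {
		return true // not in the store; must transfer from cache
	}
	return strings.TrimSpace(string(out)) != wantID
}

func main() {
	img := "registry.k8s.io/pause:3.10.1"
	if needsTransfer(img, "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd") {
		// Mirror the `crictl rmi` calls in the log before re-loading from cache.
		_ = exec.Command("sudo", "crictl", "rmi", img).Run()
		fmt.Println("removed stale tag; will load from cache")
	}
}
```
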
	I1102 14:11:59.676408  479115 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21808-293314/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1
	I1102 14:11:59.676445  479115 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21808-293314/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1
	I1102 14:11:59.676525  479115 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1102 14:11:59.676538  479115 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1102 14:11:59.676604  479115 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1102 14:11:59.676606  479115 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21808-293314/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1102 14:11:59.676648  479115 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1102 14:11:59.689865  479115 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21808-293314/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1
	I1102 14:11:59.690110  479115 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1102 14:11:59.689964  479115 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21808-293314/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1
	I1102 14:11:59.690267  479115 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1102 14:11:59.690013  479115 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21808-293314/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1
	I1102 14:11:59.690390  479115 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1102 14:11:59.717392  479115 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1102 14:11:59.717428  479115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (20730880 bytes)
	I1102 14:11:59.717487  479115 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1102 14:11:59.717497  479115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (268288 bytes)
	I1102 14:11:59.717556  479115 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21808-293314/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0
	I1102 14:11:59.717625  479115 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1102 14:11:59.717669  479115 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1102 14:11:59.717687  479115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (20402176 bytes)
	I1102 14:11:59.717732  479115 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1102 14:11:59.717742  479115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (15790592 bytes)
	I1102 14:11:59.717777  479115 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1102 14:11:59.717785  479115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (22790144 bytes)
	I1102 14:11:59.717815  479115 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1102 14:11:59.717823  479115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (24581632 bytes)
	I1102 14:11:59.743499  479115 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1102 14:11:59.743584  479115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (98216960 bytes)
	I1102 14:11:59.821533  479115 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1102 14:11:59.821683  479115 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	W1102 14:12:00.312874  479115 image.go:286] image gcr.io/k8s-minikube/storage-provisioner:v5 arch mismatch: want arm64 got amd64. fixing
	I1102 14:12:00.313079  479115 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1102 14:12:00.517802  479115 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21808-293314/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 from cache
	I1102 14:12:00.520203  479115 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51" in container runtime
	I1102 14:12:00.520317  479115 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1102 14:12:00.520419  479115 ssh_runner.go:195] Run: which crictl
	I1102 14:12:00.580558  479115 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1102 14:12:00.580652  479115 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1102 14:12:00.604196  479115 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1102 14:12:02.272900  479115 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.668621484s)
	I1102 14:12:02.272977  479115 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1102 14:12:02.273049  479115 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1: (1.692383773s)
	I1102 14:12:02.273065  479115 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21808-293314/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1102 14:12:02.273083  479115 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1102 14:12:02.273125  479115 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1
	I1102 14:12:02.304064  479115 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1102 14:12:03.929254  479115 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1: (1.656104261s)
	I1102 14:12:03.929284  479115 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21808-293314/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1102 14:12:03.929303  479115 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1102 14:12:03.929361  479115 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1102 14:12:03.929373  479115 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.625281883s)
	I1102 14:12:03.929415  479115 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21808-293314/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1102 14:12:03.929493  479115 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1102 14:12:05.109537  479115 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.180022987s)
	I1102 14:12:05.109571  479115 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1102 14:12:05.109605  479115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1102 14:12:05.109757  479115 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.180382489s)
	I1102 14:12:05.109773  479115 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21808-293314/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1102 14:12:05.109793  479115 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1102 14:12:05.109837  479115 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1
	I1102 14:12:06.514173  479115 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1: (1.404310031s)
	I1102 14:12:06.514198  479115 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21808-293314/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1102 14:12:06.514216  479115 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1102 14:12:06.514263  479115 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1102 14:12:07.879720  479115 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.365427342s)
	I1102 14:12:07.879753  479115 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21808-293314/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1102 14:12:07.879776  479115 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1102 14:12:07.879823  479115 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
	I1102 14:12:11.362697  479115 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0: (3.482845506s)
	I1102 14:12:11.362724  479115 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21808-293314/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1102 14:12:11.362747  479115 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1102 14:12:11.362800  479115 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1102 14:12:11.944560  479115 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21808-293314/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1102 14:12:11.944599  479115 cache_images.go:125] Successfully loaded all cached images
	I1102 14:12:11.944606  479115 cache_images.go:94] duration metric: took 12.996290618s to LoadCachedImages
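
Note: all eight images landed in about 13s. Per image the log traces the same pipeline: a `stat -c "%s %y"` existence check (status 1 means the tarball is absent), an scp of the cached tarball, then `sudo podman load -i`, which CRI-O can see because podman and CRI-O share the containers/storage backend on the node. A condensed sketch, with plain ssh/scp standing in for minikube's ssh_runner (hypothetical host name, error handling trimmed):

```go
package main

import (
	"fmt"
	"os/exec"
)

// ensureLoaded copies a cached image tarball to the node if missing, then
// loads it into the shared image store so the CRI-O runtime can use it.
func ensureLoaded(local, remote string) error {
	// Existence check: non-zero exit from stat means "copy it over".
	if err := exec.Command("ssh", "docker@node", "stat", remote).Run(); err != nil {
		if err := exec.Command("scp", local, "docker@node:"+remote).Run(); err != nil {
			return err
		}
	}
	return exec.Command("ssh", "docker@node", "sudo", "podman", "load", "-i", remote).Run()
}

func main() {
	err := ensureLoaded(
		"/home/user/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1",
		"/var/lib/minikube/images/pause_3.10.1")
	fmt.Println(err)
}
```
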
	I1102 14:12:11.944618  479115 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1102 14:12:11.944723  479115 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-150469 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-150469 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
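
Note: the kubelet drop-in above uses the standard systemd override idiom: an empty `ExecStart=` clears the packaged command line before the minikube-specific one is set, and the unit is later written to `/etc/systemd/system/kubelet.service.d/10-kubeadm.conf` (the 367-byte `scp memory` further down). A sketch of rendering such a drop-in from node settings with text/template (abridged; the real template carries more flags):

```go
package main

import (
	"os"
	"text/template"
)

// dropIn is an abridged version of the unit printed in the log above.
const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(dropIn))
	_ = t.Execute(os.Stdout, struct {
		KubernetesVersion, NodeName, NodeIP string
	}{"v1.34.1", "no-preload-150469", "192.168.76.2"})
}
```
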
	I1102 14:12:11.944802  479115 ssh_runner.go:195] Run: crio config
	I1102 14:12:11.999677  479115 cni.go:84] Creating CNI manager for ""
	I1102 14:12:11.999698  479115 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1102 14:12:11.999726  479115 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1102 14:12:11.999756  479115 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-150469 NodeName:no-preload-150469 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1102 14:12:11.999872  479115 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-150469"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
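
Note: the generated kubeadm config above is one file carrying four YAML documents: InitConfiguration (node name, CRI socket, kubelet extra args), ClusterConfiguration (endpoint, certSANs, component extraArgs), KubeletConfiguration (cgroupfs driver, disk eviction disabled for CI), and KubeProxyConfiguration (cluster CIDR, conntrack timeouts zeroed). A stdlib-only sketch that splits the documents on their `---` separators and reports each kind:

```go
package main

import (
	"fmt"
	"strings"
)

// kinds splits a multi-document YAML string and collects each top-level kind.
// A real parser would use a YAML library; prefix matching suffices here.
func kinds(multiDoc string) []string {
	var out []string
	for _, doc := range strings.Split(multiDoc, "\n---\n") {
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(line, "kind: ") {
				out = append(out, strings.TrimPrefix(line, "kind: "))
			}
		}
	}
	return out
}

func main() {
	cfg := "apiVersion: kubeadm.k8s.io/v1beta4\nkind: InitConfiguration\n" +
		"---\napiVersion: kubeadm.k8s.io/v1beta4\nkind: ClusterConfiguration\n" +
		"---\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\n" +
		"---\napiVersion: kubeproxy.config.k8s.io/v1alpha1\nkind: KubeProxyConfiguration\n"
	fmt.Println(kinds(cfg))
	// [InitConfiguration ClusterConfiguration KubeletConfiguration KubeProxyConfiguration]
}
```
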
	
	I1102 14:12:11.999944  479115 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1102 14:12:12.012714  479115 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1102 14:12:12.012793  479115 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1102 14:12:12.021666  479115 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet.sha256 -> /home/jenkins/minikube-integration/21808-293314/.minikube/cache/linux/arm64/v1.34.1/kubelet
	I1102 14:12:12.022134  479115 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21808-293314/.minikube/cache/linux/arm64/v1.34.1/kubeadm
	I1102 14:12:12.022385  479115 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
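
Note: kubelet and kubeadm are downloaded into the local cache while kubectl is streamed directly ("Not caching binary" above). The `?checksum=file:...sha256` suffix in the URLs is the syntax of minikube's download helper: fetch the binary and its published SHA-256, and verify before installing. A stdlib-only sketch of the same verification (error handling kept minimal):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetch downloads a URL into memory; fine for a sketch, not for huge files.
func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	return io.ReadAll(resp.Body)
}

func main() {
	base := "https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet"
	bin, err := fetch(base)
	if err != nil {
		panic(err)
	}
	sum, err := fetch(base + ".sha256") // dl.k8s.io publishes the bare hex digest
	if err != nil {
		panic(err)
	}
	got := sha256.Sum256(bin)
	if hex.EncodeToString(got[:]) != strings.TrimSpace(string(sum)) {
		fmt.Fprintln(os.Stderr, "checksum mismatch; discarding download")
		os.Exit(1)
	}
	_ = os.WriteFile("kubelet", bin, 0755)
}
```
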
	I1102 14:12:12.022484  479115 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1102 14:12:12.026885  479115 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1102 14:12:12.026926  479115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/cache/linux/arm64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (58130616 bytes)
	I1102 14:12:12.809554  479115 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1102 14:12:12.829864  479115 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1102 14:12:12.835067  479115 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1102 14:12:12.835108  479115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/cache/linux/arm64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (56426788 bytes)
	I1102 14:12:13.005576  479115 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1102 14:12:13.023080  479115 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1102 14:12:13.023121  479115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/cache/linux/arm64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (71434424 bytes)
	I1102 14:12:13.469933  479115 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1102 14:12:13.478016  479115 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1102 14:12:13.491876  479115 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1102 14:12:13.505688  479115 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1102 14:12:13.519317  479115 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1102 14:12:13.523001  479115 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1102 14:12:13.533137  479115 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 14:12:13.657070  479115 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1102 14:12:13.676561  479115 certs.go:69] Setting up /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/no-preload-150469 for IP: 192.168.76.2
	I1102 14:12:13.676580  479115 certs.go:195] generating shared ca certs ...
	I1102 14:12:13.676596  479115 certs.go:227] acquiring lock for ca certs: {Name:mkead50075949a3cdc798f9c0149a2bc2638cbbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 14:12:13.676792  479115 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21808-293314/.minikube/ca.key
	I1102 14:12:13.676860  479115 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21808-293314/.minikube/proxy-client-ca.key
	I1102 14:12:13.676876  479115 certs.go:257] generating profile certs ...
	I1102 14:12:13.676952  479115 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/no-preload-150469/client.key
	I1102 14:12:13.676973  479115 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/no-preload-150469/client.crt with IP's: []
	I1102 14:12:14.219870  479115 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/no-preload-150469/client.crt ...
	I1102 14:12:14.219905  479115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/no-preload-150469/client.crt: {Name:mk7e5d4ef989a0e3cbb2cc9ec626ef4ddfa28502 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 14:12:14.220115  479115 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/no-preload-150469/client.key ...
	I1102 14:12:14.220130  479115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/no-preload-150469/client.key: {Name:mk14dbef3ff742c0625a1bc2ff5a5234940d6c7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 14:12:14.220222  479115 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/no-preload-150469/apiserver.key.bb275f49
	I1102 14:12:14.220239  479115 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/no-preload-150469/apiserver.crt.bb275f49 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1102 14:12:14.912907  479115 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/no-preload-150469/apiserver.crt.bb275f49 ...
	I1102 14:12:14.912940  479115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/no-preload-150469/apiserver.crt.bb275f49: {Name:mkdacb01d0fdbde35561bcd778f239da5ba905ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 14:12:14.913131  479115 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/no-preload-150469/apiserver.key.bb275f49 ...
	I1102 14:12:14.913146  479115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/no-preload-150469/apiserver.key.bb275f49: {Name:mk1001ae74581dfedcfaab187b260fe013c902b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 14:12:14.913241  479115 certs.go:382] copying /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/no-preload-150469/apiserver.crt.bb275f49 -> /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/no-preload-150469/apiserver.crt
	I1102 14:12:14.913326  479115 certs.go:386] copying /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/no-preload-150469/apiserver.key.bb275f49 -> /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/no-preload-150469/apiserver.key
	I1102 14:12:14.913388  479115 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/no-preload-150469/proxy-client.key
	I1102 14:12:14.913406  479115 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/no-preload-150469/proxy-client.crt with IP's: []
	I1102 14:12:15.552462  479115 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/no-preload-150469/proxy-client.crt ...
	I1102 14:12:15.552495  479115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/no-preload-150469/proxy-client.crt: {Name:mk9b0b0a0f25d809b3eea72c37c4c179db33b934 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 14:12:15.552688  479115 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/no-preload-150469/proxy-client.key ...
	I1102 14:12:15.552705  479115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/no-preload-150469/proxy-client.key: {Name:mk060544aaac23b9dc51119f393a45cd58aa447c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
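
Note: three profile certs were minted above, all signed by the shared minikubeCA: a client cert for "minikube-user", the apiserver serving cert whose IP SANs cover the in-cluster service VIP (10.96.0.1), localhost, 10.0.0.1, and the node IP, and the "aggregator" front-proxy client cert. A crypto/x509 sketch of issuing a SAN-bearing serving cert from a CA (errors elided; a throwaway self-signed CA stands in for minikubeCA):

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA standing in for minikubeCA.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Serving cert with the IP SANs from the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.76.2"),
		},
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
	}
	der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```
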
	I1102 14:12:15.552904  479115 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/295174.pem (1338 bytes)
	W1102 14:12:15.552949  479115 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-293314/.minikube/certs/295174_empty.pem, impossibly tiny 0 bytes
	I1102 14:12:15.552963  479115 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca-key.pem (1679 bytes)
	I1102 14:12:15.552990  479115 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem (1082 bytes)
	I1102 14:12:15.553020  479115 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/cert.pem (1123 bytes)
	I1102 14:12:15.553046  479115 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/key.pem (1675 bytes)
	I1102 14:12:15.553096  479115 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/files/etc/ssl/certs/2951742.pem (1708 bytes)
	I1102 14:12:15.553729  479115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1102 14:12:15.573177  479115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1102 14:12:15.593382  479115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1102 14:12:15.612281  479115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1102 14:12:15.630733  479115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/no-preload-150469/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1102 14:12:15.648674  479115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/no-preload-150469/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1102 14:12:15.666707  479115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/no-preload-150469/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1102 14:12:15.684573  479115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/no-preload-150469/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1102 14:12:15.703310  479115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/files/etc/ssl/certs/2951742.pem --> /usr/share/ca-certificates/2951742.pem (1708 bytes)
	I1102 14:12:15.720920  479115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1102 14:12:15.738272  479115 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/certs/295174.pem --> /usr/share/ca-certificates/295174.pem (1338 bytes)
	I1102 14:12:15.756467  479115 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1102 14:12:15.769417  479115 ssh_runner.go:195] Run: openssl version
	I1102 14:12:15.775939  479115 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/295174.pem && ln -fs /usr/share/ca-certificates/295174.pem /etc/ssl/certs/295174.pem"
	I1102 14:12:15.785055  479115 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/295174.pem
	I1102 14:12:15.789059  479115 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  2 13:20 /usr/share/ca-certificates/295174.pem
	I1102 14:12:15.789167  479115 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/295174.pem
	I1102 14:12:15.830514  479115 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/295174.pem /etc/ssl/certs/51391683.0"
	I1102 14:12:15.838556  479115 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2951742.pem && ln -fs /usr/share/ca-certificates/2951742.pem /etc/ssl/certs/2951742.pem"
	I1102 14:12:15.846784  479115 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2951742.pem
	I1102 14:12:15.850437  479115 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  2 13:20 /usr/share/ca-certificates/2951742.pem
	I1102 14:12:15.850524  479115 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2951742.pem
	I1102 14:12:15.891495  479115 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2951742.pem /etc/ssl/certs/3ec20f2e.0"
	I1102 14:12:15.900664  479115 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1102 14:12:15.909990  479115 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1102 14:12:15.918605  479115 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  2 13:13 /usr/share/ca-certificates/minikubeCA.pem
	I1102 14:12:15.918679  479115 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1102 14:12:15.960006  479115 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
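
Note: the openssl/ln pairs above wire the copied PEMs into OpenSSL's hashed trust directory. OpenSSL looks up CAs in /etc/ssl/certs by subject-name hash, so each installed cert needs a `<hash>.0` symlink (`b5213941.0` is minikubeCA's hash here). A sketch reproducing the `openssl x509 -hash` + `ln -fs` pair (run as root against a real PEM):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkByHash creates the "<subject-hash>.0" symlink OpenSSL expects.
func linkByHash(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // `ln -fs` semantics: replace an existing link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```
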
	I1102 14:12:15.968327  479115 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1102 14:12:15.972237  479115 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1102 14:12:15.972314  479115 kubeadm.go:401] StartCluster: {Name:no-preload-150469 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-150469 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1102 14:12:15.972411  479115 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1102 14:12:15.972477  479115 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1102 14:12:16.000508  479115 cri.go:89] found id: ""
	I1102 14:12:16.000648  479115 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1102 14:12:16.026044  479115 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1102 14:12:16.035052  479115 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1102 14:12:16.035196  479115 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1102 14:12:16.043788  479115 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1102 14:12:16.043828  479115 kubeadm.go:158] found existing configuration files:
	
	I1102 14:12:16.043904  479115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1102 14:12:16.052483  479115 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1102 14:12:16.052609  479115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1102 14:12:16.061539  479115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1102 14:12:16.069990  479115 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1102 14:12:16.070084  479115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1102 14:12:16.078544  479115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1102 14:12:16.087323  479115 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1102 14:12:16.087395  479115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1102 14:12:16.095249  479115 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1102 14:12:16.103653  479115 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1102 14:12:16.103747  479115 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
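
Note: the grep/rm sequence above is stale-config cleanup. Each kubeconfig under /etc/kubernetes must reference the expected control-plane endpoint; any file that does not, or that does not exist at all (the case here on first start, hence "config check failed, skipping stale config cleanup"), is removed with `rm -f` so kubeadm can regenerate it. A sketch of that loop:

```go
package main

import (
	"os"
	"strings"
)

// cleanupStale removes kubeconfigs that do not point at the expected
// control-plane endpoint; missing files are a no-op remove, like `rm -f`.
func cleanupStale(endpoint string, files []string) {
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			os.Remove(f)
		}
	}
}

func main() {
	cleanupStale("https://control-plane.minikube.internal:8443", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}
```
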
	I1102 14:12:16.113715  479115 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1102 14:12:16.151591  479115 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1102 14:12:16.152015  479115 kubeadm.go:319] [preflight] Running pre-flight checks
	I1102 14:12:16.175046  479115 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1102 14:12:16.175194  479115 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1102 14:12:16.175255  479115 kubeadm.go:319] OS: Linux
	I1102 14:12:16.175330  479115 kubeadm.go:319] CGROUPS_CPU: enabled
	I1102 14:12:16.175409  479115 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1102 14:12:16.175482  479115 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1102 14:12:16.175556  479115 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1102 14:12:16.175635  479115 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1102 14:12:16.175748  479115 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1102 14:12:16.175830  479115 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1102 14:12:16.175913  479115 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1102 14:12:16.175994  479115 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1102 14:12:16.251197  479115 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1102 14:12:16.251357  479115 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1102 14:12:16.251480  479115 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1102 14:12:16.270097  479115 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1102 14:12:16.278651  479115 out.go:252]   - Generating certificates and keys ...
	I1102 14:12:16.278763  479115 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1102 14:12:16.278838  479115 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1102 14:12:16.821669  479115 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1102 14:12:17.044552  479115 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1102 14:12:17.159429  479115 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1102 14:12:17.425531  479115 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1102 14:12:18.509534  479115 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1102 14:12:18.509943  479115 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-150469] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1102 14:12:18.874212  479115 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1102 14:12:18.874546  479115 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-150469] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1102 14:12:19.431652  479115 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1102 14:12:20.018007  479115 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1102 14:12:20.885921  479115 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1102 14:12:20.886219  479115 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1102 14:12:21.625993  479115 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1102 14:12:21.773420  479115 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1102 14:12:22.237794  479115 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1102 14:12:22.721647  479115 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1102 14:12:23.089149  479115 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1102 14:12:23.089829  479115 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1102 14:12:23.092462  479115 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1102 14:12:23.097465  479115 out.go:252]   - Booting up control plane ...
	I1102 14:12:23.097581  479115 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1102 14:12:23.097664  479115 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1102 14:12:23.097733  479115 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1102 14:12:23.113074  479115 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1102 14:12:23.113192  479115 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1102 14:12:23.121485  479115 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1102 14:12:23.121798  479115 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1102 14:12:23.121846  479115 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1102 14:12:23.251053  479115 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1102 14:12:23.251178  479115 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1102 14:12:24.751030  479115 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501647708s
	I1102 14:12:24.754310  479115 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1102 14:12:24.754406  479115 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1102 14:12:24.754726  479115 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1102 14:12:24.754825  479115 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1102 14:12:28.116409  479115 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.361665716s
	I1102 14:12:28.685258  479115 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.930869158s
	I1102 14:12:30.757567  479115 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.003138285s
	I1102 14:12:30.777652  479115 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1102 14:12:30.793251  479115 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1102 14:12:30.811375  479115 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1102 14:12:30.811584  479115 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-150469 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1102 14:12:30.823654  479115 kubeadm.go:319] [bootstrap-token] Using token: w96hab.1p3c0mnfnqln7niu
	I1102 14:12:30.826693  479115 out.go:252]   - Configuring RBAC rules ...
	I1102 14:12:30.826831  479115 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1102 14:12:30.830684  479115 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1102 14:12:30.841190  479115 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1102 14:12:30.845243  479115 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1102 14:12:30.849644  479115 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1102 14:12:30.853848  479115 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1102 14:12:31.164591  479115 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1102 14:12:31.612096  479115 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1102 14:12:32.164574  479115 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1102 14:12:32.165732  479115 kubeadm.go:319] 
	I1102 14:12:32.165819  479115 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1102 14:12:32.165831  479115 kubeadm.go:319] 
	I1102 14:12:32.165919  479115 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1102 14:12:32.165931  479115 kubeadm.go:319] 
	I1102 14:12:32.165958  479115 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1102 14:12:32.166025  479115 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1102 14:12:32.166081  479115 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1102 14:12:32.166090  479115 kubeadm.go:319] 
	I1102 14:12:32.166146  479115 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1102 14:12:32.166154  479115 kubeadm.go:319] 
	I1102 14:12:32.166205  479115 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1102 14:12:32.166214  479115 kubeadm.go:319] 
	I1102 14:12:32.166268  479115 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1102 14:12:32.166355  479115 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1102 14:12:32.166430  479115 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1102 14:12:32.166439  479115 kubeadm.go:319] 
	I1102 14:12:32.166527  479115 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1102 14:12:32.166636  479115 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1102 14:12:32.166646  479115 kubeadm.go:319] 
	I1102 14:12:32.166734  479115 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token w96hab.1p3c0mnfnqln7niu \
	I1102 14:12:32.166848  479115 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:bd4a1f3bddc85f3fc83315ad33165a30aa1cba7ce55898ef9dad8dcc7e8d0eec \
	I1102 14:12:32.166873  479115 kubeadm.go:319] 	--control-plane 
	I1102 14:12:32.166877  479115 kubeadm.go:319] 
	I1102 14:12:32.166975  479115 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1102 14:12:32.166980  479115 kubeadm.go:319] 
	I1102 14:12:32.167066  479115 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token w96hab.1p3c0mnfnqln7niu \
	I1102 14:12:32.167172  479115 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:bd4a1f3bddc85f3fc83315ad33165a30aa1cba7ce55898ef9dad8dcc7e8d0eec 
	I1102 14:12:32.170511  479115 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1102 14:12:32.170773  479115 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1102 14:12:32.170915  479115 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1102 14:12:32.170944  479115 cni.go:84] Creating CNI manager for ""
	I1102 14:12:32.170954  479115 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1102 14:12:32.175981  479115 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1102 14:12:32.178910  479115 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1102 14:12:32.183334  479115 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1102 14:12:32.183400  479115 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1102 14:12:32.196423  479115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1102 14:12:32.478343  479115 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1102 14:12:32.478475  479115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 14:12:32.478550  479115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-150469 minikube.k8s.io/updated_at=2025_11_02T14_12_32_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=170a9221ec214abbddb4c7cdac340516a92b239a minikube.k8s.io/name=no-preload-150469 minikube.k8s.io/primary=true
	I1102 14:12:32.492344  479115 ops.go:34] apiserver oom_adj: -16
	I1102 14:12:32.622112  479115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 14:12:33.122838  479115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 14:12:33.622472  479115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 14:12:34.122251  479115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 14:12:34.622864  479115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 14:12:35.122182  479115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 14:12:35.622548  479115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 14:12:36.122940  479115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 14:12:36.623168  479115 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 14:12:36.736479  479115 kubeadm.go:1114] duration metric: took 4.258050589s to wait for elevateKubeSystemPrivileges
	I1102 14:12:36.736521  479115 kubeadm.go:403] duration metric: took 20.764210573s to StartCluster
	I1102 14:12:36.736560  479115 settings.go:142] acquiring lock: {Name:mk95f66b3b15e63f58f8c9085c1ffe67cc396dc1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 14:12:36.736672  479115 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21808-293314/kubeconfig
	I1102 14:12:36.737827  479115 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-293314/kubeconfig: {Name:mke5a65554da8fc0fd6a2ea60bed899d5b38ce09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 14:12:36.738161  479115 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1102 14:12:36.738436  479115 config.go:182] Loaded profile config "no-preload-150469": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 14:12:36.738579  479115 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1102 14:12:36.738689  479115 addons.go:70] Setting storage-provisioner=true in profile "no-preload-150469"
	I1102 14:12:36.738711  479115 addons.go:239] Setting addon storage-provisioner=true in "no-preload-150469"
	I1102 14:12:36.738735  479115 host.go:66] Checking if "no-preload-150469" exists ...
	I1102 14:12:36.739238  479115 cli_runner.go:164] Run: docker container inspect no-preload-150469 --format={{.State.Status}}
	I1102 14:12:36.739398  479115 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1102 14:12:36.739764  479115 addons.go:70] Setting default-storageclass=true in profile "no-preload-150469"
	I1102 14:12:36.739784  479115 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-150469"
	I1102 14:12:36.740068  479115 cli_runner.go:164] Run: docker container inspect no-preload-150469 --format={{.State.Status}}
	I1102 14:12:36.743183  479115 out.go:179] * Verifying Kubernetes components...
	I1102 14:12:36.746172  479115 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 14:12:36.780729  479115 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1102 14:12:36.785577  479115 addons.go:239] Setting addon default-storageclass=true in "no-preload-150469"
	I1102 14:12:36.785622  479115 host.go:66] Checking if "no-preload-150469" exists ...
	I1102 14:12:36.786058  479115 cli_runner.go:164] Run: docker container inspect no-preload-150469 --format={{.State.Status}}
	I1102 14:12:36.787030  479115 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1102 14:12:36.787058  479115 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1102 14:12:36.787109  479115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-150469
	I1102 14:12:36.817746  479115 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1102 14:12:36.817768  479115 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1102 14:12:36.817835  479115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-150469
	I1102 14:12:36.829808  479115 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33431 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/no-preload-150469/id_rsa Username:docker}
	I1102 14:12:36.867902  479115 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33431 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/no-preload-150469/id_rsa Username:docker}
	I1102 14:12:37.111037  479115 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1102 14:12:37.158672  479115 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1102 14:12:37.196980  479115 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1102 14:12:37.197359  479115 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1102 14:12:38.080107  479115 start.go:1013] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1102 14:12:38.080169  479115 node_ready.go:35] waiting up to 6m0s for node "no-preload-150469" to be "Ready" ...
	I1102 14:12:38.135717  479115 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1102 14:12:38.139474  479115 addons.go:515] duration metric: took 1.400873669s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1102 14:12:38.585787  479115 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-150469" context rescaled to 1 replicas
	W1102 14:12:40.084046  479115 node_ready.go:57] node "no-preload-150469" has "Ready":"False" status (will retry)
	W1102 14:12:42.084895  479115 node_ready.go:57] node "no-preload-150469" has "Ready":"False" status (will retry)
	W1102 14:12:44.583936  479115 node_ready.go:57] node "no-preload-150469" has "Ready":"False" status (will retry)
	W1102 14:12:46.584845  479115 node_ready.go:57] node "no-preload-150469" has "Ready":"False" status (will retry)
	W1102 14:12:49.083082  479115 node_ready.go:57] node "no-preload-150469" has "Ready":"False" status (will retry)
	I1102 14:12:50.583888  479115 node_ready.go:49] node "no-preload-150469" is "Ready"
	I1102 14:12:50.583923  479115 node_ready.go:38] duration metric: took 12.503735111s for node "no-preload-150469" to be "Ready" ...
	I1102 14:12:50.583940  479115 api_server.go:52] waiting for apiserver process to appear ...
	I1102 14:12:50.584000  479115 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1102 14:12:50.595561  479115 api_server.go:72] duration metric: took 13.856120349s to wait for apiserver process to appear ...
	I1102 14:12:50.595588  479115 api_server.go:88] waiting for apiserver healthz status ...
	I1102 14:12:50.595609  479115 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1102 14:12:50.603610  479115 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1102 14:12:50.604661  479115 api_server.go:141] control plane version: v1.34.1
	I1102 14:12:50.604684  479115 api_server.go:131] duration metric: took 9.089385ms to wait for apiserver health ...
	I1102 14:12:50.604694  479115 system_pods.go:43] waiting for kube-system pods to appear ...
	I1102 14:12:50.608005  479115 system_pods.go:59] 8 kube-system pods found
	I1102 14:12:50.608042  479115 system_pods.go:61] "coredns-66bc5c9577-wkgrq" [90029150-f8ff-484e-a449-fa19206ab6b2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1102 14:12:50.608048  479115 system_pods.go:61] "etcd-no-preload-150469" [a653ec43-d807-4f0b-8111-507615a63de3] Running
	I1102 14:12:50.608055  479115 system_pods.go:61] "kindnet-vm84g" [31a16d1f-9be1-46bb-a911-452fc3e27389] Running
	I1102 14:12:50.608059  479115 system_pods.go:61] "kube-apiserver-no-preload-150469" [9668b04c-ce5d-4558-b3d1-3ddb2d40c8af] Running
	I1102 14:12:50.608064  479115 system_pods.go:61] "kube-controller-manager-no-preload-150469" [5cf5e00d-7b35-495f-b5f2-6c149ee77125] Running
	I1102 14:12:50.608069  479115 system_pods.go:61] "kube-proxy-qg9np" [28814102-d017-4d78-8904-c21855b52264] Running
	I1102 14:12:50.608073  479115 system_pods.go:61] "kube-scheduler-no-preload-150469" [5785a754-7fb9-46fd-865a-2756355ba605] Running
	I1102 14:12:50.608085  479115 system_pods.go:61] "storage-provisioner" [bb34ac47-56c9-416b-944a-90dc162bf553] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1102 14:12:50.608093  479115 system_pods.go:74] duration metric: took 3.394751ms to wait for pod list to return data ...
	I1102 14:12:50.608111  479115 default_sa.go:34] waiting for default service account to be created ...
	I1102 14:12:50.610508  479115 default_sa.go:45] found service account: "default"
	I1102 14:12:50.610530  479115 default_sa.go:55] duration metric: took 2.412171ms for default service account to be created ...
	I1102 14:12:50.610540  479115 system_pods.go:116] waiting for k8s-apps to be running ...
	I1102 14:12:50.613143  479115 system_pods.go:86] 8 kube-system pods found
	I1102 14:12:50.613173  479115 system_pods.go:89] "coredns-66bc5c9577-wkgrq" [90029150-f8ff-484e-a449-fa19206ab6b2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1102 14:12:50.613180  479115 system_pods.go:89] "etcd-no-preload-150469" [a653ec43-d807-4f0b-8111-507615a63de3] Running
	I1102 14:12:50.613188  479115 system_pods.go:89] "kindnet-vm84g" [31a16d1f-9be1-46bb-a911-452fc3e27389] Running
	I1102 14:12:50.613192  479115 system_pods.go:89] "kube-apiserver-no-preload-150469" [9668b04c-ce5d-4558-b3d1-3ddb2d40c8af] Running
	I1102 14:12:50.613197  479115 system_pods.go:89] "kube-controller-manager-no-preload-150469" [5cf5e00d-7b35-495f-b5f2-6c149ee77125] Running
	I1102 14:12:50.613201  479115 system_pods.go:89] "kube-proxy-qg9np" [28814102-d017-4d78-8904-c21855b52264] Running
	I1102 14:12:50.613204  479115 system_pods.go:89] "kube-scheduler-no-preload-150469" [5785a754-7fb9-46fd-865a-2756355ba605] Running
	I1102 14:12:50.613211  479115 system_pods.go:89] "storage-provisioner" [bb34ac47-56c9-416b-944a-90dc162bf553] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1102 14:12:50.613229  479115 retry.go:31] will retry after 276.662843ms: missing components: kube-dns
	I1102 14:12:50.908021  479115 system_pods.go:86] 8 kube-system pods found
	I1102 14:12:50.908062  479115 system_pods.go:89] "coredns-66bc5c9577-wkgrq" [90029150-f8ff-484e-a449-fa19206ab6b2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1102 14:12:50.908069  479115 system_pods.go:89] "etcd-no-preload-150469" [a653ec43-d807-4f0b-8111-507615a63de3] Running
	I1102 14:12:50.908076  479115 system_pods.go:89] "kindnet-vm84g" [31a16d1f-9be1-46bb-a911-452fc3e27389] Running
	I1102 14:12:50.908081  479115 system_pods.go:89] "kube-apiserver-no-preload-150469" [9668b04c-ce5d-4558-b3d1-3ddb2d40c8af] Running
	I1102 14:12:50.908086  479115 system_pods.go:89] "kube-controller-manager-no-preload-150469" [5cf5e00d-7b35-495f-b5f2-6c149ee77125] Running
	I1102 14:12:50.908090  479115 system_pods.go:89] "kube-proxy-qg9np" [28814102-d017-4d78-8904-c21855b52264] Running
	I1102 14:12:50.908094  479115 system_pods.go:89] "kube-scheduler-no-preload-150469" [5785a754-7fb9-46fd-865a-2756355ba605] Running
	I1102 14:12:50.908100  479115 system_pods.go:89] "storage-provisioner" [bb34ac47-56c9-416b-944a-90dc162bf553] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1102 14:12:50.908119  479115 retry.go:31] will retry after 269.961504ms: missing components: kube-dns
	I1102 14:12:51.183163  479115 system_pods.go:86] 8 kube-system pods found
	I1102 14:12:51.183247  479115 system_pods.go:89] "coredns-66bc5c9577-wkgrq" [90029150-f8ff-484e-a449-fa19206ab6b2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1102 14:12:51.183270  479115 system_pods.go:89] "etcd-no-preload-150469" [a653ec43-d807-4f0b-8111-507615a63de3] Running
	I1102 14:12:51.183294  479115 system_pods.go:89] "kindnet-vm84g" [31a16d1f-9be1-46bb-a911-452fc3e27389] Running
	I1102 14:12:51.183332  479115 system_pods.go:89] "kube-apiserver-no-preload-150469" [9668b04c-ce5d-4558-b3d1-3ddb2d40c8af] Running
	I1102 14:12:51.183351  479115 system_pods.go:89] "kube-controller-manager-no-preload-150469" [5cf5e00d-7b35-495f-b5f2-6c149ee77125] Running
	I1102 14:12:51.183372  479115 system_pods.go:89] "kube-proxy-qg9np" [28814102-d017-4d78-8904-c21855b52264] Running
	I1102 14:12:51.183409  479115 system_pods.go:89] "kube-scheduler-no-preload-150469" [5785a754-7fb9-46fd-865a-2756355ba605] Running
	I1102 14:12:51.183435  479115 system_pods.go:89] "storage-provisioner" [bb34ac47-56c9-416b-944a-90dc162bf553] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1102 14:12:51.183469  479115 retry.go:31] will retry after 437.586607ms: missing components: kube-dns
	I1102 14:12:51.624831  479115 system_pods.go:86] 8 kube-system pods found
	I1102 14:12:51.624868  479115 system_pods.go:89] "coredns-66bc5c9577-wkgrq" [90029150-f8ff-484e-a449-fa19206ab6b2] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1102 14:12:51.624875  479115 system_pods.go:89] "etcd-no-preload-150469" [a653ec43-d807-4f0b-8111-507615a63de3] Running
	I1102 14:12:51.624881  479115 system_pods.go:89] "kindnet-vm84g" [31a16d1f-9be1-46bb-a911-452fc3e27389] Running
	I1102 14:12:51.624886  479115 system_pods.go:89] "kube-apiserver-no-preload-150469" [9668b04c-ce5d-4558-b3d1-3ddb2d40c8af] Running
	I1102 14:12:51.624891  479115 system_pods.go:89] "kube-controller-manager-no-preload-150469" [5cf5e00d-7b35-495f-b5f2-6c149ee77125] Running
	I1102 14:12:51.624895  479115 system_pods.go:89] "kube-proxy-qg9np" [28814102-d017-4d78-8904-c21855b52264] Running
	I1102 14:12:51.624899  479115 system_pods.go:89] "kube-scheduler-no-preload-150469" [5785a754-7fb9-46fd-865a-2756355ba605] Running
	I1102 14:12:51.624905  479115 system_pods.go:89] "storage-provisioner" [bb34ac47-56c9-416b-944a-90dc162bf553] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1102 14:12:51.624927  479115 retry.go:31] will retry after 556.210046ms: missing components: kube-dns
	I1102 14:12:52.185105  479115 system_pods.go:86] 8 kube-system pods found
	I1102 14:12:52.185140  479115 system_pods.go:89] "coredns-66bc5c9577-wkgrq" [90029150-f8ff-484e-a449-fa19206ab6b2] Running
	I1102 14:12:52.185147  479115 system_pods.go:89] "etcd-no-preload-150469" [a653ec43-d807-4f0b-8111-507615a63de3] Running
	I1102 14:12:52.185152  479115 system_pods.go:89] "kindnet-vm84g" [31a16d1f-9be1-46bb-a911-452fc3e27389] Running
	I1102 14:12:52.185156  479115 system_pods.go:89] "kube-apiserver-no-preload-150469" [9668b04c-ce5d-4558-b3d1-3ddb2d40c8af] Running
	I1102 14:12:52.185160  479115 system_pods.go:89] "kube-controller-manager-no-preload-150469" [5cf5e00d-7b35-495f-b5f2-6c149ee77125] Running
	I1102 14:12:52.185164  479115 system_pods.go:89] "kube-proxy-qg9np" [28814102-d017-4d78-8904-c21855b52264] Running
	I1102 14:12:52.185168  479115 system_pods.go:89] "kube-scheduler-no-preload-150469" [5785a754-7fb9-46fd-865a-2756355ba605] Running
	I1102 14:12:52.185172  479115 system_pods.go:89] "storage-provisioner" [bb34ac47-56c9-416b-944a-90dc162bf553] Running
	I1102 14:12:52.185198  479115 system_pods.go:126] duration metric: took 1.5746523s to wait for k8s-apps to be running ...
	I1102 14:12:52.185214  479115 system_svc.go:44] waiting for kubelet service to be running ....
	I1102 14:12:52.185275  479115 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1102 14:12:52.198050  479115 system_svc.go:56] duration metric: took 12.82662ms WaitForService to wait for kubelet
	I1102 14:12:52.198080  479115 kubeadm.go:587] duration metric: took 15.458643844s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1102 14:12:52.198099  479115 node_conditions.go:102] verifying NodePressure condition ...
	I1102 14:12:52.201190  479115 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1102 14:12:52.201224  479115 node_conditions.go:123] node cpu capacity is 2
	I1102 14:12:52.201241  479115 node_conditions.go:105] duration metric: took 3.134154ms to run NodePressure ...
	I1102 14:12:52.201254  479115 start.go:242] waiting for startup goroutines ...
	I1102 14:12:52.201262  479115 start.go:247] waiting for cluster config update ...
	I1102 14:12:52.201277  479115 start.go:256] writing updated cluster config ...
	I1102 14:12:52.201574  479115 ssh_runner.go:195] Run: rm -f paused
	I1102 14:12:52.205349  479115 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1102 14:12:52.208833  479115 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-wkgrq" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 14:12:52.213021  479115 pod_ready.go:94] pod "coredns-66bc5c9577-wkgrq" is "Ready"
	I1102 14:12:52.213047  479115 pod_ready.go:86] duration metric: took 4.188882ms for pod "coredns-66bc5c9577-wkgrq" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 14:12:52.215410  479115 pod_ready.go:83] waiting for pod "etcd-no-preload-150469" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 14:12:52.219679  479115 pod_ready.go:94] pod "etcd-no-preload-150469" is "Ready"
	I1102 14:12:52.219709  479115 pod_ready.go:86] duration metric: took 4.273806ms for pod "etcd-no-preload-150469" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 14:12:52.222128  479115 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-150469" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 14:12:52.232560  479115 pod_ready.go:94] pod "kube-apiserver-no-preload-150469" is "Ready"
	I1102 14:12:52.232584  479115 pod_ready.go:86] duration metric: took 10.432148ms for pod "kube-apiserver-no-preload-150469" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 14:12:52.235577  479115 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-150469" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 14:12:52.609505  479115 pod_ready.go:94] pod "kube-controller-manager-no-preload-150469" is "Ready"
	I1102 14:12:52.609536  479115 pod_ready.go:86] duration metric: took 373.930832ms for pod "kube-controller-manager-no-preload-150469" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 14:12:52.810191  479115 pod_ready.go:83] waiting for pod "kube-proxy-qg9np" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 14:12:53.209093  479115 pod_ready.go:94] pod "kube-proxy-qg9np" is "Ready"
	I1102 14:12:53.209123  479115 pod_ready.go:86] duration metric: took 398.863282ms for pod "kube-proxy-qg9np" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 14:12:53.409669  479115 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-150469" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 14:12:53.809185  479115 pod_ready.go:94] pod "kube-scheduler-no-preload-150469" is "Ready"
	I1102 14:12:53.809224  479115 pod_ready.go:86] duration metric: took 399.524583ms for pod "kube-scheduler-no-preload-150469" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 14:12:53.809237  479115 pod_ready.go:40] duration metric: took 1.603857856s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1102 14:12:53.868021  479115 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1102 14:12:53.871620  479115 out.go:179] * Done! kubectl is now configured to use "no-preload-150469" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 02 14:12:51 no-preload-150469 crio[871]: time="2025-11-02T14:12:51.074112905Z" level=info msg="Created container d384a5e3ad27ee78ea85b30e407f8e38c6d3a03814c6427593e0a811e9038457: kube-system/coredns-66bc5c9577-wkgrq/coredns" id=2f127089-4f29-4483-a2c4-3f09b3a3df61 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 14:12:51 no-preload-150469 crio[871]: time="2025-11-02T14:12:51.075143018Z" level=info msg="Starting container: d384a5e3ad27ee78ea85b30e407f8e38c6d3a03814c6427593e0a811e9038457" id=99a26d16-2459-4081-9af0-eda1af65c859 name=/runtime.v1.RuntimeService/StartContainer
	Nov 02 14:12:51 no-preload-150469 crio[871]: time="2025-11-02T14:12:51.085941388Z" level=info msg="Started container" PID=2510 containerID=d384a5e3ad27ee78ea85b30e407f8e38c6d3a03814c6427593e0a811e9038457 description=kube-system/coredns-66bc5c9577-wkgrq/coredns id=99a26d16-2459-4081-9af0-eda1af65c859 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f77c2a1ec42f2ee223e703787554fe9393682fdbbfd6a23850d70e4491622847
	Nov 02 14:12:54 no-preload-150469 crio[871]: time="2025-11-02T14:12:54.373687412Z" level=info msg="Running pod sandbox: default/busybox/POD" id=9e430cbf-a9d7-44d9-af17-78ff8a54da4c name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 02 14:12:54 no-preload-150469 crio[871]: time="2025-11-02T14:12:54.373755572Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 14:12:54 no-preload-150469 crio[871]: time="2025-11-02T14:12:54.379037838Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:e526f78955f35b72aa234977e072a9d98e9f4bc0b516a1a80cbbd0cb38b8dbca UID:e212db07-4bb6-4dba-8d5f-2fd867c01398 NetNS:/var/run/netns/094f1944-6db0-496f-b697-278a17c82146 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000079ee0}] Aliases:map[]}"
	Nov 02 14:12:54 no-preload-150469 crio[871]: time="2025-11-02T14:12:54.379080201Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 02 14:12:54 no-preload-150469 crio[871]: time="2025-11-02T14:12:54.391103887Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:e526f78955f35b72aa234977e072a9d98e9f4bc0b516a1a80cbbd0cb38b8dbca UID:e212db07-4bb6-4dba-8d5f-2fd867c01398 NetNS:/var/run/netns/094f1944-6db0-496f-b697-278a17c82146 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000079ee0}] Aliases:map[]}"
	Nov 02 14:12:54 no-preload-150469 crio[871]: time="2025-11-02T14:12:54.39127947Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 02 14:12:54 no-preload-150469 crio[871]: time="2025-11-02T14:12:54.394194767Z" level=info msg="Ran pod sandbox e526f78955f35b72aa234977e072a9d98e9f4bc0b516a1a80cbbd0cb38b8dbca with infra container: default/busybox/POD" id=9e430cbf-a9d7-44d9-af17-78ff8a54da4c name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 02 14:12:54 no-preload-150469 crio[871]: time="2025-11-02T14:12:54.395428294Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=740529cb-9474-4479-9c16-b69bcc2ed5d9 name=/runtime.v1.ImageService/ImageStatus
	Nov 02 14:12:54 no-preload-150469 crio[871]: time="2025-11-02T14:12:54.395565666Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=740529cb-9474-4479-9c16-b69bcc2ed5d9 name=/runtime.v1.ImageService/ImageStatus
	Nov 02 14:12:54 no-preload-150469 crio[871]: time="2025-11-02T14:12:54.395611172Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=740529cb-9474-4479-9c16-b69bcc2ed5d9 name=/runtime.v1.ImageService/ImageStatus
	Nov 02 14:12:54 no-preload-150469 crio[871]: time="2025-11-02T14:12:54.397153346Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=e32afd78-882f-4eb7-968c-e5ff60bc991d name=/runtime.v1.ImageService/PullImage
	Nov 02 14:12:54 no-preload-150469 crio[871]: time="2025-11-02T14:12:54.399420704Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 02 14:12:56 no-preload-150469 crio[871]: time="2025-11-02T14:12:56.67491458Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=e32afd78-882f-4eb7-968c-e5ff60bc991d name=/runtime.v1.ImageService/PullImage
	Nov 02 14:12:56 no-preload-150469 crio[871]: time="2025-11-02T14:12:56.675559081Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=743962ed-860b-4c1a-b0e4-86a73a7dc002 name=/runtime.v1.ImageService/ImageStatus
	Nov 02 14:12:56 no-preload-150469 crio[871]: time="2025-11-02T14:12:56.678950033Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=4b555775-ad02-45b3-9fb4-97789fde9c7a name=/runtime.v1.ImageService/ImageStatus
	Nov 02 14:12:56 no-preload-150469 crio[871]: time="2025-11-02T14:12:56.684301198Z" level=info msg="Creating container: default/busybox/busybox" id=dce4e5f5-6c11-4d7c-abbf-5107634ac8f8 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 14:12:56 no-preload-150469 crio[871]: time="2025-11-02T14:12:56.684430364Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 14:12:56 no-preload-150469 crio[871]: time="2025-11-02T14:12:56.690973629Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 14:12:56 no-preload-150469 crio[871]: time="2025-11-02T14:12:56.691589814Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 14:12:56 no-preload-150469 crio[871]: time="2025-11-02T14:12:56.712240161Z" level=info msg="Created container 9b0e36613fe705e73b2fc48b60b2667f3b3b1829e9821370ff0b0af0696c4725: default/busybox/busybox" id=dce4e5f5-6c11-4d7c-abbf-5107634ac8f8 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 14:12:56 no-preload-150469 crio[871]: time="2025-11-02T14:12:56.715075901Z" level=info msg="Starting container: 9b0e36613fe705e73b2fc48b60b2667f3b3b1829e9821370ff0b0af0696c4725" id=4df97c15-82d6-4a57-b2d8-8b75613a9f1c name=/runtime.v1.RuntimeService/StartContainer
	Nov 02 14:12:56 no-preload-150469 crio[871]: time="2025-11-02T14:12:56.718076869Z" level=info msg="Started container" PID=2570 containerID=9b0e36613fe705e73b2fc48b60b2667f3b3b1829e9821370ff0b0af0696c4725 description=default/busybox/busybox id=4df97c15-82d6-4a57-b2d8-8b75613a9f1c name=/runtime.v1.RuntimeService/StartContainer sandboxID=e526f78955f35b72aa234977e072a9d98e9f4bc0b516a1a80cbbd0cb38b8dbca
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	9b0e36613fe70       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   8 seconds ago       Running             busybox                   0                   e526f78955f35       busybox                                     default
	d384a5e3ad27e       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      14 seconds ago      Running             coredns                   0                   f77c2a1ec42f2       coredns-66bc5c9577-wkgrq                    kube-system
	933fbd1f3987b       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                      14 seconds ago      Running             storage-provisioner       0                   5ce2d4cba6618       storage-provisioner                         kube-system
	36a2275b5234a       docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1    25 seconds ago      Running             kindnet-cni               0                   2f1463be4fcd1       kindnet-vm84g                               kube-system
	6fe7a707b804b       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      28 seconds ago      Running             kube-proxy                0                   382291bc21d3d       kube-proxy-qg9np                            kube-system
	f7b036136a3b3       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      40 seconds ago      Running             kube-scheduler            0                   2f56de1a78453       kube-scheduler-no-preload-150469            kube-system
	f57118487dd66       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      40 seconds ago      Running             etcd                      0                   a32b4d46a440e       etcd-no-preload-150469                      kube-system
	8d9eff6ef3018       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      40 seconds ago      Running             kube-apiserver            0                   386b52fcd8826       kube-apiserver-no-preload-150469            kube-system
	bd48ad052b1f5       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      40 seconds ago      Running             kube-controller-manager   0                   ba698ffa7a823       kube-controller-manager-no-preload-150469   kube-system
	
	
	==> coredns [d384a5e3ad27ee78ea85b30e407f8e38c6d3a03814c6427593e0a811e9038457] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49086 - 2618 "HINFO IN 1923405439025258798.5652003459230476875. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013075468s
	
	
	==> describe nodes <==
	Name:               no-preload-150469
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-150469
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=170a9221ec214abbddb4c7cdac340516a92b239a
	                    minikube.k8s.io/name=no-preload-150469
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_02T14_12_32_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 02 Nov 2025 14:12:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-150469
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 02 Nov 2025 14:13:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 02 Nov 2025 14:13:02 +0000   Sun, 02 Nov 2025 14:12:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 02 Nov 2025 14:13:02 +0000   Sun, 02 Nov 2025 14:12:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 02 Nov 2025 14:13:02 +0000   Sun, 02 Nov 2025 14:12:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 02 Nov 2025 14:13:02 +0000   Sun, 02 Nov 2025 14:12:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-150469
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                132ded7b-9d34-4b24-9227-0ca0ca7ef647
	  Boot ID:                    4c303cf4-c0b6-4c0d-aa1a-d7509feb15e7
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-wkgrq                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     29s
	  kube-system                 etcd-no-preload-150469                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         36s
	  kube-system                 kindnet-vm84g                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      29s
	  kube-system                 kube-apiserver-no-preload-150469             250m (12%)    0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-controller-manager-no-preload-150469    200m (10%)    0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-proxy-qg9np                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-scheduler-no-preload-150469             100m (5%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 28s                kube-proxy       
	  Normal   Starting                 41s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 41s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  41s (x8 over 41s)  kubelet          Node no-preload-150469 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    41s (x8 over 41s)  kubelet          Node no-preload-150469 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     41s (x8 over 41s)  kubelet          Node no-preload-150469 status is now: NodeHasSufficientPID
	  Normal   Starting                 34s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 34s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  34s                kubelet          Node no-preload-150469 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    34s                kubelet          Node no-preload-150469 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     34s                kubelet          Node no-preload-150469 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           30s                node-controller  Node no-preload-150469 event: Registered Node no-preload-150469 in Controller
	  Normal   NodeReady                15s                kubelet          Node no-preload-150469 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov 2 13:50] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:51] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:52] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:54] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:55] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:56] overlayfs: idmapped layers are currently not supported
	[  +3.515963] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:57] overlayfs: idmapped layers are currently not supported
	[ +24.836033] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:58] overlayfs: idmapped layers are currently not supported
	[ +23.362553] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:59] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:01] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:02] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:03] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:04] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:06] overlayfs: idmapped layers are currently not supported
	[ +50.469589] kauditd_printk_skb: 8 callbacks suppressed
	[Nov 2 14:07] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:08] overlayfs: idmapped layers are currently not supported
	[ +11.089512] overlayfs: idmapped layers are currently not supported
	[ +33.821233] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:09] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:10] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:11] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [f57118487dd66185608077d2aa4353a830a15cb62f145021238083698e912a25] <==
	{"level":"warn","ts":"2025-11-02T14:12:26.915712Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55324","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:12:26.955540Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:12:26.999167Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:12:27.018669Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:12:27.054274Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:12:27.073384Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:12:27.118889Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:12:27.135274Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:12:27.149712Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:12:27.179602Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:12:27.187820Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:12:27.203301Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:12:27.226746Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:12:27.239920Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:12:27.266236Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:12:27.287215Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:12:27.302307Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:12:27.327907Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:12:27.339155Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:12:27.350806Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:12:27.376896Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:12:27.427398Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:12:27.469694Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:12:27.478913Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:12:27.597486Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55754","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 14:13:05 up  2:55,  0 user,  load average: 1.67, 2.86, 2.69
	Linux no-preload-150469 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [36a2275b5234aeb0f616778253f6817bd75e1e449e39f1deb6e2feeeb82a0ed6] <==
	I1102 14:12:39.816246       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1102 14:12:39.911527       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1102 14:12:39.911688       1 main.go:148] setting mtu 1500 for CNI 
	I1102 14:12:39.911706       1 main.go:178] kindnetd IP family: "ipv4"
	I1102 14:12:39.911723       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-02T14:12:40Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1102 14:12:40.116965       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1102 14:12:40.117050       1 controller.go:381] "Waiting for informer caches to sync"
	I1102 14:12:40.212830       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1102 14:12:40.213056       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1102 14:12:40.313128       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1102 14:12:40.313229       1 metrics.go:72] Registering metrics
	I1102 14:12:40.313312       1 controller.go:711] "Syncing nftables rules"
	I1102 14:12:50.117315       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1102 14:12:50.117355       1 main.go:301] handling current node
	I1102 14:13:00.210899       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1102 14:13:00.211036       1 main.go:301] handling current node
	
	
	==> kube-apiserver [8d9eff6ef3018309bad548cde14509b0458ec520d68e95fc87a2a9b8fd0cbdf7] <==
	I1102 14:12:28.666564       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1102 14:12:28.667159       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1102 14:12:28.677223       1 controller.go:667] quota admission added evaluator for: namespaces
	I1102 14:12:28.693410       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1102 14:12:28.694741       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1102 14:12:28.696394       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1102 14:12:28.719140       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1102 14:12:28.724310       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1102 14:12:29.348395       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1102 14:12:29.354652       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1102 14:12:29.354675       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1102 14:12:30.190374       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1102 14:12:30.248579       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1102 14:12:30.357417       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1102 14:12:30.365340       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1102 14:12:30.366566       1 controller.go:667] quota admission added evaluator for: endpoints
	I1102 14:12:30.372396       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1102 14:12:30.560974       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1102 14:12:31.587815       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1102 14:12:31.610529       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1102 14:12:31.621697       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1102 14:12:36.067286       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1102 14:12:36.072589       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1102 14:12:36.314669       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1102 14:12:36.414870       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [bd48ad052b1f56a6c91f97b332581968669ae60dc3ac2148fda3a1683b799fde] <==
	I1102 14:12:35.562019       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1102 14:12:35.562679       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1102 14:12:35.562764       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1102 14:12:35.564072       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1102 14:12:35.564087       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1102 14:12:35.564556       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1102 14:12:35.564642       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-150469"
	I1102 14:12:35.564683       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1102 14:12:35.564925       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1102 14:12:35.567692       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1102 14:12:35.567622       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1102 14:12:35.571513       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1102 14:12:35.574698       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1102 14:12:35.582240       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1102 14:12:35.593590       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1102 14:12:35.607484       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1102 14:12:35.607590       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1102 14:12:35.607622       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1102 14:12:35.608361       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1102 14:12:35.612103       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1102 14:12:35.614215       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1102 14:12:35.614669       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1102 14:12:35.614737       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1102 14:12:35.631732       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1102 14:12:50.565850       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [6fe7a707b804b1236ba9a62ca96c6934599224968a911c63165b64d5120734fe] <==
	I1102 14:12:37.125292       1 server_linux.go:53] "Using iptables proxy"
	I1102 14:12:37.224473       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1102 14:12:37.326732       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1102 14:12:37.326771       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1102 14:12:37.326850       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1102 14:12:37.637802       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1102 14:12:37.637854       1 server_linux.go:132] "Using iptables Proxier"
	I1102 14:12:37.653639       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1102 14:12:37.654004       1 server.go:527] "Version info" version="v1.34.1"
	I1102 14:12:37.654019       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1102 14:12:37.655350       1 config.go:200] "Starting service config controller"
	I1102 14:12:37.655360       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1102 14:12:37.655375       1 config.go:106] "Starting endpoint slice config controller"
	I1102 14:12:37.655379       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1102 14:12:37.655388       1 config.go:403] "Starting serviceCIDR config controller"
	I1102 14:12:37.655392       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1102 14:12:37.656064       1 config.go:309] "Starting node config controller"
	I1102 14:12:37.656071       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1102 14:12:37.656082       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1102 14:12:37.756217       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1102 14:12:37.756248       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1102 14:12:37.756291       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [f7b036136a3b387e3bc85c5247132af6a763eff49cbee7e24460edbf116a35e4] <==
	E1102 14:12:28.707900       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1102 14:12:28.707854       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1102 14:12:28.708099       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1102 14:12:28.708167       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1102 14:12:28.708230       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1102 14:12:28.708284       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1102 14:12:28.708328       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1102 14:12:28.708377       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1102 14:12:28.708415       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1102 14:12:28.708542       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1102 14:12:28.708582       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1102 14:12:28.708649       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1102 14:12:28.708696       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1102 14:12:28.708022       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1102 14:12:29.535596       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1102 14:12:29.566235       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1102 14:12:29.580762       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1102 14:12:29.584946       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1102 14:12:29.685714       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1102 14:12:29.722633       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1102 14:12:29.765384       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1102 14:12:29.778749       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1102 14:12:29.830906       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1102 14:12:30.147041       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1102 14:12:33.263389       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 02 14:12:32 no-preload-150469 kubelet[2030]: I1102 14:12:32.707722    2030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-no-preload-150469" podStartSLOduration=1.7077044350000001 podStartE2EDuration="1.707704435s" podCreationTimestamp="2025-11-02 14:12:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-02 14:12:32.691739061 +0000 UTC m=+1.273763528" watchObservedRunningTime="2025-11-02 14:12:32.707704435 +0000 UTC m=+1.289728902"
	Nov 02 14:12:35 no-preload-150469 kubelet[2030]: I1102 14:12:35.608534    2030 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 02 14:12:35 no-preload-150469 kubelet[2030]: I1102 14:12:35.609133    2030 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 02 14:12:36 no-preload-150469 kubelet[2030]: I1102 14:12:36.556366    2030 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/31a16d1f-9be1-46bb-a911-452fc3e27389-cni-cfg\") pod \"kindnet-vm84g\" (UID: \"31a16d1f-9be1-46bb-a911-452fc3e27389\") " pod="kube-system/kindnet-vm84g"
	Nov 02 14:12:36 no-preload-150469 kubelet[2030]: I1102 14:12:36.556425    2030 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/31a16d1f-9be1-46bb-a911-452fc3e27389-xtables-lock\") pod \"kindnet-vm84g\" (UID: \"31a16d1f-9be1-46bb-a911-452fc3e27389\") " pod="kube-system/kindnet-vm84g"
	Nov 02 14:12:36 no-preload-150469 kubelet[2030]: I1102 14:12:36.556444    2030 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/31a16d1f-9be1-46bb-a911-452fc3e27389-lib-modules\") pod \"kindnet-vm84g\" (UID: \"31a16d1f-9be1-46bb-a911-452fc3e27389\") " pod="kube-system/kindnet-vm84g"
	Nov 02 14:12:36 no-preload-150469 kubelet[2030]: I1102 14:12:36.556462    2030 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/28814102-d017-4d78-8904-c21855b52264-lib-modules\") pod \"kube-proxy-qg9np\" (UID: \"28814102-d017-4d78-8904-c21855b52264\") " pod="kube-system/kube-proxy-qg9np"
	Nov 02 14:12:36 no-preload-150469 kubelet[2030]: I1102 14:12:36.556578    2030 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctlxw\" (UniqueName: \"kubernetes.io/projected/31a16d1f-9be1-46bb-a911-452fc3e27389-kube-api-access-ctlxw\") pod \"kindnet-vm84g\" (UID: \"31a16d1f-9be1-46bb-a911-452fc3e27389\") " pod="kube-system/kindnet-vm84g"
	Nov 02 14:12:36 no-preload-150469 kubelet[2030]: I1102 14:12:36.556600    2030 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/28814102-d017-4d78-8904-c21855b52264-kube-proxy\") pod \"kube-proxy-qg9np\" (UID: \"28814102-d017-4d78-8904-c21855b52264\") " pod="kube-system/kube-proxy-qg9np"
	Nov 02 14:12:36 no-preload-150469 kubelet[2030]: I1102 14:12:36.556618    2030 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/28814102-d017-4d78-8904-c21855b52264-xtables-lock\") pod \"kube-proxy-qg9np\" (UID: \"28814102-d017-4d78-8904-c21855b52264\") " pod="kube-system/kube-proxy-qg9np"
	Nov 02 14:12:36 no-preload-150469 kubelet[2030]: I1102 14:12:36.556697    2030 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4l92\" (UniqueName: \"kubernetes.io/projected/28814102-d017-4d78-8904-c21855b52264-kube-api-access-l4l92\") pod \"kube-proxy-qg9np\" (UID: \"28814102-d017-4d78-8904-c21855b52264\") " pod="kube-system/kube-proxy-qg9np"
	Nov 02 14:12:36 no-preload-150469 kubelet[2030]: I1102 14:12:36.678319    2030 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 02 14:12:36 no-preload-150469 kubelet[2030]: W1102 14:12:36.850074    2030 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/aa4ae44e602159b250d81a350b59577c08056d4e598bda926fd0010cba92fa48/crio-2f1463be4fcd1c00685c9a503003b42df62306ef710feebbf9a760309907979f WatchSource:0}: Error finding container 2f1463be4fcd1c00685c9a503003b42df62306ef710feebbf9a760309907979f: Status 404 returned error can't find the container with id 2f1463be4fcd1c00685c9a503003b42df62306ef710feebbf9a760309907979f
	Nov 02 14:12:40 no-preload-150469 kubelet[2030]: I1102 14:12:40.605158    2030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qg9np" podStartSLOduration=4.605140991 podStartE2EDuration="4.605140991s" podCreationTimestamp="2025-11-02 14:12:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-02 14:12:37.647081585 +0000 UTC m=+6.229106060" watchObservedRunningTime="2025-11-02 14:12:40.605140991 +0000 UTC m=+9.187165466"
	Nov 02 14:12:40 no-preload-150469 kubelet[2030]: I1102 14:12:40.667093    2030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-vm84g" podStartSLOduration=1.804776836 podStartE2EDuration="4.667074206s" podCreationTimestamp="2025-11-02 14:12:36 +0000 UTC" firstStartedPulling="2025-11-02 14:12:36.874163445 +0000 UTC m=+5.456187911" lastFinishedPulling="2025-11-02 14:12:39.736460806 +0000 UTC m=+8.318485281" observedRunningTime="2025-11-02 14:12:40.650515604 +0000 UTC m=+9.232540071" watchObservedRunningTime="2025-11-02 14:12:40.667074206 +0000 UTC m=+9.249098673"
	Nov 02 14:12:50 no-preload-150469 kubelet[2030]: I1102 14:12:50.341235    2030 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 02 14:12:50 no-preload-150469 kubelet[2030]: I1102 14:12:50.573781    2030 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/90029150-f8ff-484e-a449-fa19206ab6b2-config-volume\") pod \"coredns-66bc5c9577-wkgrq\" (UID: \"90029150-f8ff-484e-a449-fa19206ab6b2\") " pod="kube-system/coredns-66bc5c9577-wkgrq"
	Nov 02 14:12:50 no-preload-150469 kubelet[2030]: I1102 14:12:50.573842    2030 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blj9v\" (UniqueName: \"kubernetes.io/projected/90029150-f8ff-484e-a449-fa19206ab6b2-kube-api-access-blj9v\") pod \"coredns-66bc5c9577-wkgrq\" (UID: \"90029150-f8ff-484e-a449-fa19206ab6b2\") " pod="kube-system/coredns-66bc5c9577-wkgrq"
	Nov 02 14:12:50 no-preload-150469 kubelet[2030]: I1102 14:12:50.573871    2030 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/bb34ac47-56c9-416b-944a-90dc162bf553-tmp\") pod \"storage-provisioner\" (UID: \"bb34ac47-56c9-416b-944a-90dc162bf553\") " pod="kube-system/storage-provisioner"
	Nov 02 14:12:50 no-preload-150469 kubelet[2030]: I1102 14:12:50.573890    2030 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pv4zr\" (UniqueName: \"kubernetes.io/projected/bb34ac47-56c9-416b-944a-90dc162bf553-kube-api-access-pv4zr\") pod \"storage-provisioner\" (UID: \"bb34ac47-56c9-416b-944a-90dc162bf553\") " pod="kube-system/storage-provisioner"
	Nov 02 14:12:51 no-preload-150469 kubelet[2030]: W1102 14:12:51.029072    2030 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/aa4ae44e602159b250d81a350b59577c08056d4e598bda926fd0010cba92fa48/crio-f77c2a1ec42f2ee223e703787554fe9393682fdbbfd6a23850d70e4491622847 WatchSource:0}: Error finding container f77c2a1ec42f2ee223e703787554fe9393682fdbbfd6a23850d70e4491622847: Status 404 returned error can't find the container with id f77c2a1ec42f2ee223e703787554fe9393682fdbbfd6a23850d70e4491622847
	Nov 02 14:12:51 no-preload-150469 kubelet[2030]: I1102 14:12:51.696942    2030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-wkgrq" podStartSLOduration=15.69692315 podStartE2EDuration="15.69692315s" podCreationTimestamp="2025-11-02 14:12:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-02 14:12:51.674999629 +0000 UTC m=+20.257024104" watchObservedRunningTime="2025-11-02 14:12:51.69692315 +0000 UTC m=+20.278947641"
	Nov 02 14:12:51 no-preload-150469 kubelet[2030]: I1102 14:12:51.718003    2030 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.717985388 podStartE2EDuration="13.717985388s" podCreationTimestamp="2025-11-02 14:12:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-02 14:12:51.699594958 +0000 UTC m=+20.281619613" watchObservedRunningTime="2025-11-02 14:12:51.717985388 +0000 UTC m=+20.300009855"
	Nov 02 14:12:54 no-preload-150469 kubelet[2030]: I1102 14:12:54.104734    2030 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8m9dz\" (UniqueName: \"kubernetes.io/projected/e212db07-4bb6-4dba-8d5f-2fd867c01398-kube-api-access-8m9dz\") pod \"busybox\" (UID: \"e212db07-4bb6-4dba-8d5f-2fd867c01398\") " pod="default/busybox"
	Nov 02 14:12:54 no-preload-150469 kubelet[2030]: W1102 14:12:54.393094    2030 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/aa4ae44e602159b250d81a350b59577c08056d4e598bda926fd0010cba92fa48/crio-e526f78955f35b72aa234977e072a9d98e9f4bc0b516a1a80cbbd0cb38b8dbca WatchSource:0}: Error finding container e526f78955f35b72aa234977e072a9d98e9f4bc0b516a1a80cbbd0cb38b8dbca: Status 404 returned error can't find the container with id e526f78955f35b72aa234977e072a9d98e9f4bc0b516a1a80cbbd0cb38b8dbca
	
	
	==> storage-provisioner [933fbd1f3987b7a8d0b64f8987d63855c7cce307ca0741ac49cabedc7c2e4701] <==
	I1102 14:12:51.099229       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1102 14:12:51.118763       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1102 14:12:51.118905       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1102 14:12:51.121860       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:12:51.148383       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1102 14:12:51.148550       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1102 14:12:51.148736       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-150469_6f726ed9-2eb5-4d05-b905-b99d93c3a59a!
	I1102 14:12:51.148843       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8294f6b4-bba2-4f06-8d40-727928497485", APIVersion:"v1", ResourceVersion:"450", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-150469_6f726ed9-2eb5-4d05-b905-b99d93c3a59a became leader
	W1102 14:12:51.155977       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:12:51.160257       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1102 14:12:51.249568       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-150469_6f726ed9-2eb5-4d05-b905-b99d93c3a59a!
	W1102 14:12:53.163050       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:12:53.167864       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:12:55.171725       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:12:55.180023       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:12:57.183762       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:12:57.191070       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:12:59.196182       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:12:59.200312       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:13:01.203967       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:13:01.208733       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:13:03.212263       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:13:03.216341       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:13:05.219392       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:13:05.223801       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
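
The storage-provisioner log above is noisy because its leader election locks on a v1 Endpoints object (leaderelection.go acquires kube-system/k8s.io-minikube-hostpath, and the LeaderElection event references Kind:"Endpoints"), and every request touching v1 Endpoints now returns the deprecation warning. Below is a minimal sketch of the Lease-based lock that current client-go consumers use to avoid the warning, assuming in-cluster config; the lock name is taken from the log, while the identity and timings are illustrative. This is not the provisioner's actual code.

	package main

	import (
		"context"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// Lease lock in coordination.k8s.io/v1 -- the replacement the warning
		// points at; the deprecated path uses a v1 Endpoints object as the lock.
		lock := &resourcelock.LeaseLock{
			LeaseMeta: metav1.ObjectMeta{
				Name:      "k8s.io-minikube-hostpath", // lock name taken from the log
				Namespace: "kube-system",
			},
			Client:     client.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: "provisioner-example"},
		}

		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:            lock,
			LeaseDuration:   15 * time.Second,
			RenewDeadline:   10 * time.Second,
			RetryPeriod:     2 * time.Second,
			ReleaseOnCancel: true,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) {
					// Start the provisioner controller here, as the real binary
					// does once it acquires the lease.
				},
				OnStoppedLeading: func() {
					// Lost the lease: stop provisioning work.
				},
			},
		})
	}

With a coordination.k8s.io/v1 Lease as the lock object, the API server stops emitting the Endpoints deprecation warning on every acquire/renew cycle seen above.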
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-150469 -n no-preload-150469
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-150469 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.40s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (6.62s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-150469 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p no-preload-150469 --alsologtostderr -v=1: exit status 80 (2.010804156s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-150469 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1102 14:14:25.561337  488689 out.go:360] Setting OutFile to fd 1 ...
	I1102 14:14:25.561581  488689 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 14:14:25.561611  488689 out.go:374] Setting ErrFile to fd 2...
	I1102 14:14:25.561707  488689 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 14:14:25.562051  488689 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-293314/.minikube/bin
	I1102 14:14:25.562394  488689 out.go:368] Setting JSON to false
	I1102 14:14:25.562441  488689 mustload.go:66] Loading cluster: no-preload-150469
	I1102 14:14:25.563004  488689 config.go:182] Loaded profile config "no-preload-150469": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 14:14:25.563562  488689 cli_runner.go:164] Run: docker container inspect no-preload-150469 --format={{.State.Status}}
	I1102 14:14:25.581259  488689 host.go:66] Checking if "no-preload-150469" exists ...
	I1102 14:14:25.581691  488689 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 14:14:25.641777  488689 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-02 14:14:25.626917314 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1102 14:14:25.642841  488689 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1761658712-21800/minikube-v1.37.0-1761658712-21800-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1761658712-21800-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-150469 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1102 14:14:25.647585  488689 out.go:179] * Pausing node no-preload-150469 ... 
	I1102 14:14:25.651506  488689 host.go:66] Checking if "no-preload-150469" exists ...
	I1102 14:14:25.651877  488689 ssh_runner.go:195] Run: systemctl --version
	I1102 14:14:25.651930  488689 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-150469
	I1102 14:14:25.679140  488689 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33436 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/no-preload-150469/id_rsa Username:docker}
	I1102 14:14:25.785798  488689 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1102 14:14:25.799869  488689 pause.go:52] kubelet running: true
	I1102 14:14:25.799951  488689 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1102 14:14:26.066332  488689 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1102 14:14:26.066419  488689 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1102 14:14:26.138468  488689 cri.go:89] found id: "bb15dea2065f6d7a3be2daadead55036243cbc62d492b1a23b588e6a235bebd0"
	I1102 14:14:26.138541  488689 cri.go:89] found id: "5c16fe809384bbef876c2382a1b0a8984689ea91e90b0f11e1c3d5d2e31b593e"
	I1102 14:14:26.138561  488689 cri.go:89] found id: "65de798ea26c14d06e8e1ca4be95b06f036a986330e5ac827e686e19efdb4346"
	I1102 14:14:26.138583  488689 cri.go:89] found id: "8427b59e04cb889ecd2b15bba53ef56dd6e97a4b0e3a181a69cb0987e6740e29"
	I1102 14:14:26.138649  488689 cri.go:89] found id: "7f54f601eb74c81b089dd8333c2ac1ee002c336d383d6ca8f01b893371d53820"
	I1102 14:14:26.138663  488689 cri.go:89] found id: "0d290268ce1ba1c435beb6a5c872eb4214b0dab49611f26a980150c8cf765731"
	I1102 14:14:26.138668  488689 cri.go:89] found id: "ae39d005c17d3eece4e0835d8098b6b121095716785eb6ec522a5afe4f89a68c"
	I1102 14:14:26.138672  488689 cri.go:89] found id: "a519dcf9b13e8f0169f57e526ea9548babc82276dc427bc14eda821e798d8cc0"
	I1102 14:14:26.138675  488689 cri.go:89] found id: "78689f8cb995ba031b8e14be6ecf0557f861d2852066ab8bb9395ec9c1275bcc"
	I1102 14:14:26.138681  488689 cri.go:89] found id: "39bd3315676431d20c326f0fa08a65e0c6fe873bc142b56bb40acbe91691b013"
	I1102 14:14:26.138695  488689 cri.go:89] found id: "c1a14fc8d34ef2b0318dcfde9cb1a935bc5bd449b2ddc86097fda87d37278646"
	I1102 14:14:26.138705  488689 cri.go:89] found id: ""
	I1102 14:14:26.138754  488689 ssh_runner.go:195] Run: sudo runc list -f json
	I1102 14:14:26.158883  488689 retry.go:31] will retry after 265.765223ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T14:14:26Z" level=error msg="open /run/runc: no such file or directory"
	I1102 14:14:26.425468  488689 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1102 14:14:26.438752  488689 pause.go:52] kubelet running: false
	I1102 14:14:26.438830  488689 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1102 14:14:26.614852  488689 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1102 14:14:26.614931  488689 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1102 14:14:26.693368  488689 cri.go:89] found id: "bb15dea2065f6d7a3be2daadead55036243cbc62d492b1a23b588e6a235bebd0"
	I1102 14:14:26.693437  488689 cri.go:89] found id: "5c16fe809384bbef876c2382a1b0a8984689ea91e90b0f11e1c3d5d2e31b593e"
	I1102 14:14:26.693457  488689 cri.go:89] found id: "65de798ea26c14d06e8e1ca4be95b06f036a986330e5ac827e686e19efdb4346"
	I1102 14:14:26.693480  488689 cri.go:89] found id: "8427b59e04cb889ecd2b15bba53ef56dd6e97a4b0e3a181a69cb0987e6740e29"
	I1102 14:14:26.693516  488689 cri.go:89] found id: "7f54f601eb74c81b089dd8333c2ac1ee002c336d383d6ca8f01b893371d53820"
	I1102 14:14:26.693540  488689 cri.go:89] found id: "0d290268ce1ba1c435beb6a5c872eb4214b0dab49611f26a980150c8cf765731"
	I1102 14:14:26.693561  488689 cri.go:89] found id: "ae39d005c17d3eece4e0835d8098b6b121095716785eb6ec522a5afe4f89a68c"
	I1102 14:14:26.693581  488689 cri.go:89] found id: "a519dcf9b13e8f0169f57e526ea9548babc82276dc427bc14eda821e798d8cc0"
	I1102 14:14:26.693600  488689 cri.go:89] found id: "78689f8cb995ba031b8e14be6ecf0557f861d2852066ab8bb9395ec9c1275bcc"
	I1102 14:14:26.693629  488689 cri.go:89] found id: "39bd3315676431d20c326f0fa08a65e0c6fe873bc142b56bb40acbe91691b013"
	I1102 14:14:26.693654  488689 cri.go:89] found id: "c1a14fc8d34ef2b0318dcfde9cb1a935bc5bd449b2ddc86097fda87d37278646"
	I1102 14:14:26.693674  488689 cri.go:89] found id: ""
	I1102 14:14:26.693756  488689 ssh_runner.go:195] Run: sudo runc list -f json
	I1102 14:14:26.705211  488689 retry.go:31] will retry after 497.076582ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T14:14:26Z" level=error msg="open /run/runc: no such file or directory"
	I1102 14:14:27.202968  488689 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1102 14:14:27.216357  488689 pause.go:52] kubelet running: false
	I1102 14:14:27.216423  488689 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1102 14:14:27.388713  488689 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1102 14:14:27.388831  488689 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1102 14:14:27.469806  488689 cri.go:89] found id: "bb15dea2065f6d7a3be2daadead55036243cbc62d492b1a23b588e6a235bebd0"
	I1102 14:14:27.469832  488689 cri.go:89] found id: "5c16fe809384bbef876c2382a1b0a8984689ea91e90b0f11e1c3d5d2e31b593e"
	I1102 14:14:27.469837  488689 cri.go:89] found id: "65de798ea26c14d06e8e1ca4be95b06f036a986330e5ac827e686e19efdb4346"
	I1102 14:14:27.469842  488689 cri.go:89] found id: "8427b59e04cb889ecd2b15bba53ef56dd6e97a4b0e3a181a69cb0987e6740e29"
	I1102 14:14:27.469845  488689 cri.go:89] found id: "7f54f601eb74c81b089dd8333c2ac1ee002c336d383d6ca8f01b893371d53820"
	I1102 14:14:27.469848  488689 cri.go:89] found id: "0d290268ce1ba1c435beb6a5c872eb4214b0dab49611f26a980150c8cf765731"
	I1102 14:14:27.469851  488689 cri.go:89] found id: "ae39d005c17d3eece4e0835d8098b6b121095716785eb6ec522a5afe4f89a68c"
	I1102 14:14:27.469854  488689 cri.go:89] found id: "a519dcf9b13e8f0169f57e526ea9548babc82276dc427bc14eda821e798d8cc0"
	I1102 14:14:27.469857  488689 cri.go:89] found id: "78689f8cb995ba031b8e14be6ecf0557f861d2852066ab8bb9395ec9c1275bcc"
	I1102 14:14:27.469864  488689 cri.go:89] found id: "39bd3315676431d20c326f0fa08a65e0c6fe873bc142b56bb40acbe91691b013"
	I1102 14:14:27.469874  488689 cri.go:89] found id: "c1a14fc8d34ef2b0318dcfde9cb1a935bc5bd449b2ddc86097fda87d37278646"
	I1102 14:14:27.469887  488689 cri.go:89] found id: ""
	I1102 14:14:27.469961  488689 ssh_runner.go:195] Run: sudo runc list -f json
	I1102 14:14:27.486285  488689 out.go:203] 
	W1102 14:14:27.489414  488689 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T14:14:27Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T14:14:27Z" level=error msg="open /run/runc: no such file or directory"
	
	W1102 14:14:27.489448  488689 out.go:285] * 
	* 
	W1102 14:14:27.496783  488689 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1102 14:14:27.499819  488689 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p no-preload-150469 --alsologtostderr -v=1 failed: exit status 80
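
The stderr above pins down why pause fails: after disabling the kubelet, minikube enumerates containers through crictl successfully, but it then runs sudo runc list -f json, and on this CRI-O node the runc state directory /run/runc does not exist, so every attempt fails identically until the command exits with GUEST_PAUSE. Below is a minimal sketch of the retry-then-surface-stderr pattern visible in the log, assuming runc is on PATH; the backoff values mirror the retry.go lines above, and the helper names are illustrative rather than minikube's actual implementation.

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// listRuntimeContainers shells out to `runc list -f json`, mirroring the
	// calls logged above. It fails when the runc state dir (/run/runc) is absent.
	func listRuntimeContainers() ([]byte, error) {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err != nil {
			return nil, fmt.Errorf("runc list: %w: %s", err, out)
		}
		return out, nil
	}

	func main() {
		// Backoff loosely modeled on the retry.go lines in the log
		// (~266ms, then ~497ms, then give up).
		delays := []time.Duration{266 * time.Millisecond, 497 * time.Millisecond}
		var lastErr error
		for i := 0; ; i++ {
			out, err := listRuntimeContainers()
			if err == nil {
				fmt.Printf("running containers: %s\n", out)
				return
			}
			lastErr = err
			if i >= len(delays) {
				break
			}
			fmt.Printf("will retry after %v: %v\n", delays[i], err)
			time.Sleep(delays[i])
		}
		fmt.Printf("giving up: %v\n", lastErr)
	}

Listing through crictl, which talks to CRI-O over its socket instead of reading runc's on-disk state, would sidestep the missing /run/runc directory entirely; that is consistent with the crictl ps calls above succeeding on every pass while runc list never does.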
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-150469
helpers_test.go:243: (dbg) docker inspect no-preload-150469:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "aa4ae44e602159b250d81a350b59577c08056d4e598bda926fd0010cba92fa48",
	        "Created": "2025-11-02T14:11:51.659937726Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 483385,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-02T14:13:19.26401586Z",
	            "FinishedAt": "2025-11-02T14:13:18.275064561Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/aa4ae44e602159b250d81a350b59577c08056d4e598bda926fd0010cba92fa48/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/aa4ae44e602159b250d81a350b59577c08056d4e598bda926fd0010cba92fa48/hostname",
	        "HostsPath": "/var/lib/docker/containers/aa4ae44e602159b250d81a350b59577c08056d4e598bda926fd0010cba92fa48/hosts",
	        "LogPath": "/var/lib/docker/containers/aa4ae44e602159b250d81a350b59577c08056d4e598bda926fd0010cba92fa48/aa4ae44e602159b250d81a350b59577c08056d4e598bda926fd0010cba92fa48-json.log",
	        "Name": "/no-preload-150469",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-150469:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-150469",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "aa4ae44e602159b250d81a350b59577c08056d4e598bda926fd0010cba92fa48",
	                "LowerDir": "/var/lib/docker/overlay2/8a6aaf28eb401f956308bc06ae686510e116c66e0d46b46263a0d8a79fbe08f8-init/diff:/var/lib/docker/overlay2/53058afe6acd7391f639f551e07f446f6a39a991b699aea18507cb21a1652e97/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8a6aaf28eb401f956308bc06ae686510e116c66e0d46b46263a0d8a79fbe08f8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8a6aaf28eb401f956308bc06ae686510e116c66e0d46b46263a0d8a79fbe08f8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8a6aaf28eb401f956308bc06ae686510e116c66e0d46b46263a0d8a79fbe08f8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-150469",
	                "Source": "/var/lib/docker/volumes/no-preload-150469/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-150469",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-150469",
	                "name.minikube.sigs.k8s.io": "no-preload-150469",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a96558b0c6411858feeba32ca4202ceab558fb3fadb76780a30510cdcfbb7a37",
	            "SandboxKey": "/var/run/docker/netns/a96558b0c641",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33436"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33437"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33440"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33438"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33439"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-150469": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f6:07:3d:1d:b8:4d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "04b125ad348b31edf412b7cd44a1ba32814c5e6b6c1a080d912d4d879cabcf90",
	                    "EndpointID": "9fd1467ede6a7380b6f34bda1444a1118552a98f3fadedfe588c2617362660bd",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-150469",
	                        "aa4ae44e6021"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
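
The inspect output above shows the kic container publishing each service port to a loopback address on the host (22/tcp -> 127.0.0.1:33436, 8443/tcp -> 127.0.0.1:33439, and so on); later in this log the harness reads those bindings back with the Go template {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}. The following standalone Go sketch (illustrative only, not part of minikube; the file name portmap.go is made up) does the same thing by decoding the inspect JSON directly:

	// portmap.go: print host port bindings from `docker inspect` JSON on stdin.
	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	// container mirrors only the inspect fields needed here.
	type container struct {
		Name            string
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func main() {
		// docker inspect emits a JSON array, one element per container.
		var containers []container
		if err := json.NewDecoder(os.Stdin).Decode(&containers); err != nil {
			fmt.Fprintln(os.Stderr, "decode:", err)
			os.Exit(1)
		}
		for _, c := range containers {
			for proto, bindings := range c.NetworkSettings.Ports {
				for _, b := range bindings {
					fmt.Printf("%s %s -> %s:%s\n", c.Name, proto, b.HostIp, b.HostPort)
				}
			}
		}
	}

Run it as, for example: docker inspect no-preload-150469 | go run portmap.go
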
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-150469 -n no-preload-150469
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-150469 -n no-preload-150469: exit status 2 (368.276564ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
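For context on the "(may be ok)" remark: per the minikube status help text, the command encodes component health in the exit code's low bits, from right to left, roughly 1 = host not OK, 2 = cluster not OK, 4 = Kubernetes not OK (so 7 would mean all three degraded). An exit status of 2 while Host reports Running is therefore the expected shape for a paused cluster. A tiny Go sketch decoding a code under that assumption (the bit meanings come from the help text, not from this report):

	// statusbits.go: decode a `minikube status` exit code into component flags.
	package main

	import (
		"fmt"
		"os"
		"strconv"
	)

	func main() {
		if len(os.Args) != 2 {
			fmt.Fprintln(os.Stderr, "usage: statusbits <exit-code>")
			os.Exit(1)
		}
		code, err := strconv.Atoi(os.Args[1])
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// Assumed bit layout (from `minikube status --help`):
		//   bit 0 (1): host/VM not OK
		//   bit 1 (2): cluster not OK
		//   bit 2 (4): Kubernetes not OK
		fmt.Printf("host ok: %v, cluster ok: %v, kubernetes ok: %v\n",
			code&1 == 0, code&2 == 0, code&4 == 0)
	}

For the run above, "go run statusbits.go 2" prints: host ok: true, cluster ok: false, kubernetes ok: true.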
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-150469 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-150469 logs -n 25: (1.335125098s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p cert-expiration-114321 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-114321   │ jenkins │ v1.37.0 │ 02 Nov 25 14:07 UTC │ 02 Nov 25 14:08 UTC │
	│ delete  │ -p force-systemd-env-263133                                                                                                                                                                                                                   │ force-systemd-env-263133 │ jenkins │ v1.37.0 │ 02 Nov 25 14:08 UTC │ 02 Nov 25 14:08 UTC │
	│ start   │ -p cert-options-935084 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-935084      │ jenkins │ v1.37.0 │ 02 Nov 25 14:08 UTC │ 02 Nov 25 14:09 UTC │
	│ ssh     │ cert-options-935084 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-935084      │ jenkins │ v1.37.0 │ 02 Nov 25 14:09 UTC │ 02 Nov 25 14:09 UTC │
	│ ssh     │ -p cert-options-935084 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-935084      │ jenkins │ v1.37.0 │ 02 Nov 25 14:09 UTC │ 02 Nov 25 14:09 UTC │
	│ delete  │ -p cert-options-935084                                                                                                                                                                                                                        │ cert-options-935084      │ jenkins │ v1.37.0 │ 02 Nov 25 14:09 UTC │ 02 Nov 25 14:09 UTC │
	│ start   │ -p old-k8s-version-873713 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-873713   │ jenkins │ v1.37.0 │ 02 Nov 25 14:09 UTC │ 02 Nov 25 14:10 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-873713 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-873713   │ jenkins │ v1.37.0 │ 02 Nov 25 14:10 UTC │                     │
	│ stop    │ -p old-k8s-version-873713 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-873713   │ jenkins │ v1.37.0 │ 02 Nov 25 14:10 UTC │ 02 Nov 25 14:10 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-873713 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-873713   │ jenkins │ v1.37.0 │ 02 Nov 25 14:10 UTC │ 02 Nov 25 14:10 UTC │
	│ start   │ -p old-k8s-version-873713 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-873713   │ jenkins │ v1.37.0 │ 02 Nov 25 14:10 UTC │ 02 Nov 25 14:11 UTC │
	│ start   │ -p cert-expiration-114321 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-114321   │ jenkins │ v1.37.0 │ 02 Nov 25 14:11 UTC │ 02 Nov 25 14:13 UTC │
	│ image   │ old-k8s-version-873713 image list --format=json                                                                                                                                                                                               │ old-k8s-version-873713   │ jenkins │ v1.37.0 │ 02 Nov 25 14:11 UTC │ 02 Nov 25 14:11 UTC │
	│ pause   │ -p old-k8s-version-873713 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-873713   │ jenkins │ v1.37.0 │ 02 Nov 25 14:11 UTC │                     │
	│ delete  │ -p old-k8s-version-873713                                                                                                                                                                                                                     │ old-k8s-version-873713   │ jenkins │ v1.37.0 │ 02 Nov 25 14:11 UTC │ 02 Nov 25 14:11 UTC │
	│ delete  │ -p old-k8s-version-873713                                                                                                                                                                                                                     │ old-k8s-version-873713   │ jenkins │ v1.37.0 │ 02 Nov 25 14:11 UTC │ 02 Nov 25 14:11 UTC │
	│ start   │ -p no-preload-150469 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-150469        │ jenkins │ v1.37.0 │ 02 Nov 25 14:11 UTC │ 02 Nov 25 14:12 UTC │
	│ addons  │ enable metrics-server -p no-preload-150469 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-150469        │ jenkins │ v1.37.0 │ 02 Nov 25 14:13 UTC │                     │
	│ stop    │ -p no-preload-150469 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-150469        │ jenkins │ v1.37.0 │ 02 Nov 25 14:13 UTC │ 02 Nov 25 14:13 UTC │
	│ addons  │ enable dashboard -p no-preload-150469 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-150469        │ jenkins │ v1.37.0 │ 02 Nov 25 14:13 UTC │ 02 Nov 25 14:13 UTC │
	│ start   │ -p no-preload-150469 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-150469        │ jenkins │ v1.37.0 │ 02 Nov 25 14:13 UTC │ 02 Nov 25 14:14 UTC │
	│ delete  │ -p cert-expiration-114321                                                                                                                                                                                                                     │ cert-expiration-114321   │ jenkins │ v1.37.0 │ 02 Nov 25 14:13 UTC │ 02 Nov 25 14:13 UTC │
	│ start   │ -p embed-certs-955646 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-955646       │ jenkins │ v1.37.0 │ 02 Nov 25 14:13 UTC │                     │
	│ image   │ no-preload-150469 image list --format=json                                                                                                                                                                                                    │ no-preload-150469        │ jenkins │ v1.37.0 │ 02 Nov 25 14:14 UTC │ 02 Nov 25 14:14 UTC │
	│ pause   │ -p no-preload-150469 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-150469        │ jenkins │ v1.37.0 │ 02 Nov 25 14:14 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/02 14:13:32
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1102 14:13:32.802295  485527 out.go:360] Setting OutFile to fd 1 ...
	I1102 14:13:32.802402  485527 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 14:13:32.802457  485527 out.go:374] Setting ErrFile to fd 2...
	I1102 14:13:32.802464  485527 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 14:13:32.802724  485527 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-293314/.minikube/bin
	I1102 14:13:32.803133  485527 out.go:368] Setting JSON to false
	I1102 14:13:32.804093  485527 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10565,"bootTime":1762082248,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1102 14:13:32.804153  485527 start.go:143] virtualization:  
	I1102 14:13:32.807816  485527 out.go:179] * [embed-certs-955646] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1102 14:13:32.812492  485527 out.go:179]   - MINIKUBE_LOCATION=21808
	I1102 14:13:32.812570  485527 notify.go:221] Checking for updates...
	I1102 14:13:32.819110  485527 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1102 14:13:32.822290  485527 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-293314/kubeconfig
	I1102 14:13:32.825432  485527 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-293314/.minikube
	I1102 14:13:32.828564  485527 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1102 14:13:32.831675  485527 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1102 14:13:32.835239  485527 config.go:182] Loaded profile config "no-preload-150469": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 14:13:32.835338  485527 driver.go:422] Setting default libvirt URI to qemu:///system
	I1102 14:13:32.895226  485527 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1102 14:13:32.895374  485527 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 14:13:33.009592  485527 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-02 14:13:32.992790107 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1102 14:13:33.009719  485527 docker.go:319] overlay module found
	I1102 14:13:33.013023  485527 out.go:179] * Using the docker driver based on user configuration
	I1102 14:13:33.015900  485527 start.go:309] selected driver: docker
	I1102 14:13:33.015920  485527 start.go:930] validating driver "docker" against <nil>
	I1102 14:13:33.015934  485527 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1102 14:13:33.016649  485527 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 14:13:33.134911  485527 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-02 14:13:33.123830709 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1102 14:13:33.135067  485527 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1102 14:13:33.135332  485527 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1102 14:13:33.138367  485527 out.go:179] * Using Docker driver with root privileges
	I1102 14:13:33.141236  485527 cni.go:84] Creating CNI manager for ""
	I1102 14:13:33.141314  485527 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1102 14:13:33.141329  485527 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1102 14:13:33.141417  485527 start.go:353] cluster config:
	{Name:embed-certs-955646 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-955646 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1102 14:13:33.144565  485527 out.go:179] * Starting "embed-certs-955646" primary control-plane node in "embed-certs-955646" cluster
	I1102 14:13:33.147386  485527 cache.go:124] Beginning downloading kic base image for docker with crio
	I1102 14:13:33.150432  485527 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1102 14:13:33.153359  485527 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1102 14:13:33.153418  485527 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21808-293314/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1102 14:13:33.153437  485527 cache.go:59] Caching tarball of preloaded images
	I1102 14:13:33.153524  485527 preload.go:233] Found /home/jenkins/minikube-integration/21808-293314/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1102 14:13:33.153538  485527 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1102 14:13:33.153652  485527 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/embed-certs-955646/config.json ...
	I1102 14:13:33.153676  485527 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/embed-certs-955646/config.json: {Name:mka4dc94076eff42daaba5da6a6a891c3a2e48ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 14:13:33.153830  485527 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1102 14:13:33.180403  485527 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1102 14:13:33.180429  485527 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1102 14:13:33.180442  485527 cache.go:233] Successfully downloaded all kic artifacts
	I1102 14:13:33.180465  485527 start.go:360] acquireMachinesLock for embed-certs-955646: {Name:mke26bb2e28d5dc8d577d151206240e9d92b1828 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1102 14:13:33.180580  485527 start.go:364] duration metric: took 94.598µs to acquireMachinesLock for "embed-certs-955646"
	I1102 14:13:33.180611  485527 start.go:93] Provisioning new machine with config: &{Name:embed-certs-955646 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-955646 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1102 14:13:33.180689  485527 start.go:125] createHost starting for "" (driver="docker")
	I1102 14:13:29.399527  483255 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1102 14:13:29.399557  483255 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1102 14:13:29.399630  483255 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-150469
	I1102 14:13:29.407598  483255 addons.go:239] Setting addon default-storageclass=true in "no-preload-150469"
	W1102 14:13:29.407620  483255 addons.go:248] addon default-storageclass should already be in state true
	I1102 14:13:29.407644  483255 host.go:66] Checking if "no-preload-150469" exists ...
	I1102 14:13:29.408056  483255 cli_runner.go:164] Run: docker container inspect no-preload-150469 --format={{.State.Status}}
	I1102 14:13:29.480216  483255 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33436 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/no-preload-150469/id_rsa Username:docker}
	I1102 14:13:29.492656  483255 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33436 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/no-preload-150469/id_rsa Username:docker}
	I1102 14:13:29.502368  483255 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1102 14:13:29.502389  483255 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1102 14:13:29.502453  483255 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-150469
	I1102 14:13:29.577540  483255 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33436 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/no-preload-150469/id_rsa Username:docker}
	I1102 14:13:29.774659  483255 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1102 14:13:29.811465  483255 node_ready.go:35] waiting up to 6m0s for node "no-preload-150469" to be "Ready" ...
	I1102 14:13:29.842862  483255 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1102 14:13:29.842936  483255 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1102 14:13:29.915412  483255 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1102 14:13:29.942323  483255 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1102 14:13:29.942352  483255 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1102 14:13:30.004196  483255 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1102 14:13:30.075519  483255 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1102 14:13:30.075542  483255 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1102 14:13:30.188726  483255 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1102 14:13:30.188753  483255 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1102 14:13:30.323456  483255 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1102 14:13:30.323532  483255 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1102 14:13:30.404845  483255 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1102 14:13:30.404866  483255 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1102 14:13:30.474506  483255 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1102 14:13:30.474527  483255 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1102 14:13:30.541461  483255 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1102 14:13:30.541482  483255 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1102 14:13:30.584446  483255 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1102 14:13:30.584517  483255 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1102 14:13:30.614357  483255 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1102 14:13:33.184018  485527 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1102 14:13:33.184242  485527 start.go:159] libmachine.API.Create for "embed-certs-955646" (driver="docker")
	I1102 14:13:33.184274  485527 client.go:173] LocalClient.Create starting
	I1102 14:13:33.184353  485527 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem
	I1102 14:13:33.184390  485527 main.go:143] libmachine: Decoding PEM data...
	I1102 14:13:33.184407  485527 main.go:143] libmachine: Parsing certificate...
	I1102 14:13:33.184463  485527 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21808-293314/.minikube/certs/cert.pem
	I1102 14:13:33.184487  485527 main.go:143] libmachine: Decoding PEM data...
	I1102 14:13:33.184500  485527 main.go:143] libmachine: Parsing certificate...
	I1102 14:13:33.184861  485527 cli_runner.go:164] Run: docker network inspect embed-certs-955646 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1102 14:13:33.219852  485527 cli_runner.go:211] docker network inspect embed-certs-955646 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1102 14:13:33.219942  485527 network_create.go:284] running [docker network inspect embed-certs-955646] to gather additional debugging logs...
	I1102 14:13:33.219959  485527 cli_runner.go:164] Run: docker network inspect embed-certs-955646
	W1102 14:13:33.248174  485527 cli_runner.go:211] docker network inspect embed-certs-955646 returned with exit code 1
	I1102 14:13:33.248214  485527 network_create.go:287] error running [docker network inspect embed-certs-955646]: docker network inspect embed-certs-955646: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-955646 not found
	I1102 14:13:33.248229  485527 network_create.go:289] output of [docker network inspect embed-certs-955646]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-955646 not found
	
	** /stderr **
	I1102 14:13:33.248326  485527 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1102 14:13:33.273741  485527 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-ddf319108ac9 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:66:f7:2d:49:67:ff} reservation:<nil>}
	I1102 14:13:33.274106  485527 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-30b945568040 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:b6:b2:b0:cb:49:d7} reservation:<nil>}
	I1102 14:13:33.274326  485527 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-d23a3a2e266d IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:fa:42:95:8e:ae:52} reservation:<nil>}
	I1102 14:13:33.274600  485527 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-04b125ad348b IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:22:2e:b7:29:19:5f} reservation:<nil>}
	I1102 14:13:33.275120  485527 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a7c050}
	I1102 14:13:33.275143  485527 network_create.go:124] attempt to create docker network embed-certs-955646 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1102 14:13:33.275203  485527 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-955646 embed-certs-955646
	I1102 14:13:33.372821  485527 network_create.go:108] docker network embed-certs-955646 192.168.85.0/24 created
	I1102 14:13:33.372858  485527 kic.go:121] calculated static IP "192.168.85.2" for the "embed-certs-955646" container
	I1102 14:13:33.372943  485527 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1102 14:13:33.402912  485527 cli_runner.go:164] Run: docker volume create embed-certs-955646 --label name.minikube.sigs.k8s.io=embed-certs-955646 --label created_by.minikube.sigs.k8s.io=true
	I1102 14:13:33.432560  485527 oci.go:103] Successfully created a docker volume embed-certs-955646
	I1102 14:13:33.432646  485527 cli_runner.go:164] Run: docker run --rm --name embed-certs-955646-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-955646 --entrypoint /usr/bin/test -v embed-certs-955646:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1102 14:13:34.104198  485527 oci.go:107] Successfully prepared a docker volume embed-certs-955646
	I1102 14:13:34.104254  485527 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1102 14:13:34.104274  485527 kic.go:194] Starting extracting preloaded images to volume ...
	I1102 14:13:34.104351  485527 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21808-293314/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-955646:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1102 14:13:35.613727  483255 node_ready.go:49] node "no-preload-150469" is "Ready"
	I1102 14:13:35.613753  483255 node_ready.go:38] duration metric: took 5.802193247s for node "no-preload-150469" to be "Ready" ...
	I1102 14:13:35.613767  483255 api_server.go:52] waiting for apiserver process to appear ...
	I1102 14:13:35.613825  483255 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1102 14:13:37.782183  483255 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.866739994s)
	I1102 14:13:37.782242  483255 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.778027539s)
	I1102 14:13:37.980345  483255 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (7.365907366s)
	I1102 14:13:37.980542  483255 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.366706464s)
	I1102 14:13:37.980564  483255 api_server.go:72] duration metric: took 8.67646007s to wait for apiserver process to appear ...
	I1102 14:13:37.980570  483255 api_server.go:88] waiting for apiserver healthz status ...
	I1102 14:13:37.980588  483255 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1102 14:13:37.986342  483255 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-150469 addons enable metrics-server
	
	I1102 14:13:37.991190  483255 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1102 14:13:37.996184  483255 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1102 14:13:37.996395  483255 addons.go:515] duration metric: took 8.691980645s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1102 14:13:37.999292  483255 api_server.go:141] control plane version: v1.34.1
	I1102 14:13:37.999322  483255 api_server.go:131] duration metric: took 18.745702ms to wait for apiserver health ...
	I1102 14:13:37.999331  483255 system_pods.go:43] waiting for kube-system pods to appear ...
	I1102 14:13:38.024155  483255 system_pods.go:59] 8 kube-system pods found
	I1102 14:13:38.024194  483255 system_pods.go:61] "coredns-66bc5c9577-wkgrq" [90029150-f8ff-484e-a449-fa19206ab6b2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1102 14:13:38.024204  483255 system_pods.go:61] "etcd-no-preload-150469" [a653ec43-d807-4f0b-8111-507615a63de3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1102 14:13:38.024212  483255 system_pods.go:61] "kindnet-vm84g" [31a16d1f-9be1-46bb-a911-452fc3e27389] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1102 14:13:38.024219  483255 system_pods.go:61] "kube-apiserver-no-preload-150469" [9668b04c-ce5d-4558-b3d1-3ddb2d40c8af] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1102 14:13:38.024227  483255 system_pods.go:61] "kube-controller-manager-no-preload-150469" [5cf5e00d-7b35-495f-b5f2-6c149ee77125] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1102 14:13:38.024233  483255 system_pods.go:61] "kube-proxy-qg9np" [28814102-d017-4d78-8904-c21855b52264] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1102 14:13:38.024240  483255 system_pods.go:61] "kube-scheduler-no-preload-150469" [5785a754-7fb9-46fd-865a-2756355ba605] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1102 14:13:38.024252  483255 system_pods.go:61] "storage-provisioner" [bb34ac47-56c9-416b-944a-90dc162bf553] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1102 14:13:38.024261  483255 system_pods.go:74] duration metric: took 24.923797ms to wait for pod list to return data ...
	I1102 14:13:38.024270  483255 default_sa.go:34] waiting for default service account to be created ...
	I1102 14:13:38.045081  483255 default_sa.go:45] found service account: "default"
	I1102 14:13:38.045111  483255 default_sa.go:55] duration metric: took 20.834007ms for default service account to be created ...
	I1102 14:13:38.045122  483255 system_pods.go:116] waiting for k8s-apps to be running ...
	I1102 14:13:38.049928  483255 system_pods.go:86] 8 kube-system pods found
	I1102 14:13:38.049980  483255 system_pods.go:89] "coredns-66bc5c9577-wkgrq" [90029150-f8ff-484e-a449-fa19206ab6b2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1102 14:13:38.049991  483255 system_pods.go:89] "etcd-no-preload-150469" [a653ec43-d807-4f0b-8111-507615a63de3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1102 14:13:38.050000  483255 system_pods.go:89] "kindnet-vm84g" [31a16d1f-9be1-46bb-a911-452fc3e27389] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1102 14:13:38.050007  483255 system_pods.go:89] "kube-apiserver-no-preload-150469" [9668b04c-ce5d-4558-b3d1-3ddb2d40c8af] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1102 14:13:38.050017  483255 system_pods.go:89] "kube-controller-manager-no-preload-150469" [5cf5e00d-7b35-495f-b5f2-6c149ee77125] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1102 14:13:38.050023  483255 system_pods.go:89] "kube-proxy-qg9np" [28814102-d017-4d78-8904-c21855b52264] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1102 14:13:38.050035  483255 system_pods.go:89] "kube-scheduler-no-preload-150469" [5785a754-7fb9-46fd-865a-2756355ba605] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1102 14:13:38.050042  483255 system_pods.go:89] "storage-provisioner" [bb34ac47-56c9-416b-944a-90dc162bf553] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1102 14:13:38.050053  483255 system_pods.go:126] duration metric: took 4.925407ms to wait for k8s-apps to be running ...
	I1102 14:13:38.050062  483255 system_svc.go:44] waiting for kubelet service to be running ....
	I1102 14:13:38.050119  483255 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1102 14:13:38.085874  483255 system_svc.go:56] duration metric: took 35.801748ms WaitForService to wait for kubelet
	I1102 14:13:38.085902  483255 kubeadm.go:587] duration metric: took 8.781795387s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1102 14:13:38.085919  483255 node_conditions.go:102] verifying NodePressure condition ...
	I1102 14:13:38.092337  483255 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1102 14:13:38.092432  483255 node_conditions.go:123] node cpu capacity is 2
	I1102 14:13:38.092463  483255 node_conditions.go:105] duration metric: took 6.538164ms to run NodePressure ...
	I1102 14:13:38.092516  483255 start.go:242] waiting for startup goroutines ...
	I1102 14:13:38.092551  483255 start.go:247] waiting for cluster config update ...
	I1102 14:13:38.092588  483255 start.go:256] writing updated cluster config ...
	I1102 14:13:38.093259  483255 ssh_runner.go:195] Run: rm -f paused
	I1102 14:13:38.099753  483255 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1102 14:13:38.107031  483255 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-wkgrq" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 14:13:39.659628  485527 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21808-293314/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-955646:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (5.555236688s)
	I1102 14:13:39.659662  485527 kic.go:203] duration metric: took 5.555384095s to extract preloaded images to volume ...
	W1102 14:13:39.659799  485527 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1102 14:13:39.659916  485527 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1102 14:13:39.771097  485527 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-955646 --name embed-certs-955646 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-955646 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-955646 --network embed-certs-955646 --ip 192.168.85.2 --volume embed-certs-955646:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1102 14:13:40.164386  485527 cli_runner.go:164] Run: docker container inspect embed-certs-955646 --format={{.State.Running}}
	I1102 14:13:40.186820  485527 cli_runner.go:164] Run: docker container inspect embed-certs-955646 --format={{.State.Status}}
	I1102 14:13:40.213678  485527 cli_runner.go:164] Run: docker exec embed-certs-955646 stat /var/lib/dpkg/alternatives/iptables
	I1102 14:13:40.271531  485527 oci.go:144] the created container "embed-certs-955646" has a running status.
	I1102 14:13:40.271559  485527 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21808-293314/.minikube/machines/embed-certs-955646/id_rsa...
	I1102 14:13:40.754016  485527 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21808-293314/.minikube/machines/embed-certs-955646/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1102 14:13:40.776203  485527 cli_runner.go:164] Run: docker container inspect embed-certs-955646 --format={{.State.Status}}
	I1102 14:13:40.798300  485527 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1102 14:13:40.798318  485527 kic_runner.go:114] Args: [docker exec --privileged embed-certs-955646 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1102 14:13:40.866858  485527 cli_runner.go:164] Run: docker container inspect embed-certs-955646 --format={{.State.Status}}
	I1102 14:13:40.892161  485527 machine.go:94] provisionDockerMachine start ...
	I1102 14:13:40.892336  485527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-955646
	I1102 14:13:40.917829  485527 main.go:143] libmachine: Using SSH client type: native
	I1102 14:13:40.918263  485527 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33441 <nil> <nil>}
	I1102 14:13:40.918277  485527 main.go:143] libmachine: About to run SSH command:
	hostname
	I1102 14:13:40.918964  485527 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	W1102 14:13:40.114562  483255 pod_ready.go:104] pod "coredns-66bc5c9577-wkgrq" is not "Ready", error: <nil>
	W1102 14:13:42.114662  483255 pod_ready.go:104] pod "coredns-66bc5c9577-wkgrq" is not "Ready", error: <nil>
	I1102 14:13:44.096722  485527 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-955646
	
	I1102 14:13:44.096751  485527 ubuntu.go:182] provisioning hostname "embed-certs-955646"
	I1102 14:13:44.096819  485527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-955646
	I1102 14:13:44.132342  485527 main.go:143] libmachine: Using SSH client type: native
	I1102 14:13:44.132652  485527 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33441 <nil> <nil>}
	I1102 14:13:44.132664  485527 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-955646 && echo "embed-certs-955646" | sudo tee /etc/hostname
	I1102 14:13:44.315196  485527 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-955646
	
	I1102 14:13:44.315395  485527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-955646
	I1102 14:13:44.340523  485527 main.go:143] libmachine: Using SSH client type: native
	I1102 14:13:44.340819  485527 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33441 <nil> <nil>}
	I1102 14:13:44.340841  485527 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-955646' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-955646/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-955646' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1102 14:13:44.511822  485527 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1102 14:13:44.511855  485527 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21808-293314/.minikube CaCertPath:/home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21808-293314/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21808-293314/.minikube}
	I1102 14:13:44.511876  485527 ubuntu.go:190] setting up certificates
	I1102 14:13:44.511885  485527 provision.go:84] configureAuth start
	I1102 14:13:44.511966  485527 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-955646
	I1102 14:13:44.535429  485527 provision.go:143] copyHostCerts
	I1102 14:13:44.535504  485527 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-293314/.minikube/ca.pem, removing ...
	I1102 14:13:44.535519  485527 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-293314/.minikube/ca.pem
	I1102 14:13:44.535647  485527 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21808-293314/.minikube/ca.pem (1082 bytes)
	I1102 14:13:44.535757  485527 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-293314/.minikube/cert.pem, removing ...
	I1102 14:13:44.535772  485527 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-293314/.minikube/cert.pem
	I1102 14:13:44.535805  485527 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21808-293314/.minikube/cert.pem (1123 bytes)
	I1102 14:13:44.535864  485527 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-293314/.minikube/key.pem, removing ...
	I1102 14:13:44.535873  485527 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-293314/.minikube/key.pem
	I1102 14:13:44.535897  485527 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21808-293314/.minikube/key.pem (1675 bytes)
	I1102 14:13:44.535956  485527 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21808-293314/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca-key.pem org=jenkins.embed-certs-955646 san=[127.0.0.1 192.168.85.2 embed-certs-955646 localhost minikube]
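
The "generating server cert" step above is ordinary crypto/x509 work: a leaf certificate signed by the local CA whose subject alternative names are exactly the san=[...] list in the log line. A compact standard-library sketch under those assumptions; key size, validity, and the self-signed CA here are illustrative, not minikube's exact parameters:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    // errors are elided for brevity; real code must check every one
    func main() {
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-955646"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // exactly the san=[...] list from the log line above
            DNSNames:    []string{"embed-certs-955646", "localhost", "minikube"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
        }
        srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    }
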
	I1102 14:13:44.880612  485527 provision.go:177] copyRemoteCerts
	I1102 14:13:44.880683  485527 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1102 14:13:44.880736  485527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-955646
	I1102 14:13:44.899640  485527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/embed-certs-955646/id_rsa Username:docker}
	I1102 14:13:45.081712  485527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1102 14:13:45.156422  485527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1102 14:13:45.184523  485527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1102 14:13:45.212890  485527 provision.go:87] duration metric: took 700.973875ms to configureAuth
	I1102 14:13:45.212924  485527 ubuntu.go:206] setting minikube options for container-runtime
	I1102 14:13:45.213191  485527 config.go:182] Loaded profile config "embed-certs-955646": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 14:13:45.213319  485527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-955646
	I1102 14:13:45.258902  485527 main.go:143] libmachine: Using SSH client type: native
	I1102 14:13:45.259294  485527 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33441 <nil> <nil>}
	I1102 14:13:45.259317  485527 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1102 14:13:45.669762  485527 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1102 14:13:45.669783  485527 machine.go:97] duration metric: took 4.777602507s to provisionDockerMachine
	I1102 14:13:45.669793  485527 client.go:176] duration metric: took 12.485507762s to LocalClient.Create
	I1102 14:13:45.669808  485527 start.go:167] duration metric: took 12.485567512s to libmachine.API.Create "embed-certs-955646"
	I1102 14:13:45.669815  485527 start.go:293] postStartSetup for "embed-certs-955646" (driver="docker")
	I1102 14:13:45.669825  485527 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1102 14:13:45.669903  485527 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1102 14:13:45.669945  485527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-955646
	I1102 14:13:45.704545  485527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/embed-certs-955646/id_rsa Username:docker}
	I1102 14:13:45.819475  485527 ssh_runner.go:195] Run: cat /etc/os-release
	I1102 14:13:45.823796  485527 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1102 14:13:45.823829  485527 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1102 14:13:45.823848  485527 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-293314/.minikube/addons for local assets ...
	I1102 14:13:45.823905  485527 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-293314/.minikube/files for local assets ...
	I1102 14:13:45.823999  485527 filesync.go:149] local asset: /home/jenkins/minikube-integration/21808-293314/.minikube/files/etc/ssl/certs/2951742.pem -> 2951742.pem in /etc/ssl/certs
	I1102 14:13:45.824106  485527 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1102 14:13:45.838551  485527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/files/etc/ssl/certs/2951742.pem --> /etc/ssl/certs/2951742.pem (1708 bytes)
	I1102 14:13:45.868070  485527 start.go:296] duration metric: took 198.24085ms for postStartSetup
	I1102 14:13:45.868605  485527 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-955646
	I1102 14:13:45.894956  485527 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/embed-certs-955646/config.json ...
	I1102 14:13:45.895259  485527 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1102 14:13:45.895301  485527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-955646
	I1102 14:13:45.916052  485527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/embed-certs-955646/id_rsa Username:docker}
	I1102 14:13:46.027279  485527 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1102 14:13:46.046247  485527 start.go:128] duration metric: took 12.865525601s to createHost
	I1102 14:13:46.046275  485527 start.go:83] releasing machines lock for "embed-certs-955646", held for 12.865680541s
	I1102 14:13:46.046359  485527 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-955646
	I1102 14:13:46.081171  485527 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/295174.pem (1338 bytes)
	W1102 14:13:46.081252  485527 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-293314/.minikube/certs/295174_empty.pem, impossibly tiny 0 bytes
	I1102 14:13:46.081262  485527 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca-key.pem (1679 bytes)
	I1102 14:13:46.081288  485527 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem (1082 bytes)
	I1102 14:13:46.081313  485527 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/cert.pem (1123 bytes)
	I1102 14:13:46.081340  485527 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/key.pem (1675 bytes)
	I1102 14:13:46.081383  485527 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/files/etc/ssl/certs/2951742.pem (1708 bytes)
	I1102 14:13:46.081447  485527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1102 14:13:46.081499  485527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-955646
	I1102 14:13:46.124257  485527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/embed-certs-955646/id_rsa Username:docker}
	I1102 14:13:46.253678  485527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/certs/295174.pem --> /usr/share/ca-certificates/295174.pem (1338 bytes)
	I1102 14:13:46.274377  485527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/files/etc/ssl/certs/2951742.pem --> /usr/share/ca-certificates/2951742.pem (1708 bytes)
	I1102 14:13:46.294274  485527 ssh_runner.go:195] Run: openssl version
	I1102 14:13:46.304154  485527 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1102 14:13:46.314072  485527 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1102 14:13:46.318734  485527 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  2 13:13 /usr/share/ca-certificates/minikubeCA.pem
	I1102 14:13:46.318795  485527 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1102 14:13:46.385977  485527 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1102 14:13:46.396087  485527 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/295174.pem && ln -fs /usr/share/ca-certificates/295174.pem /etc/ssl/certs/295174.pem"
	I1102 14:13:46.408077  485527 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/295174.pem
	I1102 14:13:46.415166  485527 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  2 13:20 /usr/share/ca-certificates/295174.pem
	I1102 14:13:46.415292  485527 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/295174.pem
	I1102 14:13:46.471332  485527 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/295174.pem /etc/ssl/certs/51391683.0"
	I1102 14:13:46.487316  485527 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2951742.pem && ln -fs /usr/share/ca-certificates/2951742.pem /etc/ssl/certs/2951742.pem"
	I1102 14:13:46.498240  485527 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2951742.pem
	I1102 14:13:46.502673  485527 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  2 13:20 /usr/share/ca-certificates/2951742.pem
	I1102 14:13:46.502819  485527 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2951742.pem
	I1102 14:13:46.570448  485527 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2951742.pem /etc/ssl/certs/3ec20f2e.0"
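
The openssl/ln pairs above implement OpenSSL's hashed-directory convention: each CA in /etc/ssl/certs is reachable through a <subject-hash>.0 symlink so verification can look certificates up by hash (the update-ca-certificates/update-ca-trust calls that follow then refresh the system bundle). A short sketch of the same step from Go, shelling out to openssl exactly as the log does; installCert is a hypothetical helper name, not minikube's own function:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func installCert(certPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        _ = os.Remove(link) // mirror `ln -fs`: replace any stale link
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := installCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
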
	I1102 14:13:46.585588  485527 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-certificates >/dev/null 2>&1 && sudo update-ca-certificates || true"
	I1102 14:13:46.593988  485527 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-trust >/dev/null 2>&1 && sudo update-ca-trust extract || true"
	I1102 14:13:46.597850  485527 ssh_runner.go:195] Run: cat /version.json
	I1102 14:13:46.597995  485527 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1102 14:13:46.715759  485527 ssh_runner.go:195] Run: systemctl --version
	I1102 14:13:46.723822  485527 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1102 14:13:46.802908  485527 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1102 14:13:46.811607  485527 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1102 14:13:46.811729  485527 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1102 14:13:46.848873  485527 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1102 14:13:46.848945  485527 start.go:496] detecting cgroup driver to use...
	I1102 14:13:46.848992  485527 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1102 14:13:46.849082  485527 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1102 14:13:46.878775  485527 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1102 14:13:46.894133  485527 docker.go:218] disabling cri-docker service (if available) ...
	I1102 14:13:46.894246  485527 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1102 14:13:46.912886  485527 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1102 14:13:46.934345  485527 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1102 14:13:47.137669  485527 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1102 14:13:47.321202  485527 docker.go:234] disabling docker service ...
	I1102 14:13:47.321323  485527 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1102 14:13:47.350521  485527 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1102 14:13:47.373484  485527 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1102 14:13:47.523766  485527 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1102 14:13:47.678543  485527 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1102 14:13:47.694291  485527 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1102 14:13:47.709924  485527 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1102 14:13:47.709989  485527 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 14:13:47.720093  485527 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1102 14:13:47.720159  485527 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 14:13:47.728942  485527 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 14:13:47.737693  485527 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 14:13:47.746684  485527 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1102 14:13:47.755032  485527 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 14:13:47.763761  485527 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 14:13:47.777232  485527 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
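
The block of sed calls above is keyed line rewriting in /etc/crio/crio.conf.d/02-crio.conf: find the line that sets a key (commented out or not) and replace it wholesale. A sketch of the same edit in Go; editConf is a hypothetical helper, and the regex mirrors the sed expressions in the log:

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    // editConf rewrites every line assigning `key` to `key = "value"`,
    // matching the shape of: sed -i 's|^.*pause_image = .*$|pause_image = "..."|'
    func editConf(path, key, value string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
        out := re.ReplaceAll(data, []byte(fmt.Sprintf(`%s = %q`, key, value)))
        return os.WriteFile(path, out, 0644)
    }

    func main() {
        // the two rewrites the log performs above
        _ = editConf("/etc/crio/crio.conf.d/02-crio.conf", "pause_image", "registry.k8s.io/pause:3.10.1")
        _ = editConf("/etc/crio/crio.conf.d/02-crio.conf", "cgroup_manager", "cgroupfs")
    }
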
	I1102 14:13:47.785926  485527 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1102 14:13:47.794373  485527 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1102 14:13:47.802880  485527 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 14:13:47.946488  485527 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1102 14:13:48.161690  485527 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1102 14:13:48.161815  485527 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1102 14:13:48.166431  485527 start.go:564] Will wait 60s for crictl version
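
Both "Will wait 60s" lines above are simple polls: stat the path until it exists or the budget is spent. A sketch of that loop, with waitForSocket as a hypothetical helper name:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil // socket showed up
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
    }

    func main() {
        if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
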
	I1102 14:13:48.166550  485527 ssh_runner.go:195] Run: which crictl
	I1102 14:13:48.172927  485527 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1102 14:13:48.220760  485527 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1102 14:13:48.220909  485527 ssh_runner.go:195] Run: crio --version
	I1102 14:13:48.261823  485527 ssh_runner.go:195] Run: crio --version
	I1102 14:13:48.314313  485527 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1102 14:13:44.122991  483255 pod_ready.go:104] pod "coredns-66bc5c9577-wkgrq" is not "Ready", error: <nil>
	W1102 14:13:46.130214  483255 pod_ready.go:104] pod "coredns-66bc5c9577-wkgrq" is not "Ready", error: <nil>
	W1102 14:13:48.613483  483255 pod_ready.go:104] pod "coredns-66bc5c9577-wkgrq" is not "Ready", error: <nil>
	I1102 14:13:48.317301  485527 cli_runner.go:164] Run: docker network inspect embed-certs-955646 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1102 14:13:48.337443  485527 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1102 14:13:48.344354  485527 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
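
The /etc/hosts rewrite above filters out any stale host.minikube.internal mapping and appends a fresh one; the shell version stages the result in /tmp and copies it back rather than renaming, since /etc/hosts is bind-mounted into the container. A sketch of the same filter-and-append in Go (updateHosts is a hypothetical helper; for simplicity it writes the file back directly):

    package main

    import (
        "os"
        "strings"
    )

    func updateHosts(path, ip, name string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            // drop any existing "<ip>\t<name>" mapping, like `grep -v $'\t<name>$'`
            if !strings.HasSuffix(line, "\t"+name) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+name)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
        _ = updateHosts("/etc/hosts", "192.168.85.1", "host.minikube.internal")
    }
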
	I1102 14:13:48.356530  485527 kubeadm.go:884] updating cluster {Name:embed-certs-955646 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-955646 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1102 14:13:48.356655  485527 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1102 14:13:48.356718  485527 ssh_runner.go:195] Run: sudo crictl images --output json
	I1102 14:13:48.399263  485527 crio.go:514] all images are preloaded for cri-o runtime.
	I1102 14:13:48.399290  485527 crio.go:433] Images already preloaded, skipping extraction
	I1102 14:13:48.399350  485527 ssh_runner.go:195] Run: sudo crictl images --output json
	I1102 14:13:48.430640  485527 crio.go:514] all images are preloaded for cri-o runtime.
	I1102 14:13:48.430666  485527 cache_images.go:86] Images are preloaded, skipping loading
	I1102 14:13:48.430674  485527 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1102 14:13:48.430827  485527 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-955646 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-955646 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1102 14:13:48.430925  485527 ssh_runner.go:195] Run: crio config
	I1102 14:13:48.519494  485527 cni.go:84] Creating CNI manager for ""
	I1102 14:13:48.519515  485527 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1102 14:13:48.519532  485527 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1102 14:13:48.519557  485527 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-955646 NodeName:embed-certs-955646 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1102 14:13:48.519692  485527 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-955646"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1102 14:13:48.519766  485527 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1102 14:13:48.530670  485527 binaries.go:44] Found k8s binaries, skipping transfer
	I1102 14:13:48.530740  485527 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1102 14:13:48.539073  485527 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1102 14:13:48.552553  485527 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1102 14:13:48.567884  485527 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1102 14:13:48.585599  485527 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1102 14:13:48.590154  485527 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1102 14:13:48.600414  485527 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 14:13:48.760295  485527 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1102 14:13:48.781817  485527 certs.go:69] Setting up /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/embed-certs-955646 for IP: 192.168.85.2
	I1102 14:13:48.781854  485527 certs.go:195] generating shared ca certs ...
	I1102 14:13:48.781888  485527 certs.go:227] acquiring lock for ca certs: {Name:mkead50075949a3cdc798f9c0149a2bc2638cbbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 14:13:48.782059  485527 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21808-293314/.minikube/ca.key
	I1102 14:13:48.782131  485527 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21808-293314/.minikube/proxy-client-ca.key
	I1102 14:13:48.782146  485527 certs.go:257] generating profile certs ...
	I1102 14:13:48.782231  485527 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/embed-certs-955646/client.key
	I1102 14:13:48.782266  485527 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/embed-certs-955646/client.crt with IP's: []
	I1102 14:13:49.007900  485527 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/embed-certs-955646/client.crt ...
	I1102 14:13:49.007935  485527 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/embed-certs-955646/client.crt: {Name:mke0ea81780bcd9a7eb9a3d40c551704279821ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 14:13:49.008198  485527 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/embed-certs-955646/client.key ...
	I1102 14:13:49.008219  485527 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/embed-certs-955646/client.key: {Name:mk067a7e80433a57e6dd2da85af3ab351d6aaca5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 14:13:49.008376  485527 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/embed-certs-955646/apiserver.key.07905a59
	I1102 14:13:49.008399  485527 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/embed-certs-955646/apiserver.crt.07905a59 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1102 14:13:49.598864  485527 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/embed-certs-955646/apiserver.crt.07905a59 ...
	I1102 14:13:49.598899  485527 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/embed-certs-955646/apiserver.crt.07905a59: {Name:mk154950a39c0088eff33710b0924263e0dcd771 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 14:13:49.599071  485527 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/embed-certs-955646/apiserver.key.07905a59 ...
	I1102 14:13:49.599088  485527 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/embed-certs-955646/apiserver.key.07905a59: {Name:mk0452e6895b8e71589f0005962ff1e30a2cebcb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 14:13:49.599168  485527 certs.go:382] copying /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/embed-certs-955646/apiserver.crt.07905a59 -> /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/embed-certs-955646/apiserver.crt
	I1102 14:13:49.599270  485527 certs.go:386] copying /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/embed-certs-955646/apiserver.key.07905a59 -> /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/embed-certs-955646/apiserver.key
	I1102 14:13:49.599339  485527 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/embed-certs-955646/proxy-client.key
	I1102 14:13:49.599359  485527 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/embed-certs-955646/proxy-client.crt with IP's: []
	I1102 14:13:50.404316  485527 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/embed-certs-955646/proxy-client.crt ...
	I1102 14:13:50.404350  485527 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/embed-certs-955646/proxy-client.crt: {Name:mkc94e7a5d26be4d93809990afbb05cbf5aed186 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 14:13:50.404552  485527 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/embed-certs-955646/proxy-client.key ...
	I1102 14:13:50.404570  485527 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/embed-certs-955646/proxy-client.key: {Name:mkec4ac48faf3b53b73001f2fa818ea0bc0944a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 14:13:50.404826  485527 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/295174.pem (1338 bytes)
	W1102 14:13:50.404886  485527 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-293314/.minikube/certs/295174_empty.pem, impossibly tiny 0 bytes
	I1102 14:13:50.404904  485527 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca-key.pem (1679 bytes)
	I1102 14:13:50.404942  485527 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem (1082 bytes)
	I1102 14:13:50.404988  485527 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/cert.pem (1123 bytes)
	I1102 14:13:50.405019  485527 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/key.pem (1675 bytes)
	I1102 14:13:50.405085  485527 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/files/etc/ssl/certs/2951742.pem (1708 bytes)
	I1102 14:13:50.405678  485527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1102 14:13:50.437375  485527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1102 14:13:50.459140  485527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1102 14:13:50.477441  485527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1102 14:13:50.495259  485527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/embed-certs-955646/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1102 14:13:50.513223  485527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/embed-certs-955646/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1102 14:13:50.531962  485527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/embed-certs-955646/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1102 14:13:50.550707  485527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/embed-certs-955646/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1102 14:13:50.569897  485527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/files/etc/ssl/certs/2951742.pem --> /usr/share/ca-certificates/2951742.pem (1708 bytes)
	I1102 14:13:50.588078  485527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1102 14:13:50.606514  485527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/certs/295174.pem --> /usr/share/ca-certificates/295174.pem (1338 bytes)
	I1102 14:13:50.631042  485527 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1102 14:13:50.652211  485527 ssh_runner.go:195] Run: openssl version
	I1102 14:13:50.668641  485527 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2951742.pem && ln -fs /usr/share/ca-certificates/2951742.pem /etc/ssl/certs/2951742.pem"
	I1102 14:13:50.687630  485527 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2951742.pem
	I1102 14:13:50.692973  485527 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  2 13:20 /usr/share/ca-certificates/2951742.pem
	I1102 14:13:50.693082  485527 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2951742.pem
	I1102 14:13:50.773517  485527 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2951742.pem /etc/ssl/certs/3ec20f2e.0"
	I1102 14:13:50.786859  485527 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1102 14:13:50.803326  485527 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1102 14:13:50.807607  485527 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  2 13:13 /usr/share/ca-certificates/minikubeCA.pem
	I1102 14:13:50.807722  485527 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1102 14:13:50.852283  485527 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1102 14:13:50.865063  485527 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/295174.pem && ln -fs /usr/share/ca-certificates/295174.pem /etc/ssl/certs/295174.pem"
	I1102 14:13:50.875223  485527 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/295174.pem
	I1102 14:13:50.881253  485527 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  2 13:20 /usr/share/ca-certificates/295174.pem
	I1102 14:13:50.881355  485527 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/295174.pem
	I1102 14:13:50.923996  485527 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/295174.pem /etc/ssl/certs/51391683.0"
	I1102 14:13:50.932177  485527 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1102 14:13:50.936745  485527 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
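
The stat failure above is expected: a missing apiserver-kubelet-client.crt is read as "no previous cluster here", so the flow proceeds to a fresh kubeadm init rather than treating it as an error. The probe reduces to an existence check, sketched here:

    package main

    import (
        "errors"
        "fmt"
        "io/fs"
        "os"
    )

    func main() {
        _, err := os.Stat("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
        switch {
        case err == nil:
            fmt.Println("cert present: reusing existing cluster state")
        case errors.Is(err, fs.ErrNotExist):
            fmt.Println("cert missing: likely first start, run kubeadm init")
        default:
            fmt.Println("unexpected stat error:", err)
        }
    }
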
	I1102 14:13:50.936838  485527 kubeadm.go:401] StartCluster: {Name:embed-certs-955646 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-955646 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1102 14:13:50.936930  485527 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1102 14:13:50.937038  485527 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1102 14:13:50.969934  485527 cri.go:89] found id: ""
	I1102 14:13:50.970030  485527 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1102 14:13:50.979941  485527 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1102 14:13:50.988257  485527 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1102 14:13:50.988366  485527 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1102 14:13:50.998879  485527 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1102 14:13:50.998904  485527 kubeadm.go:158] found existing configuration files:
	
	I1102 14:13:50.998988  485527 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1102 14:13:51.009365  485527 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1102 14:13:51.009482  485527 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1102 14:13:51.019251  485527 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1102 14:13:51.029248  485527 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1102 14:13:51.029358  485527 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1102 14:13:51.038689  485527 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1102 14:13:51.048425  485527 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1102 14:13:51.048546  485527 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1102 14:13:51.056911  485527 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1102 14:13:51.067183  485527 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1102 14:13:51.067278  485527 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1102 14:13:51.076451  485527 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1102 14:13:51.137592  485527 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1102 14:13:51.138021  485527 kubeadm.go:319] [preflight] Running pre-flight checks
	I1102 14:13:51.175430  485527 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1102 14:13:51.175556  485527 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1102 14:13:51.175630  485527 kubeadm.go:319] OS: Linux
	I1102 14:13:51.175704  485527 kubeadm.go:319] CGROUPS_CPU: enabled
	I1102 14:13:51.175781  485527 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1102 14:13:51.175856  485527 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1102 14:13:51.175958  485527 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1102 14:13:51.176049  485527 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1102 14:13:51.176134  485527 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1102 14:13:51.176209  485527 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1102 14:13:51.176293  485527 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1102 14:13:51.176395  485527 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1102 14:13:51.326053  485527 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1102 14:13:51.326189  485527 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1102 14:13:51.326295  485527 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1102 14:13:51.337079  485527 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1102 14:13:51.344903  485527 out.go:252]   - Generating certificates and keys ...
	I1102 14:13:51.345008  485527 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1102 14:13:51.345086  485527 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1102 14:13:52.030135  485527 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1102 14:13:52.678119  485527 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	W1102 14:13:50.619361  483255 pod_ready.go:104] pod "coredns-66bc5c9577-wkgrq" is not "Ready", error: <nil>
	W1102 14:13:53.115535  483255 pod_ready.go:104] pod "coredns-66bc5c9577-wkgrq" is not "Ready", error: <nil>
	I1102 14:13:54.132407  485527 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1102 14:13:54.533062  485527 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1102 14:13:55.059909  485527 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1102 14:13:55.060204  485527 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-955646 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1102 14:13:55.560790  485527 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1102 14:13:55.561158  485527 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-955646 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1102 14:13:55.686926  485527 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1102 14:13:56.206274  485527 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1102 14:13:57.245537  485527 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1102 14:13:57.245824  485527 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	W1102 14:13:55.620686  483255 pod_ready.go:104] pod "coredns-66bc5c9577-wkgrq" is not "Ready", error: <nil>
	W1102 14:13:58.113081  483255 pod_ready.go:104] pod "coredns-66bc5c9577-wkgrq" is not "Ready", error: <nil>
	I1102 14:13:58.334585  485527 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1102 14:13:58.566559  485527 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1102 14:13:58.903058  485527 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1102 14:13:58.985656  485527 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1102 14:13:59.822095  485527 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1102 14:13:59.823366  485527 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1102 14:13:59.827645  485527 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1102 14:13:59.831369  485527 out.go:252]   - Booting up control plane ...
	I1102 14:13:59.831486  485527 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1102 14:13:59.831576  485527 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1102 14:13:59.831650  485527 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1102 14:13:59.848758  485527 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1102 14:13:59.848885  485527 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1102 14:13:59.857241  485527 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1102 14:13:59.857735  485527 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1102 14:13:59.857986  485527 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1102 14:13:59.999093  485527 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1102 14:13:59.999228  485527 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1102 14:14:02.002548  485527 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 2.00634763s
	I1102 14:14:02.006301  485527 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1102 14:14:02.006405  485527 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1102 14:14:02.006499  485527 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1102 14:14:02.006993  485527 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1102 14:14:00.119052  483255 pod_ready.go:104] pod "coredns-66bc5c9577-wkgrq" is not "Ready", error: <nil>
	W1102 14:14:02.612930  483255 pod_ready.go:104] pod "coredns-66bc5c9577-wkgrq" is not "Ready", error: <nil>
	I1102 14:14:05.444260  485527 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.437454897s
	I1102 14:14:07.265450  485527 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.25918457s
	W1102 14:14:04.614096  483255 pod_ready.go:104] pod "coredns-66bc5c9577-wkgrq" is not "Ready", error: <nil>
	W1102 14:14:07.114549  483255 pod_ready.go:104] pod "coredns-66bc5c9577-wkgrq" is not "Ready", error: <nil>
	I1102 14:14:09.010204  485527 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 7.003792482s
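
The [kubelet-check] and [control-plane-check] lines above poll plain HTTP(S) health endpoints (10248/healthz for the kubelet; :8443/livez, :10257/healthz, :10259/livez for the control plane) until they answer 200 or the 4m0s budget runs out. A sketch of such a poller; certificate verification is disabled here only because this sketch loads no cluster CA, which real code would not skip:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func waitHealthy(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 2 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only: no cluster CA loaded
            },
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(time.Second)
        }
        return fmt.Errorf("%s not healthy after %s", url, timeout)
    }

    func main() {
        // errors ignored for brevity; these mirror the checks in the log
        _ = waitHealthy("http://127.0.0.1:10248/healthz", 4*time.Minute)
        _ = waitHealthy("https://192.168.85.2:8443/livez", 4*time.Minute)
    }
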
	I1102 14:14:09.047827  485527 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1102 14:14:09.074599  485527 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1102 14:14:09.088050  485527 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1102 14:14:09.088286  485527 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-955646 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1102 14:14:09.106382  485527 kubeadm.go:319] [bootstrap-token] Using token: jl4a55.p32103f3zsv15pzq
	I1102 14:14:09.109279  485527 out.go:252]   - Configuring RBAC rules ...
	I1102 14:14:09.109411  485527 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1102 14:14:09.124296  485527 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1102 14:14:09.134062  485527 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1102 14:14:09.139060  485527 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1102 14:14:09.145815  485527 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1102 14:14:09.150388  485527 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1102 14:14:09.429326  485527 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1102 14:14:09.880079  485527 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1102 14:14:10.429863  485527 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1102 14:14:10.431916  485527 kubeadm.go:319] 
	I1102 14:14:10.432002  485527 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1102 14:14:10.432013  485527 kubeadm.go:319] 
	I1102 14:14:10.432094  485527 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1102 14:14:10.432103  485527 kubeadm.go:319] 
	I1102 14:14:10.432131  485527 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1102 14:14:10.432197  485527 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1102 14:14:10.432257  485527 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1102 14:14:10.432267  485527 kubeadm.go:319] 
	I1102 14:14:10.432325  485527 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1102 14:14:10.432332  485527 kubeadm.go:319] 
	I1102 14:14:10.432382  485527 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1102 14:14:10.432387  485527 kubeadm.go:319] 
	I1102 14:14:10.432441  485527 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1102 14:14:10.432520  485527 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1102 14:14:10.432591  485527 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1102 14:14:10.432595  485527 kubeadm.go:319] 
	I1102 14:14:10.432683  485527 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1102 14:14:10.432764  485527 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1102 14:14:10.432769  485527 kubeadm.go:319] 
	I1102 14:14:10.432856  485527 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token jl4a55.p32103f3zsv15pzq \
	I1102 14:14:10.432981  485527 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:bd4a1f3bddc85f3fc83315ad33165a30aa1cba7ce55898ef9dad8dcc7e8d0eec \
	I1102 14:14:10.433004  485527 kubeadm.go:319] 	--control-plane 
	I1102 14:14:10.433009  485527 kubeadm.go:319] 
	I1102 14:14:10.433098  485527 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1102 14:14:10.433102  485527 kubeadm.go:319] 
	I1102 14:14:10.433188  485527 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token jl4a55.p32103f3zsv15pzq \
	I1102 14:14:10.433295  485527 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:bd4a1f3bddc85f3fc83315ad33165a30aa1cba7ce55898ef9dad8dcc7e8d0eec 
	I1102 14:14:10.436223  485527 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1102 14:14:10.436475  485527 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1102 14:14:10.436592  485527 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
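
The --discovery-token-ca-cert-hash value printed in the join commands above is the SHA-256 digest of the cluster CA's public key. As a sketch, assuming the standard kubeadm PKI layout on the control plane (/etc/kubernetes/pki/ca.crt), it can be recomputed with the openssl pipeline from the kubeadm join documentation:

    # Recompute the hash that `kubeadm join --discovery-token-ca-cert-hash sha256:<hash>` verifies
    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
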
	I1102 14:14:10.436621  485527 cni.go:84] Creating CNI manager for ""
	I1102 14:14:10.436633  485527 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1102 14:14:10.439792  485527 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1102 14:14:10.442658  485527 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1102 14:14:10.446589  485527 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1102 14:14:10.446609  485527 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1102 14:14:10.464164  485527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
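
The manifest applied above is the kindnet CNI that minikube recommends for the docker driver with the crio runtime; it writes its config into /etc/cni/net.d/ on the node (the CRI-O section later in this log shows 10-kindnet.conflist being picked up). A quick check, assuming node shell access such as `minikube ssh -p embed-certs-955646`:

    # CNI configs are consumed in lexical order; confirm kindnet's is present
    ls -1 /etc/cni/net.d/
    cat /etc/cni/net.d/10-kindnet.conflist
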
	I1102 14:14:11.209340  485527 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1102 14:14:11.209475  485527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 14:14:11.209541  485527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-955646 minikube.k8s.io/updated_at=2025_11_02T14_14_11_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=170a9221ec214abbddb4c7cdac340516a92b239a minikube.k8s.io/name=embed-certs-955646 minikube.k8s.io/primary=true
	I1102 14:14:11.368263  485527 ops.go:34] apiserver oom_adj: -16
	I1102 14:14:11.368361  485527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 14:14:11.869342  485527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 14:14:12.368417  485527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1102 14:14:09.613896  483255 pod_ready.go:104] pod "coredns-66bc5c9577-wkgrq" is not "Ready", error: <nil>
	I1102 14:14:12.117080  483255 pod_ready.go:94] pod "coredns-66bc5c9577-wkgrq" is "Ready"
	I1102 14:14:12.117105  483255 pod_ready.go:86] duration metric: took 34.010006579s for pod "coredns-66bc5c9577-wkgrq" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 14:14:12.122404  483255 pod_ready.go:83] waiting for pod "etcd-no-preload-150469" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 14:14:12.127160  483255 pod_ready.go:94] pod "etcd-no-preload-150469" is "Ready"
	I1102 14:14:12.127189  483255 pod_ready.go:86] duration metric: took 4.76092ms for pod "etcd-no-preload-150469" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 14:14:12.129652  483255 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-150469" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 14:14:12.134401  483255 pod_ready.go:94] pod "kube-apiserver-no-preload-150469" is "Ready"
	I1102 14:14:12.134428  483255 pod_ready.go:86] duration metric: took 4.74716ms for pod "kube-apiserver-no-preload-150469" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 14:14:12.136544  483255 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-150469" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 14:14:12.310450  483255 pod_ready.go:94] pod "kube-controller-manager-no-preload-150469" is "Ready"
	I1102 14:14:12.310515  483255 pod_ready.go:86] duration metric: took 173.941953ms for pod "kube-controller-manager-no-preload-150469" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 14:14:12.510874  483255 pod_ready.go:83] waiting for pod "kube-proxy-qg9np" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 14:14:12.910918  483255 pod_ready.go:94] pod "kube-proxy-qg9np" is "Ready"
	I1102 14:14:12.910995  483255 pod_ready.go:86] duration metric: took 400.095019ms for pod "kube-proxy-qg9np" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 14:14:13.111501  483255 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-150469" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 14:14:13.510866  483255 pod_ready.go:94] pod "kube-scheduler-no-preload-150469" is "Ready"
	I1102 14:14:13.510895  483255 pod_ready.go:86] duration metric: took 399.363941ms for pod "kube-scheduler-no-preload-150469" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 14:14:13.510909  483255 pod_ready.go:40] duration metric: took 35.41107114s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1102 14:14:13.571824  483255 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1102 14:14:13.576781  483255 out.go:179] * Done! kubectl is now configured to use "no-preload-150469" cluster and "default" namespace by default
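
The "minor skew: 1" note above compares the local kubectl (1.33.2) with the cluster's API server (1.34.1); kubectl is supported within one minor version of the server, so this is informational only. The same comparison can be rerun at any time:

    # Prints the client version and, when the cluster is reachable, the server version
    kubectl version
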
	I1102 14:14:12.869280  485527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 14:14:13.368470  485527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 14:14:13.868538  485527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 14:14:14.369286  485527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 14:14:14.869265  485527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 14:14:14.992444  485527 kubeadm.go:1114] duration metric: took 3.783010392s to wait for elevateKubeSystemPrivileges
	I1102 14:14:14.992478  485527 kubeadm.go:403] duration metric: took 24.055643897s to StartCluster
	I1102 14:14:14.992495  485527 settings.go:142] acquiring lock: {Name:mk95f66b3b15e63f58f8c9085c1ffe67cc396dc1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 14:14:14.992559  485527 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21808-293314/kubeconfig
	I1102 14:14:14.993976  485527 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-293314/kubeconfig: {Name:mke5a65554da8fc0fd6a2ea60bed899d5b38ce09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 14:14:14.994223  485527 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1102 14:14:14.994321  485527 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1102 14:14:14.994587  485527 config.go:182] Loaded profile config "embed-certs-955646": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 14:14:14.994810  485527 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1102 14:14:14.994893  485527 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-955646"
	I1102 14:14:14.994909  485527 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-955646"
	I1102 14:14:14.994935  485527 host.go:66] Checking if "embed-certs-955646" exists ...
	I1102 14:14:14.995466  485527 cli_runner.go:164] Run: docker container inspect embed-certs-955646 --format={{.State.Status}}
	I1102 14:14:14.995806  485527 addons.go:70] Setting default-storageclass=true in profile "embed-certs-955646"
	I1102 14:14:14.995832  485527 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-955646"
	I1102 14:14:14.996115  485527 cli_runner.go:164] Run: docker container inspect embed-certs-955646 --format={{.State.Status}}
	I1102 14:14:14.998576  485527 out.go:179] * Verifying Kubernetes components...
	I1102 14:14:15.002840  485527 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 14:14:15.050454  485527 addons.go:239] Setting addon default-storageclass=true in "embed-certs-955646"
	I1102 14:14:15.050500  485527 host.go:66] Checking if "embed-certs-955646" exists ...
	I1102 14:14:15.050971  485527 cli_runner.go:164] Run: docker container inspect embed-certs-955646 --format={{.State.Status}}
	I1102 14:14:15.061892  485527 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1102 14:14:15.065339  485527 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1102 14:14:15.065373  485527 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1102 14:14:15.065479  485527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-955646
	I1102 14:14:15.097189  485527 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1102 14:14:15.097214  485527 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1102 14:14:15.097280  485527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-955646
	I1102 14:14:15.128393  485527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/embed-certs-955646/id_rsa Username:docker}
	I1102 14:14:15.133834  485527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/embed-certs-955646/id_rsa Username:docker}
	I1102 14:14:15.416232  485527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1102 14:14:15.416480  485527 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1102 14:14:15.423571  485527 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1102 14:14:15.484112  485527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1102 14:14:16.165176  485527 start.go:1013] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1102 14:14:16.167708  485527 node_ready.go:35] waiting up to 6m0s for node "embed-certs-955646" to be "Ready" ...
	I1102 14:14:16.208363  485527 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1102 14:14:16.211371  485527 addons.go:515] duration metric: took 1.216551483s for enable addons: enabled=[storage-provisioner default-storageclass]
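
With the addon step finished, the per-profile addon state can be checked against the line above; a minimal confirmation:

    # Both storage-provisioner and default-storageclass should show as enabled
    minikube -p embed-certs-955646 addons list
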
	I1102 14:14:16.669214  485527 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-955646" context rescaled to 1 replicas
	W1102 14:14:18.171000  485527 node_ready.go:57] node "embed-certs-955646" has "Ready":"False" status (will retry)
	W1102 14:14:20.672754  485527 node_ready.go:57] node "embed-certs-955646" has "Ready":"False" status (will retry)
	W1102 14:14:23.170811  485527 node_ready.go:57] node "embed-certs-955646" has "Ready":"False" status (will retry)
	W1102 14:14:25.171549  485527 node_ready.go:57] node "embed-certs-955646" has "Ready":"False" status (will retry)
	W1102 14:14:27.671568  485527 node_ready.go:57] node "embed-certs-955646" has "Ready":"False" status (will retry)
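
The retries above poll the node object until its Ready condition turns True, which happens once the kubelet sees a working CNI. Assuming kubeconfig already points at this cluster, the equivalent one-liner is:

    # Block (up to the same 6m budget) until the node reports Ready
    kubectl wait --for=condition=Ready node/embed-certs-955646 --timeout=6m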
	
	
	==> CRI-O <==
	Nov 02 14:14:17 no-preload-150469 crio[680]: time="2025-11-02T14:14:17.433541166Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 02 14:14:17 no-preload-150469 crio[680]: time="2025-11-02T14:14:17.437152973Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 02 14:14:17 no-preload-150469 crio[680]: time="2025-11-02T14:14:17.437356815Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 02 14:14:17 no-preload-150469 crio[680]: time="2025-11-02T14:14:17.437510195Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 02 14:14:17 no-preload-150469 crio[680]: time="2025-11-02T14:14:17.442962173Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 02 14:14:17 no-preload-150469 crio[680]: time="2025-11-02T14:14:17.442998161Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 02 14:14:17 no-preload-150469 crio[680]: time="2025-11-02T14:14:17.443019462Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 02 14:14:17 no-preload-150469 crio[680]: time="2025-11-02T14:14:17.446099186Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 02 14:14:17 no-preload-150469 crio[680]: time="2025-11-02T14:14:17.446133771Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 02 14:14:17 no-preload-150469 crio[680]: time="2025-11-02T14:14:17.446157164Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 02 14:14:17 no-preload-150469 crio[680]: time="2025-11-02T14:14:17.449314517Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 02 14:14:17 no-preload-150469 crio[680]: time="2025-11-02T14:14:17.449347592Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 02 14:14:22 no-preload-150469 crio[680]: time="2025-11-02T14:14:22.036768546Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=edbb4c42-df8d-433b-8d9e-6d523e2c4aab name=/runtime.v1.ImageService/ImageStatus
	Nov 02 14:14:22 no-preload-150469 crio[680]: time="2025-11-02T14:14:22.037746984Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=2319875c-b7a1-4291-9797-3ea11021733c name=/runtime.v1.ImageService/ImageStatus
	Nov 02 14:14:22 no-preload-150469 crio[680]: time="2025-11-02T14:14:22.039141967Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-cwt4d/dashboard-metrics-scraper" id=4092539a-ce89-4ec1-b6b0-3be48c0228df name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 14:14:22 no-preload-150469 crio[680]: time="2025-11-02T14:14:22.039269918Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 14:14:22 no-preload-150469 crio[680]: time="2025-11-02T14:14:22.04684894Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 14:14:22 no-preload-150469 crio[680]: time="2025-11-02T14:14:22.047447264Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 14:14:22 no-preload-150469 crio[680]: time="2025-11-02T14:14:22.064635143Z" level=info msg="Created container 39bd3315676431d20c326f0fa08a65e0c6fe873bc142b56bb40acbe91691b013: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-cwt4d/dashboard-metrics-scraper" id=4092539a-ce89-4ec1-b6b0-3be48c0228df name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 14:14:22 no-preload-150469 crio[680]: time="2025-11-02T14:14:22.070792884Z" level=info msg="Starting container: 39bd3315676431d20c326f0fa08a65e0c6fe873bc142b56bb40acbe91691b013" id=9848a94e-b78b-4b38-8c29-b7ed22c0e4d2 name=/runtime.v1.RuntimeService/StartContainer
	Nov 02 14:14:22 no-preload-150469 crio[680]: time="2025-11-02T14:14:22.072741209Z" level=info msg="Started container" PID=1746 containerID=39bd3315676431d20c326f0fa08a65e0c6fe873bc142b56bb40acbe91691b013 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-cwt4d/dashboard-metrics-scraper id=9848a94e-b78b-4b38-8c29-b7ed22c0e4d2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=fa7aa7886844b47270819070de4911693f6e80fa90662bb694ce1257a79749a2
	Nov 02 14:14:22 no-preload-150469 conmon[1744]: conmon 39bd3315676431d20c32 <ninfo>: container 1746 exited with status 1
	Nov 02 14:14:22 no-preload-150469 crio[680]: time="2025-11-02T14:14:22.483836188Z" level=info msg="Removing container: 92f73f70d2f36d755e2871a57f722d79419cc0cb5b4d8ec45923e94e2457e620" id=534335d9-fd32-4c0a-9928-b839e8c4c474 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 02 14:14:22 no-preload-150469 crio[680]: time="2025-11-02T14:14:22.490769412Z" level=info msg="Error loading conmon cgroup of container 92f73f70d2f36d755e2871a57f722d79419cc0cb5b4d8ec45923e94e2457e620: cgroup deleted" id=534335d9-fd32-4c0a-9928-b839e8c4c474 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 02 14:14:22 no-preload-150469 crio[680]: time="2025-11-02T14:14:22.493746546Z" level=info msg="Removed container 92f73f70d2f36d755e2871a57f722d79419cc0cb5b4d8ec45923e94e2457e620: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-cwt4d/dashboard-metrics-scraper" id=534335d9-fd32-4c0a-9928-b839e8c4c474 name=/runtime.v1.RuntimeService/RemoveContainer
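
The create/start/exit/remove cycle above is the crash loop of dashboard-metrics-scraper that the kubelet section below reports as CrashLoopBackOff. From the node it can be inspected directly with crictl, using the container ID shown above:

    # List all attempts (including exited ones) and dump the latest crash's output
    sudo crictl ps -a --name dashboard-metrics-scraper
    sudo crictl logs 39bd3315676431d20c326f0fa08a65e0c6fe873bc142b56bb40acbe91691b013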
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	39bd331567643       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           6 seconds ago       Exited              dashboard-metrics-scraper   3                   fa7aa7886844b       dashboard-metrics-scraper-6ffb444bf9-cwt4d   kubernetes-dashboard
	bb15dea2065f6       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           21 seconds ago      Running             storage-provisioner         2                   9d26ea7dcdf3d       storage-provisioner                          kube-system
	c1a14fc8d34ef       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   36 seconds ago      Running             kubernetes-dashboard        0                   c6b43c3587b93       kubernetes-dashboard-855c9754f9-px4fq        kubernetes-dashboard
	5c16fe809384b       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           51 seconds ago      Running             coredns                     1                   c460154d58cc7       coredns-66bc5c9577-wkgrq                     kube-system
	874992340b7bc       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           51 seconds ago      Running             busybox                     1                   b688764b9498d       busybox                                      default
	65de798ea26c1       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           51 seconds ago      Running             kindnet-cni                 1                   c22e86b99cac6       kindnet-vm84g                                kube-system
	8427b59e04cb8       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           51 seconds ago      Running             kube-proxy                  1                   5201af6ec853b       kube-proxy-qg9np                             kube-system
	7f54f601eb74c       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           51 seconds ago      Exited              storage-provisioner         1                   9d26ea7dcdf3d       storage-provisioner                          kube-system
	0d290268ce1ba       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           59 seconds ago      Running             kube-apiserver              1                   9086c7876f243       kube-apiserver-no-preload-150469             kube-system
	ae39d005c17d3       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           59 seconds ago      Running             etcd                        1                   c7dfe31180d59       etcd-no-preload-150469                       kube-system
	a519dcf9b13e8       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           59 seconds ago      Running             kube-scheduler              1                   6a12248093918       kube-scheduler-no-preload-150469             kube-system
	78689f8cb995b       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           59 seconds ago      Running             kube-controller-manager     1                   eecc9581ca9b6       kube-controller-manager-no-preload-150469    kube-system
	
	
	==> coredns [5c16fe809384bbef876c2382a1b0a8984689ea91e90b0f11e1c3d5d2e31b593e] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58920 - 32943 "HINFO IN 1499161906379287752.3893016657460999203. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013003229s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
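
The i/o timeouts above are CoreDNS failing to reach the in-cluster API VIP (10.96.0.1:443) during the restart window; they stop once kube-proxy and kindnet have re-programmed the node, and both are shown recovering in their sections below. The service wiring behind that VIP can be sanity-checked with:

    # The default/kubernetes service fronts the API server at the cluster VIP
    kubectl get svc kubernetes -o wide
    kubectl get --raw /readyz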
	
	
	==> describe nodes <==
	Name:               no-preload-150469
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-150469
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=170a9221ec214abbddb4c7cdac340516a92b239a
	                    minikube.k8s.io/name=no-preload-150469
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_02T14_12_32_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 02 Nov 2025 14:12:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-150469
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 02 Nov 2025 14:14:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 02 Nov 2025 14:14:06 +0000   Sun, 02 Nov 2025 14:12:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 02 Nov 2025 14:14:06 +0000   Sun, 02 Nov 2025 14:12:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 02 Nov 2025 14:14:06 +0000   Sun, 02 Nov 2025 14:12:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 02 Nov 2025 14:14:06 +0000   Sun, 02 Nov 2025 14:12:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-150469
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                132ded7b-9d34-4b24-9227-0ca0ca7ef647
	  Boot ID:                    4c303cf4-c0b6-4c0d-aa1a-d7509feb15e7
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 coredns-66bc5c9577-wkgrq                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     112s
	  kube-system                 etcd-no-preload-150469                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         119s
	  kube-system                 kindnet-vm84g                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      112s
	  kube-system                 kube-apiserver-no-preload-150469              250m (12%)    0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-controller-manager-no-preload-150469     200m (10%)    0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-proxy-qg9np                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-scheduler-no-preload-150469              100m (5%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-cwt4d    0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-px4fq         0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 111s                 kube-proxy       
	  Normal   Starting                 50s                  kube-proxy       
	  Normal   Starting                 2m4s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m4s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m4s (x8 over 2m4s)  kubelet          Node no-preload-150469 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m4s (x8 over 2m4s)  kubelet          Node no-preload-150469 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m4s (x8 over 2m4s)  kubelet          Node no-preload-150469 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    117s                 kubelet          Node no-preload-150469 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 117s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  117s                 kubelet          Node no-preload-150469 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     117s                 kubelet          Node no-preload-150469 status is now: NodeHasSufficientPID
	  Normal   Starting                 117s                 kubelet          Starting kubelet.
	  Normal   RegisteredNode           113s                 node-controller  Node no-preload-150469 event: Registered Node no-preload-150469 in Controller
	  Normal   NodeReady                98s                  kubelet          Node no-preload-150469 status is now: NodeReady
	  Normal   Starting                 61s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 61s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  60s (x8 over 61s)    kubelet          Node no-preload-150469 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    60s (x8 over 61s)    kubelet          Node no-preload-150469 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     60s (x8 over 61s)    kubelet          Node no-preload-150469 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           50s                  node-controller  Node no-preload-150469 event: Registered Node no-preload-150469 in Controller
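
The dump above is standard `kubectl describe node` output (labels, conditions, capacity, pod allocations, events) as captured by the log collector, and can be regenerated at any time:

    kubectl describe node no-preload-150469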
	
	
	==> dmesg <==
	[Nov 2 13:52] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:54] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:55] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:56] overlayfs: idmapped layers are currently not supported
	[  +3.515963] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:57] overlayfs: idmapped layers are currently not supported
	[ +24.836033] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:58] overlayfs: idmapped layers are currently not supported
	[ +23.362553] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:59] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:01] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:02] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:03] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:04] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:06] overlayfs: idmapped layers are currently not supported
	[ +50.469589] kauditd_printk_skb: 8 callbacks suppressed
	[Nov 2 14:07] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:08] overlayfs: idmapped layers are currently not supported
	[ +11.089512] overlayfs: idmapped layers are currently not supported
	[ +33.821233] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:09] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:10] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:11] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:13] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:14] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [ae39d005c17d3eece4e0835d8098b6b121095716785eb6ec522a5afe4f89a68c] <==
	{"level":"warn","ts":"2025-11-02T14:13:32.630528Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:13:32.675540Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:13:32.729809Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:13:32.761506Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:13:32.789233Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:13:32.825554Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:13:32.855635Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:13:32.883155Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:13:32.909292Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:13:32.951444Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:13:32.970849Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:13:33.012648Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:13:33.066047Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:13:33.118888Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:13:33.149587Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:13:33.189889Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:13:33.206359Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:13:33.230165Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:13:33.263013Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:13:33.309637Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:13:33.366916Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:13:33.400918Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:13:33.430870Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:13:33.455411Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:13:33.580943Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38014","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 14:14:28 up  2:57,  0 user,  load average: 3.62, 3.37, 2.89
	Linux no-preload-150469 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [65de798ea26c14d06e8e1ca4be95b06f036a986330e5ac827e686e19efdb4346] <==
	I1102 14:13:37.213992       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1102 14:13:37.214228       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1102 14:13:37.214352       1 main.go:148] setting mtu 1500 for CNI 
	I1102 14:13:37.214362       1 main.go:178] kindnetd IP family: "ipv4"
	I1102 14:13:37.214375       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-02T14:13:37Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1102 14:13:37.422122       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1102 14:13:37.422139       1 controller.go:381] "Waiting for informer caches to sync"
	I1102 14:13:37.422148       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1102 14:13:37.422425       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1102 14:14:07.422395       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1102 14:14:07.422395       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1102 14:14:07.422519       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1102 14:14:07.423749       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1102 14:14:08.823267       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1102 14:14:08.823362       1 metrics.go:72] Registering metrics
	I1102 14:14:08.823482       1 controller.go:711] "Syncing nftables rules"
	I1102 14:14:17.425539       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1102 14:14:17.425578       1 main.go:301] handling current node
	I1102 14:14:27.425523       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1102 14:14:27.425567       1 main.go:301] handling current node
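
The "nri plugin exited" line above only means the runtime exposes no NRI socket, so kindnet's optional NRI integration backs off; its informer caches still sync a few lines later. Whether the socket exists can be checked on the node:

    # Absent here, hence kindnet's fallback message
    ls -l /var/run/nri/nri.sock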
	
	
	==> kube-apiserver [0d290268ce1ba1c435beb6a5c872eb4214b0dab49611f26a980150c8cf765731] <==
	I1102 14:13:35.710331       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1102 14:13:35.710360       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1102 14:13:35.713955       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1102 14:13:35.729687       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1102 14:13:35.729888       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1102 14:13:35.729904       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1102 14:13:35.730001       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1102 14:13:35.730036       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1102 14:13:35.731162       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1102 14:13:35.731812       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1102 14:13:35.732310       1 aggregator.go:171] initial CRD sync complete...
	I1102 14:13:35.732328       1 autoregister_controller.go:144] Starting autoregister controller
	I1102 14:13:35.732334       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1102 14:13:35.732341       1 cache.go:39] Caches are synced for autoregister controller
	I1102 14:13:36.125761       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1102 14:13:36.133426       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1102 14:13:37.336613       1 controller.go:667] quota admission added evaluator for: namespaces
	I1102 14:13:37.561563       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1102 14:13:37.705667       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1102 14:13:37.741844       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1102 14:13:37.946154       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.104.238.92"}
	I1102 14:13:37.972687       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.184.177"}
	I1102 14:13:38.898583       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1102 14:13:38.972255       1 controller.go:667] quota admission added evaluator for: endpoints
	I1102 14:13:39.262470       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [78689f8cb995ba031b8e14be6ecf0557f861d2852066ab8bb9395ec9c1275bcc] <==
	I1102 14:13:38.861660       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1102 14:13:38.864300       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1102 14:13:38.867531       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1102 14:13:38.868710       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1102 14:13:38.868732       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1102 14:13:38.869847       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1102 14:13:38.869888       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1102 14:13:38.869897       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1102 14:13:38.872090       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1102 14:13:38.873263       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1102 14:13:38.881509       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1102 14:13:38.885808       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1102 14:13:38.894773       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1102 14:13:38.894897       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1102 14:13:38.897692       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1102 14:13:38.899152       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1102 14:13:38.901715       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1102 14:13:38.904008       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1102 14:13:38.906265       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1102 14:13:38.906274       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1102 14:13:38.912495       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1102 14:13:38.915752       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1102 14:13:38.928361       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1102 14:13:38.928429       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1102 14:13:38.928438       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [8427b59e04cb889ecd2b15bba53ef56dd6e97a4b0e3a181a69cb0987e6740e29] <==
	I1102 14:13:37.433562       1 server_linux.go:53] "Using iptables proxy"
	I1102 14:13:37.674034       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1102 14:13:37.802877       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1102 14:13:37.803039       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1102 14:13:37.803190       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1102 14:13:37.890935       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1102 14:13:37.890995       1 server_linux.go:132] "Using iptables Proxier"
	I1102 14:13:37.894957       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1102 14:13:37.895284       1 server.go:527] "Version info" version="v1.34.1"
	I1102 14:13:37.895299       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1102 14:13:37.896419       1 config.go:200] "Starting service config controller"
	I1102 14:13:37.896429       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1102 14:13:37.901478       1 config.go:106] "Starting endpoint slice config controller"
	I1102 14:13:37.901511       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1102 14:13:37.901538       1 config.go:403] "Starting serviceCIDR config controller"
	I1102 14:13:37.901543       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1102 14:13:37.901913       1 config.go:309] "Starting node config controller"
	I1102 14:13:37.901925       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1102 14:13:37.901931       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1102 14:13:37.999419       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1102 14:13:38.002738       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1102 14:13:38.002833       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [a519dcf9b13e8f0169f57e526ea9548babc82276dc427bc14eda821e798d8cc0] <==
	I1102 14:13:32.020548       1 serving.go:386] Generated self-signed cert in-memory
	W1102 14:13:34.991001       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1102 14:13:34.991341       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1102 14:13:34.991383       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1102 14:13:34.991425       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1102 14:13:35.628730       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1102 14:13:35.629089       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1102 14:13:35.633216       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1102 14:13:35.657818       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1102 14:13:35.660744       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1102 14:13:35.660900       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1102 14:13:35.766036       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 02 14:13:40 no-preload-150469 kubelet[798]: W1102 14:13:40.029988     798 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/aa4ae44e602159b250d81a350b59577c08056d4e598bda926fd0010cba92fa48/crio-c6b43c3587b93b1f5adfc86039f75e8660fe9bb263675ca5f9a064ffe0ed754b WatchSource:0}: Error finding container c6b43c3587b93b1f5adfc86039f75e8660fe9bb263675ca5f9a064ffe0ed754b: Status 404 returned error can't find the container with id c6b43c3587b93b1f5adfc86039f75e8660fe9bb263675ca5f9a064ffe0ed754b
	Nov 02 14:13:41 no-preload-150469 kubelet[798]: I1102 14:13:41.756865     798 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 02 14:13:45 no-preload-150469 kubelet[798]: I1102 14:13:45.335237     798 scope.go:117] "RemoveContainer" containerID="2f58c2637b7d615787ac41135c3d5e606844f3c5d13f03acea3341044887ce4a"
	Nov 02 14:13:46 no-preload-150469 kubelet[798]: I1102 14:13:46.355009     798 scope.go:117] "RemoveContainer" containerID="2f58c2637b7d615787ac41135c3d5e606844f3c5d13f03acea3341044887ce4a"
	Nov 02 14:13:46 no-preload-150469 kubelet[798]: I1102 14:13:46.376120     798 scope.go:117] "RemoveContainer" containerID="269b475b98840f69bd2f5d81b21a41aebd8b8485116341ae5b6e85e0abe234fe"
	Nov 02 14:13:46 no-preload-150469 kubelet[798]: E1102 14:13:46.376382     798 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-cwt4d_kubernetes-dashboard(1026aa11-6b87-41b2-bca7-44f8bd760fc9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-cwt4d" podUID="1026aa11-6b87-41b2-bca7-44f8bd760fc9"
	Nov 02 14:13:47 no-preload-150469 kubelet[798]: I1102 14:13:47.361000     798 scope.go:117] "RemoveContainer" containerID="269b475b98840f69bd2f5d81b21a41aebd8b8485116341ae5b6e85e0abe234fe"
	Nov 02 14:13:47 no-preload-150469 kubelet[798]: E1102 14:13:47.365602     798 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-cwt4d_kubernetes-dashboard(1026aa11-6b87-41b2-bca7-44f8bd760fc9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-cwt4d" podUID="1026aa11-6b87-41b2-bca7-44f8bd760fc9"
	Nov 02 14:13:49 no-preload-150469 kubelet[798]: I1102 14:13:49.923286     798 scope.go:117] "RemoveContainer" containerID="269b475b98840f69bd2f5d81b21a41aebd8b8485116341ae5b6e85e0abe234fe"
	Nov 02 14:13:49 no-preload-150469 kubelet[798]: E1102 14:13:49.923452     798 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-cwt4d_kubernetes-dashboard(1026aa11-6b87-41b2-bca7-44f8bd760fc9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-cwt4d" podUID="1026aa11-6b87-41b2-bca7-44f8bd760fc9"
	Nov 02 14:14:01 no-preload-150469 kubelet[798]: I1102 14:14:01.033674     798 scope.go:117] "RemoveContainer" containerID="269b475b98840f69bd2f5d81b21a41aebd8b8485116341ae5b6e85e0abe234fe"
	Nov 02 14:14:01 no-preload-150469 kubelet[798]: I1102 14:14:01.427861     798 scope.go:117] "RemoveContainer" containerID="269b475b98840f69bd2f5d81b21a41aebd8b8485116341ae5b6e85e0abe234fe"
	Nov 02 14:14:01 no-preload-150469 kubelet[798]: I1102 14:14:01.428296     798 scope.go:117] "RemoveContainer" containerID="92f73f70d2f36d755e2871a57f722d79419cc0cb5b4d8ec45923e94e2457e620"
	Nov 02 14:14:01 no-preload-150469 kubelet[798]: E1102 14:14:01.428480     798 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-cwt4d_kubernetes-dashboard(1026aa11-6b87-41b2-bca7-44f8bd760fc9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-cwt4d" podUID="1026aa11-6b87-41b2-bca7-44f8bd760fc9"
	Nov 02 14:14:01 no-preload-150469 kubelet[798]: I1102 14:14:01.481503     798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-px4fq" podStartSLOduration=10.678669113 podStartE2EDuration="22.481474303s" podCreationTimestamp="2025-11-02 14:13:39 +0000 UTC" firstStartedPulling="2025-11-02 14:13:40.038058358 +0000 UTC m=+12.408311868" lastFinishedPulling="2025-11-02 14:13:51.84086354 +0000 UTC m=+24.211117058" observedRunningTime="2025-11-02 14:13:52.429960324 +0000 UTC m=+24.800213842" watchObservedRunningTime="2025-11-02 14:14:01.481474303 +0000 UTC m=+33.851727813"
	Nov 02 14:14:07 no-preload-150469 kubelet[798]: I1102 14:14:07.444306     798 scope.go:117] "RemoveContainer" containerID="7f54f601eb74c81b089dd8333c2ac1ee002c336d383d6ca8f01b893371d53820"
	Nov 02 14:14:09 no-preload-150469 kubelet[798]: I1102 14:14:09.923194     798 scope.go:117] "RemoveContainer" containerID="92f73f70d2f36d755e2871a57f722d79419cc0cb5b4d8ec45923e94e2457e620"
	Nov 02 14:14:09 no-preload-150469 kubelet[798]: E1102 14:14:09.923373     798 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-cwt4d_kubernetes-dashboard(1026aa11-6b87-41b2-bca7-44f8bd760fc9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-cwt4d" podUID="1026aa11-6b87-41b2-bca7-44f8bd760fc9"
	Nov 02 14:14:22 no-preload-150469 kubelet[798]: I1102 14:14:22.035386     798 scope.go:117] "RemoveContainer" containerID="92f73f70d2f36d755e2871a57f722d79419cc0cb5b4d8ec45923e94e2457e620"
	Nov 02 14:14:22 no-preload-150469 kubelet[798]: I1102 14:14:22.482026     798 scope.go:117] "RemoveContainer" containerID="92f73f70d2f36d755e2871a57f722d79419cc0cb5b4d8ec45923e94e2457e620"
	Nov 02 14:14:22 no-preload-150469 kubelet[798]: I1102 14:14:22.482313     798 scope.go:117] "RemoveContainer" containerID="39bd3315676431d20c326f0fa08a65e0c6fe873bc142b56bb40acbe91691b013"
	Nov 02 14:14:22 no-preload-150469 kubelet[798]: E1102 14:14:22.482471     798 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-cwt4d_kubernetes-dashboard(1026aa11-6b87-41b2-bca7-44f8bd760fc9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-cwt4d" podUID="1026aa11-6b87-41b2-bca7-44f8bd760fc9"
	Nov 02 14:14:25 no-preload-150469 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 02 14:14:26 no-preload-150469 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 02 14:14:26 no-preload-150469 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
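	Note how the back-off intervals in the kubelet log above double per restart (10s, 20s, 40s); kubelet caps the CrashLoopBackOff delay at five minutes. A minimal sketch for confirming the restart count from outside the node, assuming kubectl can reach the cluster (the pod name is taken from the log above):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Query the restart count of the crash-looping scraper pod seen in the kubelet log.
		out, err := exec.Command("kubectl", "--context", "no-preload-150469",
			"-n", "kubernetes-dashboard", "get", "pod",
			"dashboard-metrics-scraper-6ffb444bf9-cwt4d",
			"-o", "jsonpath={.status.containerStatuses[0].restartCount}").CombinedOutput()
		if err != nil {
			fmt.Println("kubectl failed:", err, string(out))
			return
		}
		fmt.Println("restartCount:", string(out))
	}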
	
	
	==> kubernetes-dashboard [c1a14fc8d34ef2b0318dcfde9cb1a935bc5bd449b2ddc86097fda87d37278646] <==
	2025/11/02 14:13:51 Using namespace: kubernetes-dashboard
	2025/11/02 14:13:51 Using in-cluster config to connect to apiserver
	2025/11/02 14:13:51 Using secret token for csrf signing
	2025/11/02 14:13:51 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/02 14:13:51 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/02 14:13:51 Successful initial request to the apiserver, version: v1.34.1
	2025/11/02 14:13:51 Generating JWE encryption key
	2025/11/02 14:13:51 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/02 14:13:51 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/02 14:13:52 Initializing JWE encryption key from synchronized object
	2025/11/02 14:13:52 Creating in-cluster Sidecar client
	2025/11/02 14:13:52 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/02 14:13:52 Serving insecurely on HTTP port: 9090
	2025/11/02 14:14:22 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/02 14:13:51 Starting overwatch
	
	
	==> storage-provisioner [7f54f601eb74c81b089dd8333c2ac1ee002c336d383d6ca8f01b893371d53820] <==
	I1102 14:13:37.247454       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1102 14:14:07.249780       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
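	The fatal line above is the provisioner's start-up version probe timing out against the apiserver's ClusterIP. A minimal sketch of the same probe, assuming it runs inside the cluster network (10.96.0.1 is only routable there); certificate verification is skipped because only reachability is being tested:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 32 * time.Second, // same timeout as the failing request in the log
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://10.96.0.1:443/version")
		if err != nil {
			fmt.Println("apiserver unreachable:", err) // the i/o timeout case seen above
			return
		}
		defer resp.Body.Close()
		fmt.Println("apiserver reachable:", resp.Status)
	}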
	
	
	==> storage-provisioner [bb15dea2065f6d7a3be2daadead55036243cbc62d492b1a23b588e6a235bebd0] <==
	I1102 14:14:07.504953       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1102 14:14:07.519141       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1102 14:14:07.519192       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1102 14:14:07.521567       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:14:10.977567       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:14:15.238903       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:14:18.837495       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:14:21.891215       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:14:24.913072       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:14:24.918260       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1102 14:14:24.918498       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1102 14:14:24.918716       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-150469_ba4449b4-06f0-40b1-8d13-8fd20c3cb78c!
	I1102 14:14:24.919425       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8294f6b4-bba2-4f06-8d40-727928497485", APIVersion:"v1", ResourceVersion:"680", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-150469_ba4449b4-06f0-40b1-8d13-8fd20c3cb78c became leader
	W1102 14:14:24.926911       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:14:24.934650       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1102 14:14:25.019215       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-150469_ba4449b4-06f0-40b1-8d13-8fd20c3cb78c!
	W1102 14:14:26.937729       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:14:26.942804       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:14:28.946688       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:14:28.953656       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
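	The repeated warnings above accompany the provisioner's Endpoints-based leader election (the k8s.io-minikube-hostpath lock object is a v1 Endpoints). For reference, a minimal client-go sketch of the discovery.k8s.io/v1 EndpointSlice API the warning points to, assuming in-cluster credentials:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
		// List EndpointSlices instead of the deprecated v1 Endpoints.
		slices, err := client.DiscoveryV1().EndpointSlices("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, s := range slices.Items {
			fmt.Printf("%s: %d endpoints\n", s.Name, len(s.Endpoints))
		}
	}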
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-150469 -n no-preload-150469
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-150469 -n no-preload-150469: exit status 2 (392.600032ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
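As an aside, status --format takes a Go template over minikube's status struct, so several fields can be read in one call; a usage sketch:

    out/minikube-linux-arm64 status --format='{{.Host}} {{.Kubelet}} {{.APIServer}}' -p no-preload-150469

Here the exit code, not the printed text, carries the signal: the field says Running while exit status 2 flags a component not in its desired state, which the harness treats as possibly OK.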
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-150469 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-150469
helpers_test.go:243: (dbg) docker inspect no-preload-150469:

-- stdout --
	[
	    {
	        "Id": "aa4ae44e602159b250d81a350b59577c08056d4e598bda926fd0010cba92fa48",
	        "Created": "2025-11-02T14:11:51.659937726Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 483385,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-02T14:13:19.26401586Z",
	            "FinishedAt": "2025-11-02T14:13:18.275064561Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/aa4ae44e602159b250d81a350b59577c08056d4e598bda926fd0010cba92fa48/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/aa4ae44e602159b250d81a350b59577c08056d4e598bda926fd0010cba92fa48/hostname",
	        "HostsPath": "/var/lib/docker/containers/aa4ae44e602159b250d81a350b59577c08056d4e598bda926fd0010cba92fa48/hosts",
	        "LogPath": "/var/lib/docker/containers/aa4ae44e602159b250d81a350b59577c08056d4e598bda926fd0010cba92fa48/aa4ae44e602159b250d81a350b59577c08056d4e598bda926fd0010cba92fa48-json.log",
	        "Name": "/no-preload-150469",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-150469:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-150469",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "aa4ae44e602159b250d81a350b59577c08056d4e598bda926fd0010cba92fa48",
	                "LowerDir": "/var/lib/docker/overlay2/8a6aaf28eb401f956308bc06ae686510e116c66e0d46b46263a0d8a79fbe08f8-init/diff:/var/lib/docker/overlay2/53058afe6acd7391f639f551e07f446f6a39a991b699aea18507cb21a1652e97/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8a6aaf28eb401f956308bc06ae686510e116c66e0d46b46263a0d8a79fbe08f8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8a6aaf28eb401f956308bc06ae686510e116c66e0d46b46263a0d8a79fbe08f8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8a6aaf28eb401f956308bc06ae686510e116c66e0d46b46263a0d8a79fbe08f8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-150469",
	                "Source": "/var/lib/docker/volumes/no-preload-150469/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-150469",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-150469",
	                "name.minikube.sigs.k8s.io": "no-preload-150469",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a96558b0c6411858feeba32ca4202ceab558fb3fadb76780a30510cdcfbb7a37",
	            "SandboxKey": "/var/run/docker/netns/a96558b0c641",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33436"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33437"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33440"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33438"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33439"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-150469": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f6:07:3d:1d:b8:4d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "04b125ad348b31edf412b7cd44a1ba32814c5e6b6c1a080d912d4d879cabcf90",
	                    "EndpointID": "9fd1467ede6a7380b6f34bda1444a1118552a98f3fadedfe588c2617362660bd",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-150469",
	                        "aa4ae44e6021"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
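The full inspect dump above is rarely needed for triage; docker inspect's -f flag takes the same Go template syntax, so a usage sketch pulling just the fields this post-mortem cares about:

    docker inspect -f '{{.State.Status}} {{.State.Paused}} {{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' no-preload-150469

Against the state above this would print: running false 192.168.76.2.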
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-150469 -n no-preload-150469
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-150469 -n no-preload-150469: exit status 2 (442.651481ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-150469 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-150469 logs -n 25: (1.364401792s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p cert-expiration-114321 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                                        │ cert-expiration-114321   │ jenkins │ v1.37.0 │ 02 Nov 25 14:07 UTC │ 02 Nov 25 14:08 UTC │
	│ delete  │ -p force-systemd-env-263133                                                                                                                                                                                                                   │ force-systemd-env-263133 │ jenkins │ v1.37.0 │ 02 Nov 25 14:08 UTC │ 02 Nov 25 14:08 UTC │
	│ start   │ -p cert-options-935084 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-935084      │ jenkins │ v1.37.0 │ 02 Nov 25 14:08 UTC │ 02 Nov 25 14:09 UTC │
	│ ssh     │ cert-options-935084 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-935084      │ jenkins │ v1.37.0 │ 02 Nov 25 14:09 UTC │ 02 Nov 25 14:09 UTC │
	│ ssh     │ -p cert-options-935084 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-935084      │ jenkins │ v1.37.0 │ 02 Nov 25 14:09 UTC │ 02 Nov 25 14:09 UTC │
	│ delete  │ -p cert-options-935084                                                                                                                                                                                                                        │ cert-options-935084      │ jenkins │ v1.37.0 │ 02 Nov 25 14:09 UTC │ 02 Nov 25 14:09 UTC │
	│ start   │ -p old-k8s-version-873713 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-873713   │ jenkins │ v1.37.0 │ 02 Nov 25 14:09 UTC │ 02 Nov 25 14:10 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-873713 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-873713   │ jenkins │ v1.37.0 │ 02 Nov 25 14:10 UTC │                     │
	│ stop    │ -p old-k8s-version-873713 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-873713   │ jenkins │ v1.37.0 │ 02 Nov 25 14:10 UTC │ 02 Nov 25 14:10 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-873713 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-873713   │ jenkins │ v1.37.0 │ 02 Nov 25 14:10 UTC │ 02 Nov 25 14:10 UTC │
	│ start   │ -p old-k8s-version-873713 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-873713   │ jenkins │ v1.37.0 │ 02 Nov 25 14:10 UTC │ 02 Nov 25 14:11 UTC │
	│ start   │ -p cert-expiration-114321 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-114321   │ jenkins │ v1.37.0 │ 02 Nov 25 14:11 UTC │ 02 Nov 25 14:13 UTC │
	│ image   │ old-k8s-version-873713 image list --format=json                                                                                                                                                                                               │ old-k8s-version-873713   │ jenkins │ v1.37.0 │ 02 Nov 25 14:11 UTC │ 02 Nov 25 14:11 UTC │
	│ pause   │ -p old-k8s-version-873713 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-873713   │ jenkins │ v1.37.0 │ 02 Nov 25 14:11 UTC │                     │
	│ delete  │ -p old-k8s-version-873713                                                                                                                                                                                                                     │ old-k8s-version-873713   │ jenkins │ v1.37.0 │ 02 Nov 25 14:11 UTC │ 02 Nov 25 14:11 UTC │
	│ delete  │ -p old-k8s-version-873713                                                                                                                                                                                                                     │ old-k8s-version-873713   │ jenkins │ v1.37.0 │ 02 Nov 25 14:11 UTC │ 02 Nov 25 14:11 UTC │
	│ start   │ -p no-preload-150469 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-150469        │ jenkins │ v1.37.0 │ 02 Nov 25 14:11 UTC │ 02 Nov 25 14:12 UTC │
	│ addons  │ enable metrics-server -p no-preload-150469 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-150469        │ jenkins │ v1.37.0 │ 02 Nov 25 14:13 UTC │                     │
	│ stop    │ -p no-preload-150469 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-150469        │ jenkins │ v1.37.0 │ 02 Nov 25 14:13 UTC │ 02 Nov 25 14:13 UTC │
	│ addons  │ enable dashboard -p no-preload-150469 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-150469        │ jenkins │ v1.37.0 │ 02 Nov 25 14:13 UTC │ 02 Nov 25 14:13 UTC │
	│ start   │ -p no-preload-150469 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-150469        │ jenkins │ v1.37.0 │ 02 Nov 25 14:13 UTC │ 02 Nov 25 14:14 UTC │
	│ delete  │ -p cert-expiration-114321                                                                                                                                                                                                                     │ cert-expiration-114321   │ jenkins │ v1.37.0 │ 02 Nov 25 14:13 UTC │ 02 Nov 25 14:13 UTC │
	│ start   │ -p embed-certs-955646 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-955646       │ jenkins │ v1.37.0 │ 02 Nov 25 14:13 UTC │                     │
	│ image   │ no-preload-150469 image list --format=json                                                                                                                                                                                                    │ no-preload-150469        │ jenkins │ v1.37.0 │ 02 Nov 25 14:14 UTC │ 02 Nov 25 14:14 UTC │
	│ pause   │ -p no-preload-150469 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-150469        │ jenkins │ v1.37.0 │ 02 Nov 25 14:14 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/02 14:13:32
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1102 14:13:32.802295  485527 out.go:360] Setting OutFile to fd 1 ...
	I1102 14:13:32.802402  485527 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 14:13:32.802457  485527 out.go:374] Setting ErrFile to fd 2...
	I1102 14:13:32.802464  485527 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 14:13:32.802724  485527 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-293314/.minikube/bin
	I1102 14:13:32.803133  485527 out.go:368] Setting JSON to false
	I1102 14:13:32.804093  485527 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10565,"bootTime":1762082248,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1102 14:13:32.804153  485527 start.go:143] virtualization:  
	I1102 14:13:32.807816  485527 out.go:179] * [embed-certs-955646] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1102 14:13:32.812492  485527 out.go:179]   - MINIKUBE_LOCATION=21808
	I1102 14:13:32.812570  485527 notify.go:221] Checking for updates...
	I1102 14:13:32.819110  485527 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1102 14:13:32.822290  485527 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-293314/kubeconfig
	I1102 14:13:32.825432  485527 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-293314/.minikube
	I1102 14:13:32.828564  485527 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1102 14:13:32.831675  485527 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1102 14:13:32.835239  485527 config.go:182] Loaded profile config "no-preload-150469": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 14:13:32.835338  485527 driver.go:422] Setting default libvirt URI to qemu:///system
	I1102 14:13:32.895226  485527 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1102 14:13:32.895374  485527 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 14:13:33.009592  485527 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-02 14:13:32.992790107 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1102 14:13:33.009719  485527 docker.go:319] overlay module found
	I1102 14:13:33.013023  485527 out.go:179] * Using the docker driver based on user configuration
	I1102 14:13:33.015900  485527 start.go:309] selected driver: docker
	I1102 14:13:33.015920  485527 start.go:930] validating driver "docker" against <nil>
	I1102 14:13:33.015934  485527 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1102 14:13:33.016649  485527 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 14:13:33.134911  485527 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-02 14:13:33.123830709 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1102 14:13:33.135067  485527 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1102 14:13:33.135332  485527 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1102 14:13:33.138367  485527 out.go:179] * Using Docker driver with root privileges
	I1102 14:13:33.141236  485527 cni.go:84] Creating CNI manager for ""
	I1102 14:13:33.141314  485527 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1102 14:13:33.141329  485527 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1102 14:13:33.141417  485527 start.go:353] cluster config:
	{Name:embed-certs-955646 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-955646 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1102 14:13:33.144565  485527 out.go:179] * Starting "embed-certs-955646" primary control-plane node in "embed-certs-955646" cluster
	I1102 14:13:33.147386  485527 cache.go:124] Beginning downloading kic base image for docker with crio
	I1102 14:13:33.150432  485527 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1102 14:13:33.153359  485527 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1102 14:13:33.153418  485527 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21808-293314/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1102 14:13:33.153437  485527 cache.go:59] Caching tarball of preloaded images
	I1102 14:13:33.153524  485527 preload.go:233] Found /home/jenkins/minikube-integration/21808-293314/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1102 14:13:33.153538  485527 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1102 14:13:33.153652  485527 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/embed-certs-955646/config.json ...
	I1102 14:13:33.153676  485527 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/embed-certs-955646/config.json: {Name:mka4dc94076eff42daaba5da6a6a891c3a2e48ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 14:13:33.153830  485527 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1102 14:13:33.180403  485527 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1102 14:13:33.180429  485527 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1102 14:13:33.180442  485527 cache.go:233] Successfully downloaded all kic artifacts
	I1102 14:13:33.180465  485527 start.go:360] acquireMachinesLock for embed-certs-955646: {Name:mke26bb2e28d5dc8d577d151206240e9d92b1828 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1102 14:13:33.180580  485527 start.go:364] duration metric: took 94.598µs to acquireMachinesLock for "embed-certs-955646"
	I1102 14:13:33.180611  485527 start.go:93] Provisioning new machine with config: &{Name:embed-certs-955646 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-955646 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1102 14:13:33.180689  485527 start.go:125] createHost starting for "" (driver="docker")
	I1102 14:13:29.399527  483255 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1102 14:13:29.399557  483255 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1102 14:13:29.399630  483255 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-150469
	I1102 14:13:29.407598  483255 addons.go:239] Setting addon default-storageclass=true in "no-preload-150469"
	W1102 14:13:29.407620  483255 addons.go:248] addon default-storageclass should already be in state true
	I1102 14:13:29.407644  483255 host.go:66] Checking if "no-preload-150469" exists ...
	I1102 14:13:29.408056  483255 cli_runner.go:164] Run: docker container inspect no-preload-150469 --format={{.State.Status}}
	I1102 14:13:29.480216  483255 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33436 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/no-preload-150469/id_rsa Username:docker}
	I1102 14:13:29.492656  483255 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33436 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/no-preload-150469/id_rsa Username:docker}
	I1102 14:13:29.502368  483255 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1102 14:13:29.502389  483255 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1102 14:13:29.502453  483255 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-150469
	I1102 14:13:29.577540  483255 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33436 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/no-preload-150469/id_rsa Username:docker}
	I1102 14:13:29.774659  483255 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1102 14:13:29.811465  483255 node_ready.go:35] waiting up to 6m0s for node "no-preload-150469" to be "Ready" ...
	I1102 14:13:29.842862  483255 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1102 14:13:29.842936  483255 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1102 14:13:29.915412  483255 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1102 14:13:29.942323  483255 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1102 14:13:29.942352  483255 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1102 14:13:30.004196  483255 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1102 14:13:30.075519  483255 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1102 14:13:30.075542  483255 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1102 14:13:30.188726  483255 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1102 14:13:30.188753  483255 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1102 14:13:30.323456  483255 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1102 14:13:30.323532  483255 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1102 14:13:30.404845  483255 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1102 14:13:30.404866  483255 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1102 14:13:30.474506  483255 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1102 14:13:30.474527  483255 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1102 14:13:30.541461  483255 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1102 14:13:30.541482  483255 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1102 14:13:30.584446  483255 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1102 14:13:30.584517  483255 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1102 14:13:30.614357  483255 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1102 14:13:33.184018  485527 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1102 14:13:33.184242  485527 start.go:159] libmachine.API.Create for "embed-certs-955646" (driver="docker")
	I1102 14:13:33.184274  485527 client.go:173] LocalClient.Create starting
	I1102 14:13:33.184353  485527 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem
	I1102 14:13:33.184390  485527 main.go:143] libmachine: Decoding PEM data...
	I1102 14:13:33.184407  485527 main.go:143] libmachine: Parsing certificate...
	I1102 14:13:33.184463  485527 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21808-293314/.minikube/certs/cert.pem
	I1102 14:13:33.184487  485527 main.go:143] libmachine: Decoding PEM data...
	I1102 14:13:33.184500  485527 main.go:143] libmachine: Parsing certificate...
	I1102 14:13:33.184861  485527 cli_runner.go:164] Run: docker network inspect embed-certs-955646 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1102 14:13:33.219852  485527 cli_runner.go:211] docker network inspect embed-certs-955646 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1102 14:13:33.219942  485527 network_create.go:284] running [docker network inspect embed-certs-955646] to gather additional debugging logs...
	I1102 14:13:33.219959  485527 cli_runner.go:164] Run: docker network inspect embed-certs-955646
	W1102 14:13:33.248174  485527 cli_runner.go:211] docker network inspect embed-certs-955646 returned with exit code 1
	I1102 14:13:33.248214  485527 network_create.go:287] error running [docker network inspect embed-certs-955646]: docker network inspect embed-certs-955646: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-955646 not found
	I1102 14:13:33.248229  485527 network_create.go:289] output of [docker network inspect embed-certs-955646]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-955646 not found
	
	** /stderr **
	I1102 14:13:33.248326  485527 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1102 14:13:33.273741  485527 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-ddf319108ac9 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:66:f7:2d:49:67:ff} reservation:<nil>}
	I1102 14:13:33.274106  485527 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-30b945568040 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:b6:b2:b0:cb:49:d7} reservation:<nil>}
	I1102 14:13:33.274326  485527 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-d23a3a2e266d IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:fa:42:95:8e:ae:52} reservation:<nil>}
	I1102 14:13:33.274600  485527 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-04b125ad348b IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:22:2e:b7:29:19:5f} reservation:<nil>}
	I1102 14:13:33.275120  485527 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a7c050}
	I1102 14:13:33.275143  485527 network_create.go:124] attempt to create docker network embed-certs-955646 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1102 14:13:33.275203  485527 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-955646 embed-certs-955646
	I1102 14:13:33.372821  485527 network_create.go:108] docker network embed-certs-955646 192.168.85.0/24 created
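The subnet scan above can be reproduced by hand: list the subnets of every existing Docker network and create a bridge on the first free 192.168.x.0/24. A minimal sketch, assuming the same candidate stride minikube probes; the network name demo-net is a placeholder:

	# Probe candidate /24s against existing Docker networks, then create a
	# bridge on the first free one (mirrors the network_create step above).
	for third in 49 58 67 76 85; do
	  subnet="192.168.${third}.0/24"
	  taken=$(docker network ls -q | xargs -r docker network inspect \
	            --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}')
	  if ! grep -qx "$subnet" <<<"$taken"; then
	    docker network create --driver=bridge \
	      --subnet="$subnet" --gateway="192.168.${third}.1" \
	      -o com.docker.network.driver.mtu=1500 demo-net
	    break
	  fi
	done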
	I1102 14:13:33.372858  485527 kic.go:121] calculated static IP "192.168.85.2" for the "embed-certs-955646" container
	I1102 14:13:33.372943  485527 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1102 14:13:33.402912  485527 cli_runner.go:164] Run: docker volume create embed-certs-955646 --label name.minikube.sigs.k8s.io=embed-certs-955646 --label created_by.minikube.sigs.k8s.io=true
	I1102 14:13:33.432560  485527 oci.go:103] Successfully created a docker volume embed-certs-955646
	I1102 14:13:33.432646  485527 cli_runner.go:164] Run: docker run --rm --name embed-certs-955646-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-955646 --entrypoint /usr/bin/test -v embed-certs-955646:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1102 14:13:34.104198  485527 oci.go:107] Successfully prepared a docker volume embed-certs-955646
	I1102 14:13:34.104254  485527 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1102 14:13:34.104274  485527 kic.go:194] Starting extracting preloaded images to volume ...
	I1102 14:13:34.104351  485527 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21808-293314/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-955646:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
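The run above is a standard trick for populating a named volume without needing lz4 on the host: bind-mount the tarball read-only and let a short-lived container do the extraction. A sketch; demo-vol and preload.tar.lz4 are placeholder names, and the image must ship tar with lz4 support (the kicbase image used above does):

	# Extract a host-side .tar.lz4 into a named volume via a throwaway container.
	docker volume create demo-vol
	docker run --rm \
	  -v "$PWD/preload.tar.lz4:/preloaded.tar:ro" \
	  -v demo-vol:/extractDir \
	  --entrypoint /usr/bin/tar \
	  gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773 \
	  -I lz4 -xf /preloaded.tar -C /extractDir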
	I1102 14:13:35.613727  483255 node_ready.go:49] node "no-preload-150469" is "Ready"
	I1102 14:13:35.613753  483255 node_ready.go:38] duration metric: took 5.802193247s for node "no-preload-150469" to be "Ready" ...
	I1102 14:13:35.613767  483255 api_server.go:52] waiting for apiserver process to appear ...
	I1102 14:13:35.613825  483255 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1102 14:13:37.782183  483255 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.866739994s)
	I1102 14:13:37.782242  483255 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.778027539s)
	I1102 14:13:37.980345  483255 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (7.365907366s)
	I1102 14:13:37.980542  483255 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.366706464s)
	I1102 14:13:37.980564  483255 api_server.go:72] duration metric: took 8.67646007s to wait for apiserver process to appear ...
	I1102 14:13:37.980570  483255 api_server.go:88] waiting for apiserver healthz status ...
	I1102 14:13:37.980588  483255 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1102 14:13:37.986342  483255 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-150469 addons enable metrics-server
	
	I1102 14:13:37.991190  483255 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1102 14:13:37.996184  483255 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1102 14:13:37.996395  483255 addons.go:515] duration metric: took 8.691980645s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1102 14:13:37.999292  483255 api_server.go:141] control plane version: v1.34.1
	I1102 14:13:37.999322  483255 api_server.go:131] duration metric: took 18.745702ms to wait for apiserver health ...
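The healthz probe is a plain HTTPS GET against the endpoint logged above; by hand it looks like this (-k skips certificate verification since only the status body matters here, and the default system:public-info-viewer RBAC binding is what lets an unauthenticated client read /healthz):

	curl -sk https://192.168.76.2:8443/healthz
	# expected body: ok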
	I1102 14:13:37.999331  483255 system_pods.go:43] waiting for kube-system pods to appear ...
	I1102 14:13:38.024155  483255 system_pods.go:59] 8 kube-system pods found
	I1102 14:13:38.024194  483255 system_pods.go:61] "coredns-66bc5c9577-wkgrq" [90029150-f8ff-484e-a449-fa19206ab6b2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1102 14:13:38.024204  483255 system_pods.go:61] "etcd-no-preload-150469" [a653ec43-d807-4f0b-8111-507615a63de3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1102 14:13:38.024212  483255 system_pods.go:61] "kindnet-vm84g" [31a16d1f-9be1-46bb-a911-452fc3e27389] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1102 14:13:38.024219  483255 system_pods.go:61] "kube-apiserver-no-preload-150469" [9668b04c-ce5d-4558-b3d1-3ddb2d40c8af] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1102 14:13:38.024227  483255 system_pods.go:61] "kube-controller-manager-no-preload-150469" [5cf5e00d-7b35-495f-b5f2-6c149ee77125] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1102 14:13:38.024233  483255 system_pods.go:61] "kube-proxy-qg9np" [28814102-d017-4d78-8904-c21855b52264] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1102 14:13:38.024240  483255 system_pods.go:61] "kube-scheduler-no-preload-150469" [5785a754-7fb9-46fd-865a-2756355ba605] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1102 14:13:38.024252  483255 system_pods.go:61] "storage-provisioner" [bb34ac47-56c9-416b-944a-90dc162bf553] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1102 14:13:38.024261  483255 system_pods.go:74] duration metric: took 24.923797ms to wait for pod list to return data ...
	I1102 14:13:38.024270  483255 default_sa.go:34] waiting for default service account to be created ...
	I1102 14:13:38.045081  483255 default_sa.go:45] found service account: "default"
	I1102 14:13:38.045111  483255 default_sa.go:55] duration metric: took 20.834007ms for default service account to be created ...
	I1102 14:13:38.045122  483255 system_pods.go:116] waiting for k8s-apps to be running ...
	I1102 14:13:38.049928  483255 system_pods.go:86] 8 kube-system pods found
	I1102 14:13:38.049980  483255 system_pods.go:89] "coredns-66bc5c9577-wkgrq" [90029150-f8ff-484e-a449-fa19206ab6b2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1102 14:13:38.049991  483255 system_pods.go:89] "etcd-no-preload-150469" [a653ec43-d807-4f0b-8111-507615a63de3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1102 14:13:38.050000  483255 system_pods.go:89] "kindnet-vm84g" [31a16d1f-9be1-46bb-a911-452fc3e27389] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1102 14:13:38.050007  483255 system_pods.go:89] "kube-apiserver-no-preload-150469" [9668b04c-ce5d-4558-b3d1-3ddb2d40c8af] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1102 14:13:38.050017  483255 system_pods.go:89] "kube-controller-manager-no-preload-150469" [5cf5e00d-7b35-495f-b5f2-6c149ee77125] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1102 14:13:38.050023  483255 system_pods.go:89] "kube-proxy-qg9np" [28814102-d017-4d78-8904-c21855b52264] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1102 14:13:38.050035  483255 system_pods.go:89] "kube-scheduler-no-preload-150469" [5785a754-7fb9-46fd-865a-2756355ba605] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1102 14:13:38.050042  483255 system_pods.go:89] "storage-provisioner" [bb34ac47-56c9-416b-944a-90dc162bf553] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1102 14:13:38.050053  483255 system_pods.go:126] duration metric: took 4.925407ms to wait for k8s-apps to be running ...
	I1102 14:13:38.050062  483255 system_svc.go:44] waiting for kubelet service to be running ....
	I1102 14:13:38.050119  483255 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1102 14:13:38.085874  483255 system_svc.go:56] duration metric: took 35.801748ms WaitForService to wait for kubelet
	I1102 14:13:38.085902  483255 kubeadm.go:587] duration metric: took 8.781795387s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1102 14:13:38.085919  483255 node_conditions.go:102] verifying NodePressure condition ...
	I1102 14:13:38.092337  483255 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1102 14:13:38.092432  483255 node_conditions.go:123] node cpu capacity is 2
	I1102 14:13:38.092463  483255 node_conditions.go:105] duration metric: took 6.538164ms to run NodePressure ...
	I1102 14:13:38.092516  483255 start.go:242] waiting for startup goroutines ...
	I1102 14:13:38.092551  483255 start.go:247] waiting for cluster config update ...
	I1102 14:13:38.092588  483255 start.go:256] writing updated cluster config ...
	I1102 14:13:38.093259  483255 ssh_runner.go:195] Run: rm -f paused
	I1102 14:13:38.099753  483255 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1102 14:13:38.107031  483255 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-wkgrq" in "kube-system" namespace to be "Ready" or be gone ...
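minikube polls the API for this directly; a roughly equivalent check with kubectl, using the label from the list above, would be (kubectl wait fails when no pod matches, whereas the poller here also accepts the pod being gone):

	kubectl --context no-preload-150469 -n kube-system wait \
	  --for=condition=Ready pod -l k8s-app=kube-dns --timeout=4m0s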
	I1102 14:13:39.659628  485527 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21808-293314/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-955646:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (5.555236688s)
	I1102 14:13:39.659662  485527 kic.go:203] duration metric: took 5.555384095s to extract preloaded images to volume ...
	W1102 14:13:39.659799  485527 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1102 14:13:39.659916  485527 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1102 14:13:39.771097  485527 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-955646 --name embed-certs-955646 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-955646 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-955646 --network embed-certs-955646 --ip 192.168.85.2 --volume embed-certs-955646:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1102 14:13:40.164386  485527 cli_runner.go:164] Run: docker container inspect embed-certs-955646 --format={{.State.Running}}
	I1102 14:13:40.186820  485527 cli_runner.go:164] Run: docker container inspect embed-certs-955646 --format={{.State.Status}}
	I1102 14:13:40.213678  485527 cli_runner.go:164] Run: docker exec embed-certs-955646 stat /var/lib/dpkg/alternatives/iptables
	I1102 14:13:40.271531  485527 oci.go:144] the created container "embed-certs-955646" has a running status.
	I1102 14:13:40.271559  485527 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21808-293314/.minikube/machines/embed-certs-955646/id_rsa...
	I1102 14:13:40.754016  485527 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21808-293314/.minikube/machines/embed-certs-955646/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1102 14:13:40.776203  485527 cli_runner.go:164] Run: docker container inspect embed-certs-955646 --format={{.State.Status}}
	I1102 14:13:40.798300  485527 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1102 14:13:40.798318  485527 kic_runner.go:114] Args: [docker exec --privileged embed-certs-955646 chown docker:docker /home/docker/.ssh/authorized_keys]
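With the key pair installed, the node is reachable like any SSH host on the dynamically published port (33441 for this run, per the dial lines below), as user docker:

	ssh -o StrictHostKeyChecking=no -p 33441 \
	  -i /home/jenkins/minikube-integration/21808-293314/.minikube/machines/embed-certs-955646/id_rsa \
	  docker@127.0.0.1 hostname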
	I1102 14:13:40.866858  485527 cli_runner.go:164] Run: docker container inspect embed-certs-955646 --format={{.State.Status}}
	I1102 14:13:40.892161  485527 machine.go:94] provisionDockerMachine start ...
	I1102 14:13:40.892336  485527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-955646
	I1102 14:13:40.917829  485527 main.go:143] libmachine: Using SSH client type: native
	I1102 14:13:40.918263  485527 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33441 <nil> <nil>}
	I1102 14:13:40.918277  485527 main.go:143] libmachine: About to run SSH command:
	hostname
	I1102 14:13:40.918964  485527 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	W1102 14:13:40.114562  483255 pod_ready.go:104] pod "coredns-66bc5c9577-wkgrq" is not "Ready", error: <nil>
	W1102 14:13:42.114662  483255 pod_ready.go:104] pod "coredns-66bc5c9577-wkgrq" is not "Ready", error: <nil>
	I1102 14:13:44.096722  485527 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-955646
	
	I1102 14:13:44.096751  485527 ubuntu.go:182] provisioning hostname "embed-certs-955646"
	I1102 14:13:44.096819  485527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-955646
	I1102 14:13:44.132342  485527 main.go:143] libmachine: Using SSH client type: native
	I1102 14:13:44.132652  485527 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33441 <nil> <nil>}
	I1102 14:13:44.132664  485527 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-955646 && echo "embed-certs-955646" | sudo tee /etc/hostname
	I1102 14:13:44.315196  485527 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-955646
	
	I1102 14:13:44.315395  485527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-955646
	I1102 14:13:44.340523  485527 main.go:143] libmachine: Using SSH client type: native
	I1102 14:13:44.340819  485527 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33441 <nil> <nil>}
	I1102 14:13:44.340841  485527 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-955646' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-955646/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-955646' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1102 14:13:44.511822  485527 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1102 14:13:44.511855  485527 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21808-293314/.minikube CaCertPath:/home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21808-293314/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21808-293314/.minikube}
	I1102 14:13:44.511876  485527 ubuntu.go:190] setting up certificates
	I1102 14:13:44.511885  485527 provision.go:84] configureAuth start
	I1102 14:13:44.511966  485527 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-955646
	I1102 14:13:44.535429  485527 provision.go:143] copyHostCerts
	I1102 14:13:44.535504  485527 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-293314/.minikube/ca.pem, removing ...
	I1102 14:13:44.535519  485527 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-293314/.minikube/ca.pem
	I1102 14:13:44.535647  485527 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21808-293314/.minikube/ca.pem (1082 bytes)
	I1102 14:13:44.535757  485527 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-293314/.minikube/cert.pem, removing ...
	I1102 14:13:44.535772  485527 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-293314/.minikube/cert.pem
	I1102 14:13:44.535805  485527 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21808-293314/.minikube/cert.pem (1123 bytes)
	I1102 14:13:44.535864  485527 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-293314/.minikube/key.pem, removing ...
	I1102 14:13:44.535873  485527 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-293314/.minikube/key.pem
	I1102 14:13:44.535897  485527 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21808-293314/.minikube/key.pem (1675 bytes)
	I1102 14:13:44.535956  485527 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21808-293314/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca-key.pem org=jenkins.embed-certs-955646 san=[127.0.0.1 192.168.85.2 embed-certs-955646 localhost minikube]
	I1102 14:13:44.880612  485527 provision.go:177] copyRemoteCerts
	I1102 14:13:44.880683  485527 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1102 14:13:44.880736  485527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-955646
	I1102 14:13:44.899640  485527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/embed-certs-955646/id_rsa Username:docker}
	I1102 14:13:45.081712  485527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1102 14:13:45.156422  485527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1102 14:13:45.184523  485527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1102 14:13:45.212890  485527 provision.go:87] duration metric: took 700.973875ms to configureAuth
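copyRemoteCerts stages three files under /etc/docker on the node; what each one is:

	# On the node (e.g. via the SSH session above):
	# ca.pem         - the minikube CA from .minikube/certs
	# server.pem     - machine server cert; SANs per the provision line above:
	#                  127.0.0.1 192.168.85.2 embed-certs-955646 localhost minikube
	# server-key.pem - private key for server.pem
	ls -l /etc/docker/ca.pem /etc/docker/server.pem /etc/docker/server-key.pem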
	I1102 14:13:45.212924  485527 ubuntu.go:206] setting minikube options for container-runtime
	I1102 14:13:45.213191  485527 config.go:182] Loaded profile config "embed-certs-955646": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 14:13:45.213319  485527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-955646
	I1102 14:13:45.258902  485527 main.go:143] libmachine: Using SSH client type: native
	I1102 14:13:45.259294  485527 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33441 <nil> <nil>}
	I1102 14:13:45.259317  485527 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1102 14:13:45.669762  485527 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1102 14:13:45.669783  485527 machine.go:97] duration metric: took 4.777602507s to provisionDockerMachine
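The CRIO_MINIKUBE_OPTIONS write a few lines up only matters if the crio unit sources /etc/sysconfig/crio.minikube, which the kicbase image's unit does via an EnvironmentFile= directive (an assumption about the image; verify on the node):

	cat /etc/sysconfig/crio.minikube
	systemctl cat crio | grep -i -A1 EnvironmentFile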
	I1102 14:13:45.669793  485527 client.go:176] duration metric: took 12.485507762s to LocalClient.Create
	I1102 14:13:45.669808  485527 start.go:167] duration metric: took 12.485567512s to libmachine.API.Create "embed-certs-955646"
	I1102 14:13:45.669815  485527 start.go:293] postStartSetup for "embed-certs-955646" (driver="docker")
	I1102 14:13:45.669825  485527 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1102 14:13:45.669903  485527 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1102 14:13:45.669945  485527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-955646
	I1102 14:13:45.704545  485527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/embed-certs-955646/id_rsa Username:docker}
	I1102 14:13:45.819475  485527 ssh_runner.go:195] Run: cat /etc/os-release
	I1102 14:13:45.823796  485527 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1102 14:13:45.823829  485527 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1102 14:13:45.823848  485527 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-293314/.minikube/addons for local assets ...
	I1102 14:13:45.823905  485527 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-293314/.minikube/files for local assets ...
	I1102 14:13:45.823999  485527 filesync.go:149] local asset: /home/jenkins/minikube-integration/21808-293314/.minikube/files/etc/ssl/certs/2951742.pem -> 2951742.pem in /etc/ssl/certs
	I1102 14:13:45.824106  485527 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1102 14:13:45.838551  485527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/files/etc/ssl/certs/2951742.pem --> /etc/ssl/certs/2951742.pem (1708 bytes)
	I1102 14:13:45.868070  485527 start.go:296] duration metric: took 198.24085ms for postStartSetup
	I1102 14:13:45.868605  485527 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-955646
	I1102 14:13:45.894956  485527 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/embed-certs-955646/config.json ...
	I1102 14:13:45.895259  485527 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1102 14:13:45.895301  485527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-955646
	I1102 14:13:45.916052  485527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/embed-certs-955646/id_rsa Username:docker}
	I1102 14:13:46.027279  485527 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1102 14:13:46.046247  485527 start.go:128] duration metric: took 12.865525601s to createHost
	I1102 14:13:46.046275  485527 start.go:83] releasing machines lock for "embed-certs-955646", held for 12.865680541s
	I1102 14:13:46.046359  485527 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-955646
	I1102 14:13:46.081171  485527 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/295174.pem (1338 bytes)
	W1102 14:13:46.081252  485527 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-293314/.minikube/certs/295174_empty.pem, impossibly tiny 0 bytes
	I1102 14:13:46.081262  485527 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca-key.pem (1679 bytes)
	I1102 14:13:46.081288  485527 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem (1082 bytes)
	I1102 14:13:46.081313  485527 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/cert.pem (1123 bytes)
	I1102 14:13:46.081340  485527 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/key.pem (1675 bytes)
	I1102 14:13:46.081383  485527 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/files/etc/ssl/certs/2951742.pem (1708 bytes)
	I1102 14:13:46.081447  485527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1102 14:13:46.081499  485527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-955646
	I1102 14:13:46.124257  485527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/embed-certs-955646/id_rsa Username:docker}
	I1102 14:13:46.253678  485527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/certs/295174.pem --> /usr/share/ca-certificates/295174.pem (1338 bytes)
	I1102 14:13:46.274377  485527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/files/etc/ssl/certs/2951742.pem --> /usr/share/ca-certificates/2951742.pem (1708 bytes)
	I1102 14:13:46.294274  485527 ssh_runner.go:195] Run: openssl version
	I1102 14:13:46.304154  485527 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1102 14:13:46.314072  485527 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1102 14:13:46.318734  485527 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  2 13:13 /usr/share/ca-certificates/minikubeCA.pem
	I1102 14:13:46.318795  485527 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1102 14:13:46.385977  485527 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1102 14:13:46.396087  485527 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/295174.pem && ln -fs /usr/share/ca-certificates/295174.pem /etc/ssl/certs/295174.pem"
	I1102 14:13:46.408077  485527 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/295174.pem
	I1102 14:13:46.415166  485527 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  2 13:20 /usr/share/ca-certificates/295174.pem
	I1102 14:13:46.415292  485527 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/295174.pem
	I1102 14:13:46.471332  485527 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/295174.pem /etc/ssl/certs/51391683.0"
	I1102 14:13:46.487316  485527 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2951742.pem && ln -fs /usr/share/ca-certificates/2951742.pem /etc/ssl/certs/2951742.pem"
	I1102 14:13:46.498240  485527 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2951742.pem
	I1102 14:13:46.502673  485527 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  2 13:20 /usr/share/ca-certificates/2951742.pem
	I1102 14:13:46.502819  485527 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2951742.pem
	I1102 14:13:46.570448  485527 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2951742.pem /etc/ssl/certs/3ec20f2e.0"
	I1102 14:13:46.585588  485527 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-certificates >/dev/null 2>&1 && sudo update-ca-certificates || true"
	I1102 14:13:46.593988  485527 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-trust >/dev/null 2>&1 && sudo update-ca-trust extract || true"
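The <hash>.0 symlink names in this block are not arbitrary: OpenSSL looks up CAs in /etc/ssl/certs by subject-name hash, which is exactly what openssl x509 -hash prints. Reproducing the minikubeCA link by hand:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	echo "$h"   # b5213941 for this CA, matching the symlink above
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"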
	I1102 14:13:46.597850  485527 ssh_runner.go:195] Run: cat /version.json
	I1102 14:13:46.597995  485527 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1102 14:13:46.715759  485527 ssh_runner.go:195] Run: systemctl --version
	I1102 14:13:46.723822  485527 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1102 14:13:46.802908  485527 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1102 14:13:46.811607  485527 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1102 14:13:46.811729  485527 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1102 14:13:46.848873  485527 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1102 14:13:46.848945  485527 start.go:496] detecting cgroup driver to use...
	I1102 14:13:46.848992  485527 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1102 14:13:46.849082  485527 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1102 14:13:46.878775  485527 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1102 14:13:46.894133  485527 docker.go:218] disabling cri-docker service (if available) ...
	I1102 14:13:46.894246  485527 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1102 14:13:46.912886  485527 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1102 14:13:46.934345  485527 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1102 14:13:47.137669  485527 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1102 14:13:47.321202  485527 docker.go:234] disabling docker service ...
	I1102 14:13:47.321323  485527 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1102 14:13:47.350521  485527 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1102 14:13:47.373484  485527 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1102 14:13:47.523766  485527 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1102 14:13:47.678543  485527 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1102 14:13:47.694291  485527 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1102 14:13:47.709924  485527 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1102 14:13:47.709989  485527 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 14:13:47.720093  485527 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1102 14:13:47.720159  485527 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 14:13:47.728942  485527 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 14:13:47.737693  485527 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 14:13:47.746684  485527 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1102 14:13:47.755032  485527 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 14:13:47.763761  485527 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 14:13:47.777232  485527 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 14:13:47.785926  485527 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1102 14:13:47.794373  485527 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1102 14:13:47.802880  485527 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 14:13:47.946488  485527 ssh_runner.go:195] Run: sudo systemctl restart crio
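All of the sed edits above land in the same drop-in; after the restart the effective keys can be spot-checked with:

	grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf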
	I1102 14:13:48.161690  485527 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1102 14:13:48.161815  485527 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1102 14:13:48.166431  485527 start.go:564] Will wait 60s for crictl version
	I1102 14:13:48.166550  485527 ssh_runner.go:195] Run: which crictl
	I1102 14:13:48.172927  485527 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1102 14:13:48.220760  485527 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1102 14:13:48.220909  485527 ssh_runner.go:195] Run: crio --version
	I1102 14:13:48.261823  485527 ssh_runner.go:195] Run: crio --version
	I1102 14:13:48.314313  485527 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1102 14:13:44.122991  483255 pod_ready.go:104] pod "coredns-66bc5c9577-wkgrq" is not "Ready", error: <nil>
	W1102 14:13:46.130214  483255 pod_ready.go:104] pod "coredns-66bc5c9577-wkgrq" is not "Ready", error: <nil>
	W1102 14:13:48.613483  483255 pod_ready.go:104] pod "coredns-66bc5c9577-wkgrq" is not "Ready", error: <nil>
	I1102 14:13:48.317301  485527 cli_runner.go:164] Run: docker network inspect embed-certs-955646 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1102 14:13:48.337443  485527 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1102 14:13:48.344354  485527 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
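Note the shape of that /etc/hosts rewrite: filter out the stale line, append the fresh mapping, then cp over the original. cp rather than mv matters inside a container, where /etc/hosts is a bind mount that has to be written in place:

	{ grep -v $'\thost.minikube.internal$' /etc/hosts
	  printf '192.168.85.1\thost.minikube.internal\n'
	} > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts   # cp writes through the bind mount; mv would not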
	I1102 14:13:48.356530  485527 kubeadm.go:884] updating cluster {Name:embed-certs-955646 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-955646 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1102 14:13:48.356655  485527 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1102 14:13:48.356718  485527 ssh_runner.go:195] Run: sudo crictl images --output json
	I1102 14:13:48.399263  485527 crio.go:514] all images are preloaded for cri-o runtime.
	I1102 14:13:48.399290  485527 crio.go:433] Images already preloaded, skipping extraction
	I1102 14:13:48.399350  485527 ssh_runner.go:195] Run: sudo crictl images --output json
	I1102 14:13:48.430640  485527 crio.go:514] all images are preloaded for cri-o runtime.
	I1102 14:13:48.430666  485527 cache_images.go:86] Images are preloaded, skipping loading
	I1102 14:13:48.430674  485527 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1102 14:13:48.430827  485527 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-955646 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-955646 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
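Once this unit and its 10-kubeadm.conf drop-in are copied over (the scp lines below), the override can be confirmed on the node with:

	sudo systemctl daemon-reload
	systemctl cat kubelet | grep -e ExecStart -e node-ip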
	I1102 14:13:48.430925  485527 ssh_runner.go:195] Run: crio config
	I1102 14:13:48.519494  485527 cni.go:84] Creating CNI manager for ""
	I1102 14:13:48.519515  485527 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1102 14:13:48.519532  485527 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1102 14:13:48.519557  485527 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-955646 NodeName:embed-certs-955646 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1102 14:13:48.519692  485527 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-955646"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
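A sketch of sanity-checking the generated manifest before the real bootstrap; minikube drives kubeadm itself (with extra flags such as --ignore-preflight-errors), so this is only an approximation, using the path the config is uploaded to below:

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml.new --dry-run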
	
	I1102 14:13:48.519766  485527 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1102 14:13:48.530670  485527 binaries.go:44] Found k8s binaries, skipping transfer
	I1102 14:13:48.530740  485527 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1102 14:13:48.539073  485527 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1102 14:13:48.552553  485527 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1102 14:13:48.567884  485527 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1102 14:13:48.585599  485527 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1102 14:13:48.590154  485527 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1102 14:13:48.600414  485527 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 14:13:48.760295  485527 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1102 14:13:48.781817  485527 certs.go:69] Setting up /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/embed-certs-955646 for IP: 192.168.85.2
	I1102 14:13:48.781854  485527 certs.go:195] generating shared ca certs ...
	I1102 14:13:48.781888  485527 certs.go:227] acquiring lock for ca certs: {Name:mkead50075949a3cdc798f9c0149a2bc2638cbbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 14:13:48.782059  485527 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21808-293314/.minikube/ca.key
	I1102 14:13:48.782131  485527 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21808-293314/.minikube/proxy-client-ca.key
	I1102 14:13:48.782146  485527 certs.go:257] generating profile certs ...
	I1102 14:13:48.782231  485527 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/embed-certs-955646/client.key
	I1102 14:13:48.782266  485527 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/embed-certs-955646/client.crt with IP's: []
	I1102 14:13:49.007900  485527 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/embed-certs-955646/client.crt ...
	I1102 14:13:49.007935  485527 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/embed-certs-955646/client.crt: {Name:mke0ea81780bcd9a7eb9a3d40c551704279821ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 14:13:49.008198  485527 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/embed-certs-955646/client.key ...
	I1102 14:13:49.008219  485527 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/embed-certs-955646/client.key: {Name:mk067a7e80433a57e6dd2da85af3ab351d6aaca5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 14:13:49.008376  485527 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/embed-certs-955646/apiserver.key.07905a59
	I1102 14:13:49.008399  485527 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/embed-certs-955646/apiserver.crt.07905a59 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1102 14:13:49.598864  485527 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/embed-certs-955646/apiserver.crt.07905a59 ...
	I1102 14:13:49.598899  485527 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/embed-certs-955646/apiserver.crt.07905a59: {Name:mk154950a39c0088eff33710b0924263e0dcd771 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 14:13:49.599071  485527 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/embed-certs-955646/apiserver.key.07905a59 ...
	I1102 14:13:49.599088  485527 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/embed-certs-955646/apiserver.key.07905a59: {Name:mk0452e6895b8e71589f0005962ff1e30a2cebcb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 14:13:49.599168  485527 certs.go:382] copying /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/embed-certs-955646/apiserver.crt.07905a59 -> /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/embed-certs-955646/apiserver.crt
	I1102 14:13:49.599270  485527 certs.go:386] copying /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/embed-certs-955646/apiserver.key.07905a59 -> /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/embed-certs-955646/apiserver.key
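To confirm the serving cert picked up the SAN set requested above (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.85.2):

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/embed-certs-955646/apiserver.crt \
	  | grep -A1 'Subject Alternative Name'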
	I1102 14:13:49.599339  485527 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/embed-certs-955646/proxy-client.key
	I1102 14:13:49.599359  485527 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/embed-certs-955646/proxy-client.crt with IP's: []
	I1102 14:13:50.404316  485527 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/embed-certs-955646/proxy-client.crt ...
	I1102 14:13:50.404350  485527 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/embed-certs-955646/proxy-client.crt: {Name:mkc94e7a5d26be4d93809990afbb05cbf5aed186 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 14:13:50.404552  485527 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/embed-certs-955646/proxy-client.key ...
	I1102 14:13:50.404570  485527 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/embed-certs-955646/proxy-client.key: {Name:mkec4ac48faf3b53b73001f2fa818ea0bc0944a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 14:13:50.404826  485527 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/295174.pem (1338 bytes)
	W1102 14:13:50.404886  485527 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-293314/.minikube/certs/295174_empty.pem, impossibly tiny 0 bytes
	I1102 14:13:50.404904  485527 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca-key.pem (1679 bytes)
	I1102 14:13:50.404942  485527 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem (1082 bytes)
	I1102 14:13:50.404988  485527 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/cert.pem (1123 bytes)
	I1102 14:13:50.405019  485527 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/key.pem (1675 bytes)
	I1102 14:13:50.405085  485527 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/files/etc/ssl/certs/2951742.pem (1708 bytes)
	I1102 14:13:50.405678  485527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1102 14:13:50.437375  485527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1102 14:13:50.459140  485527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1102 14:13:50.477441  485527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1102 14:13:50.495259  485527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/embed-certs-955646/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1102 14:13:50.513223  485527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/embed-certs-955646/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1102 14:13:50.531962  485527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/embed-certs-955646/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1102 14:13:50.550707  485527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/embed-certs-955646/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1102 14:13:50.569897  485527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/files/etc/ssl/certs/2951742.pem --> /usr/share/ca-certificates/2951742.pem (1708 bytes)
	I1102 14:13:50.588078  485527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1102 14:13:50.606514  485527 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/certs/295174.pem --> /usr/share/ca-certificates/295174.pem (1338 bytes)
	I1102 14:13:50.631042  485527 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1102 14:13:50.652211  485527 ssh_runner.go:195] Run: openssl version
	I1102 14:13:50.668641  485527 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2951742.pem && ln -fs /usr/share/ca-certificates/2951742.pem /etc/ssl/certs/2951742.pem"
	I1102 14:13:50.687630  485527 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2951742.pem
	I1102 14:13:50.692973  485527 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  2 13:20 /usr/share/ca-certificates/2951742.pem
	I1102 14:13:50.693082  485527 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2951742.pem
	I1102 14:13:50.773517  485527 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2951742.pem /etc/ssl/certs/3ec20f2e.0"
	I1102 14:13:50.786859  485527 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1102 14:13:50.803326  485527 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1102 14:13:50.807607  485527 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  2 13:13 /usr/share/ca-certificates/minikubeCA.pem
	I1102 14:13:50.807722  485527 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1102 14:13:50.852283  485527 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1102 14:13:50.865063  485527 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/295174.pem && ln -fs /usr/share/ca-certificates/295174.pem /etc/ssl/certs/295174.pem"
	I1102 14:13:50.875223  485527 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/295174.pem
	I1102 14:13:50.881253  485527 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  2 13:20 /usr/share/ca-certificates/295174.pem
	I1102 14:13:50.881355  485527 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/295174.pem
	I1102 14:13:50.923996  485527 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/295174.pem /etc/ssl/certs/51391683.0"
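	
	The three hash-and-symlink rounds above are how minikube registers each CA with the node's OpenSSL trust store: "openssl x509 -hash" prints the subject-name hash, and a "<hash>.0" symlink under /etc/ssl/certs is what OpenSSL's directory lookup actually resolves. A minimal sketch of one round, using only paths taken from the log:
	
	    # print the subject-name hash OpenSSL uses for trust-directory lookups
	    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/295174.pem)
	    # link it under /etc/ssl/certs; the ".0" suffix disambiguates hash collisions
	    sudo ln -fs /etc/ssl/certs/295174.pem "/etc/ssl/certs/${hash}.0"
	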
	I1102 14:13:50.932177  485527 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1102 14:13:50.936745  485527 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1102 14:13:50.936838  485527 kubeadm.go:401] StartCluster: {Name:embed-certs-955646 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-955646 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1102 14:13:50.936930  485527 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1102 14:13:50.937038  485527 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1102 14:13:50.969934  485527 cri.go:89] found id: ""
	I1102 14:13:50.970030  485527 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1102 14:13:50.979941  485527 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1102 14:13:50.988257  485527 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1102 14:13:50.988366  485527 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1102 14:13:50.998879  485527 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1102 14:13:50.998904  485527 kubeadm.go:158] found existing configuration files:
	
	I1102 14:13:50.998988  485527 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1102 14:13:51.009365  485527 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1102 14:13:51.009482  485527 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1102 14:13:51.019251  485527 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1102 14:13:51.029248  485527 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1102 14:13:51.029358  485527 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1102 14:13:51.038689  485527 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1102 14:13:51.048425  485527 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1102 14:13:51.048546  485527 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1102 14:13:51.056911  485527 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1102 14:13:51.067183  485527 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1102 14:13:51.067278  485527 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1102 14:13:51.076451  485527 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1102 14:13:51.137592  485527 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1102 14:13:51.138021  485527 kubeadm.go:319] [preflight] Running pre-flight checks
	I1102 14:13:51.175430  485527 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1102 14:13:51.175556  485527 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1102 14:13:51.175630  485527 kubeadm.go:319] OS: Linux
	I1102 14:13:51.175704  485527 kubeadm.go:319] CGROUPS_CPU: enabled
	I1102 14:13:51.175781  485527 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1102 14:13:51.175856  485527 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1102 14:13:51.175958  485527 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1102 14:13:51.176049  485527 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1102 14:13:51.176134  485527 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1102 14:13:51.176209  485527 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1102 14:13:51.176293  485527 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1102 14:13:51.176395  485527 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1102 14:13:51.326053  485527 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1102 14:13:51.326189  485527 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1102 14:13:51.326295  485527 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1102 14:13:51.337079  485527 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1102 14:13:51.344903  485527 out.go:252]   - Generating certificates and keys ...
	I1102 14:13:51.345008  485527 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1102 14:13:51.345086  485527 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1102 14:13:52.030135  485527 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1102 14:13:52.678119  485527 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	W1102 14:13:50.619361  483255 pod_ready.go:104] pod "coredns-66bc5c9577-wkgrq" is not "Ready", error: <nil>
	W1102 14:13:53.115535  483255 pod_ready.go:104] pod "coredns-66bc5c9577-wkgrq" is not "Ready", error: <nil>
	I1102 14:13:54.132407  485527 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1102 14:13:54.533062  485527 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1102 14:13:55.059909  485527 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1102 14:13:55.060204  485527 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-955646 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1102 14:13:55.560790  485527 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1102 14:13:55.561158  485527 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-955646 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1102 14:13:55.686926  485527 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1102 14:13:56.206274  485527 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1102 14:13:57.245537  485527 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1102 14:13:57.245824  485527 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	W1102 14:13:55.620686  483255 pod_ready.go:104] pod "coredns-66bc5c9577-wkgrq" is not "Ready", error: <nil>
	W1102 14:13:58.113081  483255 pod_ready.go:104] pod "coredns-66bc5c9577-wkgrq" is not "Ready", error: <nil>
	I1102 14:13:58.334585  485527 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1102 14:13:58.566559  485527 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1102 14:13:58.903058  485527 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1102 14:13:58.985656  485527 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1102 14:13:59.822095  485527 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1102 14:13:59.823366  485527 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1102 14:13:59.827645  485527 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1102 14:13:59.831369  485527 out.go:252]   - Booting up control plane ...
	I1102 14:13:59.831486  485527 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1102 14:13:59.831576  485527 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1102 14:13:59.831650  485527 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1102 14:13:59.848758  485527 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1102 14:13:59.848885  485527 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1102 14:13:59.857241  485527 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1102 14:13:59.857735  485527 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1102 14:13:59.857986  485527 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1102 14:13:59.999093  485527 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1102 14:13:59.999228  485527 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1102 14:14:02.002548  485527 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 2.00634763s
	I1102 14:14:02.006301  485527 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1102 14:14:02.006405  485527 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1102 14:14:02.006499  485527 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1102 14:14:02.006993  485527 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1102 14:14:00.119052  483255 pod_ready.go:104] pod "coredns-66bc5c9577-wkgrq" is not "Ready", error: <nil>
	W1102 14:14:02.612930  483255 pod_ready.go:104] pod "coredns-66bc5c9577-wkgrq" is not "Ready", error: <nil>
	I1102 14:14:05.444260  485527 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.437454897s
	I1102 14:14:07.265450  485527 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.25918457s
	W1102 14:14:04.614096  483255 pod_ready.go:104] pod "coredns-66bc5c9577-wkgrq" is not "Ready", error: <nil>
	W1102 14:14:07.114549  483255 pod_ready.go:104] pod "coredns-66bc5c9577-wkgrq" is not "Ready", error: <nil>
	I1102 14:14:09.010204  485527 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 7.003792482s
	I1102 14:14:09.047827  485527 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1102 14:14:09.074599  485527 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1102 14:14:09.088050  485527 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1102 14:14:09.088286  485527 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-955646 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1102 14:14:09.106382  485527 kubeadm.go:319] [bootstrap-token] Using token: jl4a55.p32103f3zsv15pzq
	I1102 14:14:09.109279  485527 out.go:252]   - Configuring RBAC rules ...
	I1102 14:14:09.109411  485527 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1102 14:14:09.124296  485527 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1102 14:14:09.134062  485527 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1102 14:14:09.139060  485527 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1102 14:14:09.145815  485527 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1102 14:14:09.150388  485527 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1102 14:14:09.429326  485527 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1102 14:14:09.880079  485527 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1102 14:14:10.429863  485527 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1102 14:14:10.431916  485527 kubeadm.go:319] 
	I1102 14:14:10.432002  485527 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1102 14:14:10.432013  485527 kubeadm.go:319] 
	I1102 14:14:10.432094  485527 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1102 14:14:10.432103  485527 kubeadm.go:319] 
	I1102 14:14:10.432131  485527 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1102 14:14:10.432197  485527 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1102 14:14:10.432257  485527 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1102 14:14:10.432267  485527 kubeadm.go:319] 
	I1102 14:14:10.432325  485527 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1102 14:14:10.432332  485527 kubeadm.go:319] 
	I1102 14:14:10.432382  485527 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1102 14:14:10.432387  485527 kubeadm.go:319] 
	I1102 14:14:10.432441  485527 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1102 14:14:10.432520  485527 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1102 14:14:10.432591  485527 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1102 14:14:10.432595  485527 kubeadm.go:319] 
	I1102 14:14:10.432683  485527 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1102 14:14:10.432764  485527 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1102 14:14:10.432769  485527 kubeadm.go:319] 
	I1102 14:14:10.432856  485527 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token jl4a55.p32103f3zsv15pzq \
	I1102 14:14:10.432981  485527 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:bd4a1f3bddc85f3fc83315ad33165a30aa1cba7ce55898ef9dad8dcc7e8d0eec \
	I1102 14:14:10.433004  485527 kubeadm.go:319] 	--control-plane 
	I1102 14:14:10.433009  485527 kubeadm.go:319] 
	I1102 14:14:10.433098  485527 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1102 14:14:10.433102  485527 kubeadm.go:319] 
	I1102 14:14:10.433188  485527 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token jl4a55.p32103f3zsv15pzq \
	I1102 14:14:10.433295  485527 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:bd4a1f3bddc85f3fc83315ad33165a30aa1cba7ce55898ef9dad8dcc7e8d0eec 
	I1102 14:14:10.436223  485527 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1102 14:14:10.436475  485527 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1102 14:14:10.436592  485527 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1102 14:14:10.436621  485527 cni.go:84] Creating CNI manager for ""
	I1102 14:14:10.436633  485527 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1102 14:14:10.439792  485527 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1102 14:14:10.442658  485527 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1102 14:14:10.446589  485527 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1102 14:14:10.446609  485527 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1102 14:14:10.464164  485527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
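	
	The CNI manifest is applied with the cluster's own kubectl binary against the in-node kubeconfig; once kindnet starts, it writes /etc/cni/net.d/10-kindnet.conflist, which is the file the CRI-O monitoring events later in this report react to. A hypothetical pair of sanity checks for this step (the "app=kindnet" label is an assumption, not taken from this log):
	
	    # confirm kindnet has written its CNI config on the node
	    sudo ls /etc/cni/net.d/
	    # confirm the kindnet pods came up, using the binary and kubeconfig paths from the log
	    sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	      get pods -n kube-system -l app=kindnet
	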
	I1102 14:14:11.209340  485527 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1102 14:14:11.209475  485527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 14:14:11.209541  485527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-955646 minikube.k8s.io/updated_at=2025_11_02T14_14_11_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=170a9221ec214abbddb4c7cdac340516a92b239a minikube.k8s.io/name=embed-certs-955646 minikube.k8s.io/primary=true
	I1102 14:14:11.368263  485527 ops.go:34] apiserver oom_adj: -16
	I1102 14:14:11.368361  485527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 14:14:11.869342  485527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 14:14:12.368417  485527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1102 14:14:09.613896  483255 pod_ready.go:104] pod "coredns-66bc5c9577-wkgrq" is not "Ready", error: <nil>
	I1102 14:14:12.117080  483255 pod_ready.go:94] pod "coredns-66bc5c9577-wkgrq" is "Ready"
	I1102 14:14:12.117105  483255 pod_ready.go:86] duration metric: took 34.010006579s for pod "coredns-66bc5c9577-wkgrq" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 14:14:12.122404  483255 pod_ready.go:83] waiting for pod "etcd-no-preload-150469" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 14:14:12.127160  483255 pod_ready.go:94] pod "etcd-no-preload-150469" is "Ready"
	I1102 14:14:12.127189  483255 pod_ready.go:86] duration metric: took 4.76092ms for pod "etcd-no-preload-150469" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 14:14:12.129652  483255 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-150469" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 14:14:12.134401  483255 pod_ready.go:94] pod "kube-apiserver-no-preload-150469" is "Ready"
	I1102 14:14:12.134428  483255 pod_ready.go:86] duration metric: took 4.74716ms for pod "kube-apiserver-no-preload-150469" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 14:14:12.136544  483255 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-150469" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 14:14:12.310450  483255 pod_ready.go:94] pod "kube-controller-manager-no-preload-150469" is "Ready"
	I1102 14:14:12.310515  483255 pod_ready.go:86] duration metric: took 173.941953ms for pod "kube-controller-manager-no-preload-150469" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 14:14:12.510874  483255 pod_ready.go:83] waiting for pod "kube-proxy-qg9np" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 14:14:12.910918  483255 pod_ready.go:94] pod "kube-proxy-qg9np" is "Ready"
	I1102 14:14:12.910995  483255 pod_ready.go:86] duration metric: took 400.095019ms for pod "kube-proxy-qg9np" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 14:14:13.111501  483255 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-150469" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 14:14:13.510866  483255 pod_ready.go:94] pod "kube-scheduler-no-preload-150469" is "Ready"
	I1102 14:14:13.510895  483255 pod_ready.go:86] duration metric: took 399.363941ms for pod "kube-scheduler-no-preload-150469" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 14:14:13.510909  483255 pod_ready.go:40] duration metric: took 35.41107114s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1102 14:14:13.571824  483255 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1102 14:14:13.576781  483255 out.go:179] * Done! kubectl is now configured to use "no-preload-150469" cluster and "default" namespace by default
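	
	The "(minor skew: 1)" note above refers to kubectl's supported version-skew window: a client may run one minor version behind or ahead of the API server, so kubectl 1.33.2 against cluster 1.34.1 only warrants an informational message, not a failure. A sketch of how to read both versions side by side:
	
	    # client and server gitVersion; skew beyond one minor version is unsupported
	    kubectl version --output=yaml | grep gitVersion
	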
	I1102 14:14:12.869280  485527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 14:14:13.368470  485527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 14:14:13.868538  485527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 14:14:14.369286  485527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 14:14:14.869265  485527 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 14:14:14.992444  485527 kubeadm.go:1114] duration metric: took 3.783010392s to wait for elevateKubeSystemPrivileges
	I1102 14:14:14.992478  485527 kubeadm.go:403] duration metric: took 24.055643897s to StartCluster
	I1102 14:14:14.992495  485527 settings.go:142] acquiring lock: {Name:mk95f66b3b15e63f58f8c9085c1ffe67cc396dc1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 14:14:14.992559  485527 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21808-293314/kubeconfig
	I1102 14:14:14.993976  485527 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-293314/kubeconfig: {Name:mke5a65554da8fc0fd6a2ea60bed899d5b38ce09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 14:14:14.994223  485527 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1102 14:14:14.994321  485527 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1102 14:14:14.994587  485527 config.go:182] Loaded profile config "embed-certs-955646": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 14:14:14.994810  485527 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1102 14:14:14.994893  485527 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-955646"
	I1102 14:14:14.994909  485527 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-955646"
	I1102 14:14:14.994935  485527 host.go:66] Checking if "embed-certs-955646" exists ...
	I1102 14:14:14.995466  485527 cli_runner.go:164] Run: docker container inspect embed-certs-955646 --format={{.State.Status}}
	I1102 14:14:14.995806  485527 addons.go:70] Setting default-storageclass=true in profile "embed-certs-955646"
	I1102 14:14:14.995832  485527 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-955646"
	I1102 14:14:14.996115  485527 cli_runner.go:164] Run: docker container inspect embed-certs-955646 --format={{.State.Status}}
	I1102 14:14:14.998576  485527 out.go:179] * Verifying Kubernetes components...
	I1102 14:14:15.002840  485527 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 14:14:15.050454  485527 addons.go:239] Setting addon default-storageclass=true in "embed-certs-955646"
	I1102 14:14:15.050500  485527 host.go:66] Checking if "embed-certs-955646" exists ...
	I1102 14:14:15.050971  485527 cli_runner.go:164] Run: docker container inspect embed-certs-955646 --format={{.State.Status}}
	I1102 14:14:15.061892  485527 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1102 14:14:15.065339  485527 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1102 14:14:15.065373  485527 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1102 14:14:15.065479  485527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-955646
	I1102 14:14:15.097189  485527 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1102 14:14:15.097214  485527 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1102 14:14:15.097280  485527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-955646
	I1102 14:14:15.128393  485527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/embed-certs-955646/id_rsa Username:docker}
	I1102 14:14:15.133834  485527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33441 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/embed-certs-955646/id_rsa Username:docker}
	I1102 14:14:15.416232  485527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1102 14:14:15.416480  485527 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1102 14:14:15.423571  485527 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1102 14:14:15.484112  485527 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1102 14:14:16.165176  485527 start.go:1013] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1102 14:14:16.167708  485527 node_ready.go:35] waiting up to 6m0s for node "embed-certs-955646" to be "Ready" ...
	I1102 14:14:16.208363  485527 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1102 14:14:16.211371  485527 addons.go:515] duration metric: took 1.216551483s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1102 14:14:16.669214  485527 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-955646" context rescaled to 1 replicas
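	
	The rescale logged above trims the freshly bootstrapped cluster from kubeadm's default CoreDNS replica count down to one, which is enough for a single-node profile. The equivalent manual operation would be roughly:
	
	    # scale the kube-system CoreDNS deployment to a single replica, as kapi.go does here
	    kubectl -n kube-system scale deployment coredns --replicas=1
	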
	W1102 14:14:18.171000  485527 node_ready.go:57] node "embed-certs-955646" has "Ready":"False" status (will retry)
	W1102 14:14:20.672754  485527 node_ready.go:57] node "embed-certs-955646" has "Ready":"False" status (will retry)
	W1102 14:14:23.170811  485527 node_ready.go:57] node "embed-certs-955646" has "Ready":"False" status (will retry)
	W1102 14:14:25.171549  485527 node_ready.go:57] node "embed-certs-955646" has "Ready":"False" status (will retry)
	W1102 14:14:27.671568  485527 node_ready.go:57] node "embed-certs-955646" has "Ready":"False" status (will retry)
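	
	The retry loop above polls the node's Ready condition until it flips to True or the 6m0s budget from start.go expires. A hypothetical kubectl equivalent of each probe:
	
	    # prints "True" once the kubelet reports Ready; node_ready.go watches the same condition
	    kubectl get node embed-certs-955646 \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	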
	
	
	==> CRI-O <==
	Nov 02 14:14:17 no-preload-150469 crio[680]: time="2025-11-02T14:14:17.433541166Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 02 14:14:17 no-preload-150469 crio[680]: time="2025-11-02T14:14:17.437152973Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 02 14:14:17 no-preload-150469 crio[680]: time="2025-11-02T14:14:17.437356815Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 02 14:14:17 no-preload-150469 crio[680]: time="2025-11-02T14:14:17.437510195Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 02 14:14:17 no-preload-150469 crio[680]: time="2025-11-02T14:14:17.442962173Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 02 14:14:17 no-preload-150469 crio[680]: time="2025-11-02T14:14:17.442998161Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 02 14:14:17 no-preload-150469 crio[680]: time="2025-11-02T14:14:17.443019462Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 02 14:14:17 no-preload-150469 crio[680]: time="2025-11-02T14:14:17.446099186Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 02 14:14:17 no-preload-150469 crio[680]: time="2025-11-02T14:14:17.446133771Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 02 14:14:17 no-preload-150469 crio[680]: time="2025-11-02T14:14:17.446157164Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 02 14:14:17 no-preload-150469 crio[680]: time="2025-11-02T14:14:17.449314517Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 02 14:14:17 no-preload-150469 crio[680]: time="2025-11-02T14:14:17.449347592Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 02 14:14:22 no-preload-150469 crio[680]: time="2025-11-02T14:14:22.036768546Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=edbb4c42-df8d-433b-8d9e-6d523e2c4aab name=/runtime.v1.ImageService/ImageStatus
	Nov 02 14:14:22 no-preload-150469 crio[680]: time="2025-11-02T14:14:22.037746984Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=2319875c-b7a1-4291-9797-3ea11021733c name=/runtime.v1.ImageService/ImageStatus
	Nov 02 14:14:22 no-preload-150469 crio[680]: time="2025-11-02T14:14:22.039141967Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-cwt4d/dashboard-metrics-scraper" id=4092539a-ce89-4ec1-b6b0-3be48c0228df name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 14:14:22 no-preload-150469 crio[680]: time="2025-11-02T14:14:22.039269918Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 14:14:22 no-preload-150469 crio[680]: time="2025-11-02T14:14:22.04684894Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 14:14:22 no-preload-150469 crio[680]: time="2025-11-02T14:14:22.047447264Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 14:14:22 no-preload-150469 crio[680]: time="2025-11-02T14:14:22.064635143Z" level=info msg="Created container 39bd3315676431d20c326f0fa08a65e0c6fe873bc142b56bb40acbe91691b013: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-cwt4d/dashboard-metrics-scraper" id=4092539a-ce89-4ec1-b6b0-3be48c0228df name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 14:14:22 no-preload-150469 crio[680]: time="2025-11-02T14:14:22.070792884Z" level=info msg="Starting container: 39bd3315676431d20c326f0fa08a65e0c6fe873bc142b56bb40acbe91691b013" id=9848a94e-b78b-4b38-8c29-b7ed22c0e4d2 name=/runtime.v1.RuntimeService/StartContainer
	Nov 02 14:14:22 no-preload-150469 crio[680]: time="2025-11-02T14:14:22.072741209Z" level=info msg="Started container" PID=1746 containerID=39bd3315676431d20c326f0fa08a65e0c6fe873bc142b56bb40acbe91691b013 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-cwt4d/dashboard-metrics-scraper id=9848a94e-b78b-4b38-8c29-b7ed22c0e4d2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=fa7aa7886844b47270819070de4911693f6e80fa90662bb694ce1257a79749a2
	Nov 02 14:14:22 no-preload-150469 conmon[1744]: conmon 39bd3315676431d20c32 <ninfo>: container 1746 exited with status 1
	Nov 02 14:14:22 no-preload-150469 crio[680]: time="2025-11-02T14:14:22.483836188Z" level=info msg="Removing container: 92f73f70d2f36d755e2871a57f722d79419cc0cb5b4d8ec45923e94e2457e620" id=534335d9-fd32-4c0a-9928-b839e8c4c474 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 02 14:14:22 no-preload-150469 crio[680]: time="2025-11-02T14:14:22.490769412Z" level=info msg="Error loading conmon cgroup of container 92f73f70d2f36d755e2871a57f722d79419cc0cb5b4d8ec45923e94e2457e620: cgroup deleted" id=534335d9-fd32-4c0a-9928-b839e8c4c474 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 02 14:14:22 no-preload-150469 crio[680]: time="2025-11-02T14:14:22.493746546Z" level=info msg="Removed container 92f73f70d2f36d755e2871a57f722d79419cc0cb5b4d8ec45923e94e2457e620: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-cwt4d/dashboard-metrics-scraper" id=534335d9-fd32-4c0a-9928-b839e8c4c474 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	39bd331567643       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           8 seconds ago        Exited              dashboard-metrics-scraper   3                   fa7aa7886844b       dashboard-metrics-scraper-6ffb444bf9-cwt4d   kubernetes-dashboard
	bb15dea2065f6       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           23 seconds ago       Running             storage-provisioner         2                   9d26ea7dcdf3d       storage-provisioner                          kube-system
	c1a14fc8d34ef       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   39 seconds ago       Running             kubernetes-dashboard        0                   c6b43c3587b93       kubernetes-dashboard-855c9754f9-px4fq        kubernetes-dashboard
	5c16fe809384b       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           53 seconds ago       Running             coredns                     1                   c460154d58cc7       coredns-66bc5c9577-wkgrq                     kube-system
	874992340b7bc       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           53 seconds ago       Running             busybox                     1                   b688764b9498d       busybox                                      default
	65de798ea26c1       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           54 seconds ago       Running             kindnet-cni                 1                   c22e86b99cac6       kindnet-vm84g                                kube-system
	8427b59e04cb8       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           54 seconds ago       Running             kube-proxy                  1                   5201af6ec853b       kube-proxy-qg9np                             kube-system
	7f54f601eb74c       66749159455b3f08c8318fe0233122f54d0f5889f9c5fdfb73c3fd9d99895b51                                           54 seconds ago       Exited              storage-provisioner         1                   9d26ea7dcdf3d       storage-provisioner                          kube-system
	0d290268ce1ba       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   9086c7876f243       kube-apiserver-no-preload-150469             kube-system
	ae39d005c17d3       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   c7dfe31180d59       etcd-no-preload-150469                       kube-system
	a519dcf9b13e8       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   6a12248093918       kube-scheduler-no-preload-150469             kube-system
	78689f8cb995b       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   eecc9581ca9b6       kube-controller-manager-no-preload-150469    kube-system
	
	
	==> coredns [5c16fe809384bbef876c2382a1b0a8984689ea91e90b0f11e1c3d5d2e31b593e] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58920 - 32943 "HINFO IN 1499161906379287752.3893016657460999203. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013003229s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
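	
	The "host.minikube.internal" record that the earlier sed pipeline injected (and that start.go:1013 confirms) lands in the CoreDNS Corefile as a hosts block ahead of the forward plugin; fallthrough keeps all other names flowing to the next plugin. Reconstructed from the sed expression logged above, the inserted fragment is:
	
	        hosts {
	           192.168.85.1 host.minikube.internal
	           fallthrough
	        }
	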
	
	
	==> describe nodes <==
	Name:               no-preload-150469
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-150469
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=170a9221ec214abbddb4c7cdac340516a92b239a
	                    minikube.k8s.io/name=no-preload-150469
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_02T14_12_32_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 02 Nov 2025 14:12:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-150469
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 02 Nov 2025 14:14:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 02 Nov 2025 14:14:06 +0000   Sun, 02 Nov 2025 14:12:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 02 Nov 2025 14:14:06 +0000   Sun, 02 Nov 2025 14:12:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 02 Nov 2025 14:14:06 +0000   Sun, 02 Nov 2025 14:12:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 02 Nov 2025 14:14:06 +0000   Sun, 02 Nov 2025 14:12:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-150469
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                132ded7b-9d34-4b24-9227-0ca0ca7ef647
	  Boot ID:                    4c303cf4-c0b6-4c0d-aa1a-d7509feb15e7
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 coredns-66bc5c9577-wkgrq                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     115s
	  kube-system                 etcd-no-preload-150469                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m2s
	  kube-system                 kindnet-vm84g                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      115s
	  kube-system                 kube-apiserver-no-preload-150469              250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-controller-manager-no-preload-150469     200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-proxy-qg9np                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-scheduler-no-preload-150469              100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-cwt4d    0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-px4fq         0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 113s                 kube-proxy       
	  Normal   Starting                 53s                  kube-proxy       
	  Normal   Starting                 2m7s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m7s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m7s (x8 over 2m7s)  kubelet          Node no-preload-150469 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m7s (x8 over 2m7s)  kubelet          Node no-preload-150469 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m7s (x8 over 2m7s)  kubelet          Node no-preload-150469 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    2m                   kubelet          Node no-preload-150469 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 2m                   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m                   kubelet          Node no-preload-150469 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     2m                   kubelet          Node no-preload-150469 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m                   kubelet          Starting kubelet.
	  Normal   RegisteredNode           116s                 node-controller  Node no-preload-150469 event: Registered Node no-preload-150469 in Controller
	  Normal   NodeReady                101s                 kubelet          Node no-preload-150469 status is now: NodeReady
	  Normal   Starting                 64s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 64s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  63s (x8 over 64s)    kubelet          Node no-preload-150469 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    63s (x8 over 64s)    kubelet          Node no-preload-150469 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     63s (x8 over 64s)    kubelet          Node no-preload-150469 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           53s                  node-controller  Node no-preload-150469 event: Registered Node no-preload-150469 in Controller
	
	
	==> dmesg <==
	[Nov 2 13:52] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:54] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:55] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:56] overlayfs: idmapped layers are currently not supported
	[  +3.515963] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:57] overlayfs: idmapped layers are currently not supported
	[ +24.836033] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:58] overlayfs: idmapped layers are currently not supported
	[ +23.362553] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:59] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:01] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:02] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:03] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:04] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:06] overlayfs: idmapped layers are currently not supported
	[ +50.469589] kauditd_printk_skb: 8 callbacks suppressed
	[Nov 2 14:07] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:08] overlayfs: idmapped layers are currently not supported
	[ +11.089512] overlayfs: idmapped layers are currently not supported
	[ +33.821233] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:09] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:10] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:11] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:13] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:14] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [ae39d005c17d3eece4e0835d8098b6b121095716785eb6ec522a5afe4f89a68c] <==
	{"level":"warn","ts":"2025-11-02T14:13:32.630528Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:13:32.675540Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:13:32.729809Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:13:32.761506Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:13:32.789233Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:13:32.825554Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:13:32.855635Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:13:32.883155Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:13:32.909292Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:13:32.951444Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:13:32.970849Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:13:33.012648Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:13:33.066047Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:13:33.118888Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:13:33.149587Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:13:33.189889Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:13:33.206359Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:13:33.230165Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:13:33.263013Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:13:33.309637Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:13:33.366916Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:13:33.400918Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:13:33.430870Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:13:33.455411Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:13:33.580943Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38014","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 14:14:31 up  2:57,  0 user,  load average: 3.62, 3.37, 2.89
	Linux no-preload-150469 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [65de798ea26c14d06e8e1ca4be95b06f036a986330e5ac827e686e19efdb4346] <==
	I1102 14:13:37.213992       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1102 14:13:37.214228       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1102 14:13:37.214352       1 main.go:148] setting mtu 1500 for CNI 
	I1102 14:13:37.214362       1 main.go:178] kindnetd IP family: "ipv4"
	I1102 14:13:37.214375       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-02T14:13:37Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1102 14:13:37.422122       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1102 14:13:37.422139       1 controller.go:381] "Waiting for informer caches to sync"
	I1102 14:13:37.422148       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1102 14:13:37.422425       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1102 14:14:07.422395       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1102 14:14:07.422395       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1102 14:14:07.422519       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1102 14:14:07.423749       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1102 14:14:08.823267       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1102 14:14:08.823362       1 metrics.go:72] Registering metrics
	I1102 14:14:08.823482       1 controller.go:711] "Syncing nftables rules"
	I1102 14:14:17.425539       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1102 14:14:17.425578       1 main.go:301] handling current node
	I1102 14:14:27.425523       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1102 14:14:27.425567       1 main.go:301] handling current node
	
	
	==> kube-apiserver [0d290268ce1ba1c435beb6a5c872eb4214b0dab49611f26a980150c8cf765731] <==
	I1102 14:13:35.710331       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1102 14:13:35.710360       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1102 14:13:35.713955       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1102 14:13:35.729687       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1102 14:13:35.729888       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1102 14:13:35.729904       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1102 14:13:35.730001       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1102 14:13:35.730036       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1102 14:13:35.731162       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1102 14:13:35.731812       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1102 14:13:35.732310       1 aggregator.go:171] initial CRD sync complete...
	I1102 14:13:35.732328       1 autoregister_controller.go:144] Starting autoregister controller
	I1102 14:13:35.732334       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1102 14:13:35.732341       1 cache.go:39] Caches are synced for autoregister controller
	I1102 14:13:36.125761       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1102 14:13:36.133426       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1102 14:13:37.336613       1 controller.go:667] quota admission added evaluator for: namespaces
	I1102 14:13:37.561563       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1102 14:13:37.705667       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1102 14:13:37.741844       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1102 14:13:37.946154       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.104.238.92"}
	I1102 14:13:37.972687       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.184.177"}
	I1102 14:13:38.898583       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1102 14:13:38.972255       1 controller.go:667] quota admission added evaluator for: endpoints
	I1102 14:13:39.262470       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [78689f8cb995ba031b8e14be6ecf0557f861d2852066ab8bb9395ec9c1275bcc] <==
	I1102 14:13:38.861660       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1102 14:13:38.864300       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1102 14:13:38.867531       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1102 14:13:38.868710       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1102 14:13:38.868732       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1102 14:13:38.869847       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1102 14:13:38.869888       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1102 14:13:38.869897       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1102 14:13:38.872090       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1102 14:13:38.873263       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1102 14:13:38.881509       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1102 14:13:38.885808       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1102 14:13:38.894773       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1102 14:13:38.894897       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1102 14:13:38.897692       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1102 14:13:38.899152       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1102 14:13:38.901715       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1102 14:13:38.904008       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1102 14:13:38.906265       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1102 14:13:38.906274       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1102 14:13:38.912495       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1102 14:13:38.915752       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1102 14:13:38.928361       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1102 14:13:38.928429       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1102 14:13:38.928438       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-proxy [8427b59e04cb889ecd2b15bba53ef56dd6e97a4b0e3a181a69cb0987e6740e29] <==
	I1102 14:13:37.433562       1 server_linux.go:53] "Using iptables proxy"
	I1102 14:13:37.674034       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1102 14:13:37.802877       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1102 14:13:37.803039       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1102 14:13:37.803190       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1102 14:13:37.890935       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1102 14:13:37.890995       1 server_linux.go:132] "Using iptables Proxier"
	I1102 14:13:37.894957       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1102 14:13:37.895284       1 server.go:527] "Version info" version="v1.34.1"
	I1102 14:13:37.895299       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1102 14:13:37.896419       1 config.go:200] "Starting service config controller"
	I1102 14:13:37.896429       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1102 14:13:37.901478       1 config.go:106] "Starting endpoint slice config controller"
	I1102 14:13:37.901511       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1102 14:13:37.901538       1 config.go:403] "Starting serviceCIDR config controller"
	I1102 14:13:37.901543       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1102 14:13:37.901913       1 config.go:309] "Starting node config controller"
	I1102 14:13:37.901925       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1102 14:13:37.901931       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1102 14:13:37.999419       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1102 14:13:38.002738       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1102 14:13:38.002833       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [a519dcf9b13e8f0169f57e526ea9548babc82276dc427bc14eda821e798d8cc0] <==
	I1102 14:13:32.020548       1 serving.go:386] Generated self-signed cert in-memory
	W1102 14:13:34.991001       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1102 14:13:34.991341       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1102 14:13:34.991383       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1102 14:13:34.991425       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1102 14:13:35.628730       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1102 14:13:35.629089       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1102 14:13:35.633216       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1102 14:13:35.657818       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1102 14:13:35.660744       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1102 14:13:35.660900       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1102 14:13:35.766036       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 02 14:13:40 no-preload-150469 kubelet[798]: W1102 14:13:40.029988     798 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/aa4ae44e602159b250d81a350b59577c08056d4e598bda926fd0010cba92fa48/crio-c6b43c3587b93b1f5adfc86039f75e8660fe9bb263675ca5f9a064ffe0ed754b WatchSource:0}: Error finding container c6b43c3587b93b1f5adfc86039f75e8660fe9bb263675ca5f9a064ffe0ed754b: Status 404 returned error can't find the container with id c6b43c3587b93b1f5adfc86039f75e8660fe9bb263675ca5f9a064ffe0ed754b
	Nov 02 14:13:41 no-preload-150469 kubelet[798]: I1102 14:13:41.756865     798 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 02 14:13:45 no-preload-150469 kubelet[798]: I1102 14:13:45.335237     798 scope.go:117] "RemoveContainer" containerID="2f58c2637b7d615787ac41135c3d5e606844f3c5d13f03acea3341044887ce4a"
	Nov 02 14:13:46 no-preload-150469 kubelet[798]: I1102 14:13:46.355009     798 scope.go:117] "RemoveContainer" containerID="2f58c2637b7d615787ac41135c3d5e606844f3c5d13f03acea3341044887ce4a"
	Nov 02 14:13:46 no-preload-150469 kubelet[798]: I1102 14:13:46.376120     798 scope.go:117] "RemoveContainer" containerID="269b475b98840f69bd2f5d81b21a41aebd8b8485116341ae5b6e85e0abe234fe"
	Nov 02 14:13:46 no-preload-150469 kubelet[798]: E1102 14:13:46.376382     798 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-cwt4d_kubernetes-dashboard(1026aa11-6b87-41b2-bca7-44f8bd760fc9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-cwt4d" podUID="1026aa11-6b87-41b2-bca7-44f8bd760fc9"
	Nov 02 14:13:47 no-preload-150469 kubelet[798]: I1102 14:13:47.361000     798 scope.go:117] "RemoveContainer" containerID="269b475b98840f69bd2f5d81b21a41aebd8b8485116341ae5b6e85e0abe234fe"
	Nov 02 14:13:47 no-preload-150469 kubelet[798]: E1102 14:13:47.365602     798 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-cwt4d_kubernetes-dashboard(1026aa11-6b87-41b2-bca7-44f8bd760fc9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-cwt4d" podUID="1026aa11-6b87-41b2-bca7-44f8bd760fc9"
	Nov 02 14:13:49 no-preload-150469 kubelet[798]: I1102 14:13:49.923286     798 scope.go:117] "RemoveContainer" containerID="269b475b98840f69bd2f5d81b21a41aebd8b8485116341ae5b6e85e0abe234fe"
	Nov 02 14:13:49 no-preload-150469 kubelet[798]: E1102 14:13:49.923452     798 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-cwt4d_kubernetes-dashboard(1026aa11-6b87-41b2-bca7-44f8bd760fc9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-cwt4d" podUID="1026aa11-6b87-41b2-bca7-44f8bd760fc9"
	Nov 02 14:14:01 no-preload-150469 kubelet[798]: I1102 14:14:01.033674     798 scope.go:117] "RemoveContainer" containerID="269b475b98840f69bd2f5d81b21a41aebd8b8485116341ae5b6e85e0abe234fe"
	Nov 02 14:14:01 no-preload-150469 kubelet[798]: I1102 14:14:01.427861     798 scope.go:117] "RemoveContainer" containerID="269b475b98840f69bd2f5d81b21a41aebd8b8485116341ae5b6e85e0abe234fe"
	Nov 02 14:14:01 no-preload-150469 kubelet[798]: I1102 14:14:01.428296     798 scope.go:117] "RemoveContainer" containerID="92f73f70d2f36d755e2871a57f722d79419cc0cb5b4d8ec45923e94e2457e620"
	Nov 02 14:14:01 no-preload-150469 kubelet[798]: E1102 14:14:01.428480     798 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-cwt4d_kubernetes-dashboard(1026aa11-6b87-41b2-bca7-44f8bd760fc9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-cwt4d" podUID="1026aa11-6b87-41b2-bca7-44f8bd760fc9"
	Nov 02 14:14:01 no-preload-150469 kubelet[798]: I1102 14:14:01.481503     798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-px4fq" podStartSLOduration=10.678669113 podStartE2EDuration="22.481474303s" podCreationTimestamp="2025-11-02 14:13:39 +0000 UTC" firstStartedPulling="2025-11-02 14:13:40.038058358 +0000 UTC m=+12.408311868" lastFinishedPulling="2025-11-02 14:13:51.84086354 +0000 UTC m=+24.211117058" observedRunningTime="2025-11-02 14:13:52.429960324 +0000 UTC m=+24.800213842" watchObservedRunningTime="2025-11-02 14:14:01.481474303 +0000 UTC m=+33.851727813"
	Nov 02 14:14:07 no-preload-150469 kubelet[798]: I1102 14:14:07.444306     798 scope.go:117] "RemoveContainer" containerID="7f54f601eb74c81b089dd8333c2ac1ee002c336d383d6ca8f01b893371d53820"
	Nov 02 14:14:09 no-preload-150469 kubelet[798]: I1102 14:14:09.923194     798 scope.go:117] "RemoveContainer" containerID="92f73f70d2f36d755e2871a57f722d79419cc0cb5b4d8ec45923e94e2457e620"
	Nov 02 14:14:09 no-preload-150469 kubelet[798]: E1102 14:14:09.923373     798 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-cwt4d_kubernetes-dashboard(1026aa11-6b87-41b2-bca7-44f8bd760fc9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-cwt4d" podUID="1026aa11-6b87-41b2-bca7-44f8bd760fc9"
	Nov 02 14:14:22 no-preload-150469 kubelet[798]: I1102 14:14:22.035386     798 scope.go:117] "RemoveContainer" containerID="92f73f70d2f36d755e2871a57f722d79419cc0cb5b4d8ec45923e94e2457e620"
	Nov 02 14:14:22 no-preload-150469 kubelet[798]: I1102 14:14:22.482026     798 scope.go:117] "RemoveContainer" containerID="92f73f70d2f36d755e2871a57f722d79419cc0cb5b4d8ec45923e94e2457e620"
	Nov 02 14:14:22 no-preload-150469 kubelet[798]: I1102 14:14:22.482313     798 scope.go:117] "RemoveContainer" containerID="39bd3315676431d20c326f0fa08a65e0c6fe873bc142b56bb40acbe91691b013"
	Nov 02 14:14:22 no-preload-150469 kubelet[798]: E1102 14:14:22.482471     798 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-cwt4d_kubernetes-dashboard(1026aa11-6b87-41b2-bca7-44f8bd760fc9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-cwt4d" podUID="1026aa11-6b87-41b2-bca7-44f8bd760fc9"
	Nov 02 14:14:25 no-preload-150469 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 02 14:14:26 no-preload-150469 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 02 14:14:26 no-preload-150469 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [c1a14fc8d34ef2b0318dcfde9cb1a935bc5bd449b2ddc86097fda87d37278646] <==
	2025/11/02 14:13:51 Starting overwatch
	2025/11/02 14:13:51 Using namespace: kubernetes-dashboard
	2025/11/02 14:13:51 Using in-cluster config to connect to apiserver
	2025/11/02 14:13:51 Using secret token for csrf signing
	2025/11/02 14:13:51 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/02 14:13:51 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/02 14:13:51 Successful initial request to the apiserver, version: v1.34.1
	2025/11/02 14:13:51 Generating JWE encryption key
	2025/11/02 14:13:51 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/02 14:13:51 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/02 14:13:52 Initializing JWE encryption key from synchronized object
	2025/11/02 14:13:52 Creating in-cluster Sidecar client
	2025/11/02 14:13:52 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/02 14:13:52 Serving insecurely on HTTP port: 9090
	2025/11/02 14:14:22 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [7f54f601eb74c81b089dd8333c2ac1ee002c336d383d6ca8f01b893371d53820] <==
	I1102 14:13:37.247454       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1102 14:14:07.249780       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [bb15dea2065f6d7a3be2daadead55036243cbc62d492b1a23b588e6a235bebd0] <==
	I1102 14:14:07.504953       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1102 14:14:07.519141       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1102 14:14:07.519192       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1102 14:14:07.521567       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:14:10.977567       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:14:15.238903       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:14:18.837495       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:14:21.891215       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:14:24.913072       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:14:24.918260       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1102 14:14:24.918498       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1102 14:14:24.918716       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-150469_ba4449b4-06f0-40b1-8d13-8fd20c3cb78c!
	I1102 14:14:24.919425       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"8294f6b4-bba2-4f06-8d40-727928497485", APIVersion:"v1", ResourceVersion:"680", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-150469_ba4449b4-06f0-40b1-8d13-8fd20c3cb78c became leader
	W1102 14:14:24.926911       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:14:24.934650       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1102 14:14:25.019215       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-150469_ba4449b4-06f0-40b1-8d13-8fd20c3cb78c!
	W1102 14:14:26.937729       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:14:26.942804       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:14:28.946688       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:14:28.953656       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:14:30.958281       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:14:30.962904       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
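Both failure modes in the post-mortem log above reduce to the same symptom: kindnet's reflectors and the first storage-provisioner instance each time out dialing the in-cluster apiserver VIP at 10.96.0.1:443 while the control plane restarts. As a point of reference, that path can be checked with a plain TCP dial; the following is a minimal stand-alone sketch (the address is taken from the log, nothing here is minikube code):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// 10.96.0.1:443 is the kubernetes Service VIP seen in the kindnet log.
		conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 5*time.Second)
		if err != nil {
			// Matches the "dial tcp 10.96.0.1:443: i/o timeout" reflector errors.
			fmt.Println("apiserver VIP unreachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("apiserver VIP reachable")
	}

Run from inside the node or any pod, this separates "apiserver down" from "Service routing broken"; in the log above the dials start succeeding again and the kindnet caches sync at 14:14:08.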
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-150469 -n no-preload-150469
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-150469 -n no-preload-150469: exit status 2 (409.238624ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-150469 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (6.62s)
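A side note on the storage-provisioner output earlier in this post-mortem: every leader-election heartbeat logs `v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice`, because the provisioner still takes its `k8s.io-minikube-hostpath` lock on an Endpoints object. For leader election specifically, the lock type client-go supports today is a coordination.k8s.io Lease; a minimal sketch of that shape (timings and wiring are illustrative, this is not the provisioner's actual code) looks like:

	package main

	import (
		"context"
		"os"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		cfg, err := rest.InClusterConfig() // assumes running in-cluster, like the provisioner pod
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
		id, _ := os.Hostname()

		// Lease-backed lock; an Endpoints-backed lock is what triggers the
		// deprecation warnings seen in the log above.
		lock := &resourcelock.LeaseLock{
			LeaseMeta: metav1.ObjectMeta{
				Name:      "k8s.io-minikube-hostpath", // lock name taken from the log
				Namespace: "kube-system",
			},
			Client:     client.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: id},
		}

		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second, // illustrative timings
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) {
					// start the provisioner controller here
				},
				OnStoppedLeading: func() { os.Exit(1) },
			},
		})
	}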

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (3.16s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-955646 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-955646 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (322.366487ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T14:15:07Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-955646 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
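The stderr above shows where the enable aborts: before touching the addon, minikube checks for paused containers by running `sudo runc list -f json` inside the node, and that command fails outright because `/run/runc` is missing (the crio runtime on this image evidently keeps its state elsewhere). The shape of that probe is easy to reproduce; the following stand-alone sketch (error handling trimmed, not minikube's actual implementation) runs the same command and decodes its output:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// runcContainer holds the fields of `runc list -f json` output used here.
	type runcContainer struct {
		ID     string `json:"id"`
		Status string `json:"status"` // "running", "paused", ...
	}

	func listPaused() ([]string, error) {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
		if err != nil {
			// e.g. "open /run/runc: no such file or directory", as in the stderr above
			return nil, fmt.Errorf("runc list: %w", err)
		}
		var containers []runcContainer
		if err := json.Unmarshal(out, &containers); err != nil {
			return nil, err
		}
		var paused []string
		for _, c := range containers {
			if c.Status == "paused" {
				paused = append(paused, c.ID)
			}
		}
		return paused, nil
	}

	func main() {
		ids, err := listPaused()
		if err != nil {
			fmt.Println("check paused failed:", err)
			return
		}
		fmt.Println("paused containers:", ids)
	}

Against a node in this state, listPaused returns the wrapped `open /run/runc: no such file or directory` error, which is exactly what MK_ADDON_ENABLE_PAUSED surfaces.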
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-955646 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-955646 describe deploy/metrics-server -n kube-system: exit status 1 (114.342146ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-955646 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-955646
helpers_test.go:243: (dbg) docker inspect embed-certs-955646:

-- stdout --
	[
	    {
	        "Id": "30c758ef671a78927180fb97bde8b1a9a094e365398db33e109ee628c96e6553",
	        "Created": "2025-11-02T14:13:39.788499711Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 486194,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-02T14:13:39.852701084Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/30c758ef671a78927180fb97bde8b1a9a094e365398db33e109ee628c96e6553/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/30c758ef671a78927180fb97bde8b1a9a094e365398db33e109ee628c96e6553/hostname",
	        "HostsPath": "/var/lib/docker/containers/30c758ef671a78927180fb97bde8b1a9a094e365398db33e109ee628c96e6553/hosts",
	        "LogPath": "/var/lib/docker/containers/30c758ef671a78927180fb97bde8b1a9a094e365398db33e109ee628c96e6553/30c758ef671a78927180fb97bde8b1a9a094e365398db33e109ee628c96e6553-json.log",
	        "Name": "/embed-certs-955646",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-955646:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-955646",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "30c758ef671a78927180fb97bde8b1a9a094e365398db33e109ee628c96e6553",
	                "LowerDir": "/var/lib/docker/overlay2/8c504e43823d68c8b3c159a922e06da89536ef8a80c163fcf27d6116fa985aa4-init/diff:/var/lib/docker/overlay2/53058afe6acd7391f639f551e07f446f6a39a991b699aea18507cb21a1652e97/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8c504e43823d68c8b3c159a922e06da89536ef8a80c163fcf27d6116fa985aa4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8c504e43823d68c8b3c159a922e06da89536ef8a80c163fcf27d6116fa985aa4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8c504e43823d68c8b3c159a922e06da89536ef8a80c163fcf27d6116fa985aa4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-955646",
	                "Source": "/var/lib/docker/volumes/embed-certs-955646/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-955646",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-955646",
	                "name.minikube.sigs.k8s.io": "embed-certs-955646",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d2f5ec15646de5e421a0fe57053763a40de0941bfd9812f67e70efd0c1ea8c45",
	            "SandboxKey": "/var/run/docker/netns/d2f5ec15646d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33441"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33442"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33445"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33443"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33444"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-955646": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7e:66:85:61:1c:8b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d85ba2fbd0cbee1971516307c8078f5176011d8f2e54e2718a749b7827caba3c",
	                    "EndpointID": "6e66e2a9a027e86e1ab6af93cb5c35019d490f28b01a06d26ca111de97291ff7",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-955646",
	                        "30c758ef671a"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
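For orientation, the piece of this inspect payload the harness actually needs is the host binding for the API-server port: the `8443/tcp` entry under `NetworkSettings.Ports` maps to `127.0.0.1:33444` here. Decoding just that slice of the document takes a few lines; a minimal sketch (the struct is trimmed to the fields used, and is not a type from the test suite):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// inspectEntry models only the fragment of `docker inspect` output read below.
	type inspectEntry struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string `json:"HostIp"`
				HostPort string `json:"HostPort"`
			} `json:"Ports"`
		} `json:"NetworkSettings"`
	}

	func main() {
		out, err := exec.Command("docker", "inspect", "embed-certs-955646").Output()
		if err != nil {
			panic(err)
		}
		var entries []inspectEntry // docker inspect always returns a JSON array
		if err := json.Unmarshal(out, &entries); err != nil {
			panic(err)
		}
		bindings := entries[0].NetworkSettings.Ports["8443/tcp"]
		if len(bindings) == 0 {
			fmt.Println("no host binding for 8443/tcp")
			return
		}
		// With the payload above this prints 127.0.0.1:33444.
		fmt.Printf("%s:%s\n", bindings[0].HostIp, bindings[0].HostPort)
	}

With the payload above this prints 127.0.0.1:33444, the host-side address for the cluster's apiserver port.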
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-955646 -n embed-certs-955646
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-955646 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-955646 logs -n 25: (1.653292613s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p cert-options-935084                                                                                                                                                                                                                        │ cert-options-935084          │ jenkins │ v1.37.0 │ 02 Nov 25 14:09 UTC │ 02 Nov 25 14:09 UTC │
	│ start   │ -p old-k8s-version-873713 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-873713       │ jenkins │ v1.37.0 │ 02 Nov 25 14:09 UTC │ 02 Nov 25 14:10 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-873713 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-873713       │ jenkins │ v1.37.0 │ 02 Nov 25 14:10 UTC │                     │
	│ stop    │ -p old-k8s-version-873713 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-873713       │ jenkins │ v1.37.0 │ 02 Nov 25 14:10 UTC │ 02 Nov 25 14:10 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-873713 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-873713       │ jenkins │ v1.37.0 │ 02 Nov 25 14:10 UTC │ 02 Nov 25 14:10 UTC │
	│ start   │ -p old-k8s-version-873713 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-873713       │ jenkins │ v1.37.0 │ 02 Nov 25 14:10 UTC │ 02 Nov 25 14:11 UTC │
	│ start   │ -p cert-expiration-114321 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-114321       │ jenkins │ v1.37.0 │ 02 Nov 25 14:11 UTC │ 02 Nov 25 14:13 UTC │
	│ image   │ old-k8s-version-873713 image list --format=json                                                                                                                                                                                               │ old-k8s-version-873713       │ jenkins │ v1.37.0 │ 02 Nov 25 14:11 UTC │ 02 Nov 25 14:11 UTC │
	│ pause   │ -p old-k8s-version-873713 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-873713       │ jenkins │ v1.37.0 │ 02 Nov 25 14:11 UTC │                     │
	│ delete  │ -p old-k8s-version-873713                                                                                                                                                                                                                     │ old-k8s-version-873713       │ jenkins │ v1.37.0 │ 02 Nov 25 14:11 UTC │ 02 Nov 25 14:11 UTC │
	│ delete  │ -p old-k8s-version-873713                                                                                                                                                                                                                     │ old-k8s-version-873713       │ jenkins │ v1.37.0 │ 02 Nov 25 14:11 UTC │ 02 Nov 25 14:11 UTC │
	│ start   │ -p no-preload-150469 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-150469            │ jenkins │ v1.37.0 │ 02 Nov 25 14:11 UTC │ 02 Nov 25 14:12 UTC │
	│ addons  │ enable metrics-server -p no-preload-150469 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-150469            │ jenkins │ v1.37.0 │ 02 Nov 25 14:13 UTC │                     │
	│ stop    │ -p no-preload-150469 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-150469            │ jenkins │ v1.37.0 │ 02 Nov 25 14:13 UTC │ 02 Nov 25 14:13 UTC │
	│ addons  │ enable dashboard -p no-preload-150469 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-150469            │ jenkins │ v1.37.0 │ 02 Nov 25 14:13 UTC │ 02 Nov 25 14:13 UTC │
	│ start   │ -p no-preload-150469 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-150469            │ jenkins │ v1.37.0 │ 02 Nov 25 14:13 UTC │ 02 Nov 25 14:14 UTC │
	│ delete  │ -p cert-expiration-114321                                                                                                                                                                                                                     │ cert-expiration-114321       │ jenkins │ v1.37.0 │ 02 Nov 25 14:13 UTC │ 02 Nov 25 14:13 UTC │
	│ start   │ -p embed-certs-955646 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-955646           │ jenkins │ v1.37.0 │ 02 Nov 25 14:13 UTC │ 02 Nov 25 14:14 UTC │
	│ image   │ no-preload-150469 image list --format=json                                                                                                                                                                                                    │ no-preload-150469            │ jenkins │ v1.37.0 │ 02 Nov 25 14:14 UTC │ 02 Nov 25 14:14 UTC │
	│ pause   │ -p no-preload-150469 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-150469            │ jenkins │ v1.37.0 │ 02 Nov 25 14:14 UTC │                     │
	│ delete  │ -p no-preload-150469                                                                                                                                                                                                                          │ no-preload-150469            │ jenkins │ v1.37.0 │ 02 Nov 25 14:14 UTC │ 02 Nov 25 14:14 UTC │
	│ delete  │ -p no-preload-150469                                                                                                                                                                                                                          │ no-preload-150469            │ jenkins │ v1.37.0 │ 02 Nov 25 14:14 UTC │ 02 Nov 25 14:14 UTC │
	│ delete  │ -p disable-driver-mounts-720030                                                                                                                                                                                                               │ disable-driver-mounts-720030 │ jenkins │ v1.37.0 │ 02 Nov 25 14:14 UTC │ 02 Nov 25 14:14 UTC │
	│ start   │ -p default-k8s-diff-port-786183 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-786183 │ jenkins │ v1.37.0 │ 02 Nov 25 14:14 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-955646 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-955646           │ jenkins │ v1.37.0 │ 02 Nov 25 14:15 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/02 14:14:35
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
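	The records below follow the glog prefix documented on the format line above. A minimal sketch for pulling only the warning/error records out of a saved copy of this log (the file name is an assumption):

    # Keep only W/E records; the [WE]mmdd prefix is per the format line above
    grep -E '^[[:space:]]*[WE][0-9]{4} ' last_start.log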
	I1102 14:14:35.316665  490066 out.go:360] Setting OutFile to fd 1 ...
	I1102 14:14:35.316871  490066 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 14:14:35.316898  490066 out.go:374] Setting ErrFile to fd 2...
	I1102 14:14:35.316932  490066 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 14:14:35.317308  490066 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-293314/.minikube/bin
	I1102 14:14:35.317873  490066 out.go:368] Setting JSON to false
	I1102 14:14:35.319057  490066 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10628,"bootTime":1762082248,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1102 14:14:35.319164  490066 start.go:143] virtualization:  
	I1102 14:14:35.323296  490066 out.go:179] * [default-k8s-diff-port-786183] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1102 14:14:35.327739  490066 out.go:179]   - MINIKUBE_LOCATION=21808
	I1102 14:14:35.327860  490066 notify.go:221] Checking for updates...
	I1102 14:14:35.334134  490066 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1102 14:14:35.337261  490066 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-293314/kubeconfig
	I1102 14:14:35.340415  490066 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-293314/.minikube
	I1102 14:14:35.343440  490066 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1102 14:14:35.346436  490066 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1102 14:14:35.349923  490066 config.go:182] Loaded profile config "embed-certs-955646": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 14:14:35.350077  490066 driver.go:422] Setting default libvirt URI to qemu:///system
	I1102 14:14:35.378041  490066 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1102 14:14:35.378154  490066 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 14:14:35.443713  490066 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-02 14:14:35.43364649 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
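	minikube shells out to the docker CLI for host introspection; the same Go-template selectors can pull single fields rather than the full JSON dump, e.g.:

    # Standard docker CLI Go templates; field names match the info dump above
    docker system info --format '{{.Driver}}'                 # overlay2
    docker system info --format '{{.CgroupDriver}}'           # cgroupfs
    docker system info --format '{{json .SecurityOptions}}'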
	I1102 14:14:35.443828  490066 docker.go:319] overlay module found
	I1102 14:14:35.447052  490066 out.go:179] * Using the docker driver based on user configuration
	I1102 14:14:35.450011  490066 start.go:309] selected driver: docker
	I1102 14:14:35.450029  490066 start.go:930] validating driver "docker" against <nil>
	I1102 14:14:35.450044  490066 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1102 14:14:35.450839  490066 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 14:14:35.514517  490066 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-02 14:14:35.503807283 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1102 14:14:35.514774  490066 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1102 14:14:35.515024  490066 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1102 14:14:35.518151  490066 out.go:179] * Using Docker driver with root privileges
	I1102 14:14:35.521064  490066 cni.go:84] Creating CNI manager for ""
	I1102 14:14:35.521132  490066 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1102 14:14:35.521145  490066 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1102 14:14:35.521240  490066 start.go:353] cluster config:
	{Name:default-k8s-diff-port-786183 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-786183 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1102 14:14:35.524378  490066 out.go:179] * Starting "default-k8s-diff-port-786183" primary control-plane node in "default-k8s-diff-port-786183" cluster
	I1102 14:14:35.527249  490066 cache.go:124] Beginning downloading kic base image for docker with crio
	I1102 14:14:35.530341  490066 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1102 14:14:35.533136  490066 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1102 14:14:35.533193  490066 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21808-293314/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1102 14:14:35.533208  490066 cache.go:59] Caching tarball of preloaded images
	I1102 14:14:35.533295  490066 preload.go:233] Found /home/jenkins/minikube-integration/21808-293314/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1102 14:14:35.533310  490066 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1102 14:14:35.533448  490066 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/default-k8s-diff-port-786183/config.json ...
	I1102 14:14:35.533474  490066 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/default-k8s-diff-port-786183/config.json: {Name:mkffb26da15cac28adb814319029ff2af5c7724c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
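	The saved profile config is plain JSON, so it can be spot-checked directly (jq here is an assumption; the field names mirror the cluster config dump above):

    jq '.KubernetesConfig.KubernetesVersion, .Nodes[0].Port' \
      /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/default-k8s-diff-port-786183/config.json
    # "v1.34.1"
    # 8444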
	I1102 14:14:35.533639  490066 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1102 14:14:35.563597  490066 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1102 14:14:35.563620  490066 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1102 14:14:35.563634  490066 cache.go:233] Successfully downloaded all kic artifacts
	I1102 14:14:35.563656  490066 start.go:360] acquireMachinesLock for default-k8s-diff-port-786183: {Name:mk74a3791f8141b365a89e0370ddc0301da720d9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1102 14:14:35.563767  490066 start.go:364] duration metric: took 91.152µs to acquireMachinesLock for "default-k8s-diff-port-786183"
	I1102 14:14:35.563798  490066 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-786183 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-786183 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1102 14:14:35.563866  490066 start.go:125] createHost starting for "" (driver="docker")
	W1102 14:14:34.172491  485527 node_ready.go:57] node "embed-certs-955646" has "Ready":"False" status (will retry)
	W1102 14:14:36.674323  485527 node_ready.go:57] node "embed-certs-955646" has "Ready":"False" status (will retry)
	I1102 14:14:35.567492  490066 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1102 14:14:35.567760  490066 start.go:159] libmachine.API.Create for "default-k8s-diff-port-786183" (driver="docker")
	I1102 14:14:35.567801  490066 client.go:173] LocalClient.Create starting
	I1102 14:14:35.567946  490066 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem
	I1102 14:14:35.567995  490066 main.go:143] libmachine: Decoding PEM data...
	I1102 14:14:35.568013  490066 main.go:143] libmachine: Parsing certificate...
	I1102 14:14:35.568067  490066 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21808-293314/.minikube/certs/cert.pem
	I1102 14:14:35.568096  490066 main.go:143] libmachine: Decoding PEM data...
	I1102 14:14:35.568107  490066 main.go:143] libmachine: Parsing certificate...
	I1102 14:14:35.568489  490066 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-786183 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1102 14:14:35.585155  490066 cli_runner.go:211] docker network inspect default-k8s-diff-port-786183 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1102 14:14:35.585270  490066 network_create.go:284] running [docker network inspect default-k8s-diff-port-786183] to gather additional debugging logs...
	I1102 14:14:35.585293  490066 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-786183
	W1102 14:14:35.601590  490066 cli_runner.go:211] docker network inspect default-k8s-diff-port-786183 returned with exit code 1
	I1102 14:14:35.601634  490066 network_create.go:287] error running [docker network inspect default-k8s-diff-port-786183]: docker network inspect default-k8s-diff-port-786183: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-786183 not found
	I1102 14:14:35.601650  490066 network_create.go:289] output of [docker network inspect default-k8s-diff-port-786183]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-786183 not found
	
	** /stderr **
	I1102 14:14:35.601766  490066 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1102 14:14:35.618350  490066 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-ddf319108ac9 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:66:f7:2d:49:67:ff} reservation:<nil>}
	I1102 14:14:35.618873  490066 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-30b945568040 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:b6:b2:b0:cb:49:d7} reservation:<nil>}
	I1102 14:14:35.619261  490066 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-d23a3a2e266d IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:fa:42:95:8e:ae:52} reservation:<nil>}
	I1102 14:14:35.619745  490066 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a0a150}
	I1102 14:14:35.619772  490066 network_create.go:124] attempt to create docker network default-k8s-diff-port-786183 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1102 14:14:35.619839  490066 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-786183 default-k8s-diff-port-786183
	I1102 14:14:35.690476  490066 network_create.go:108] docker network default-k8s-diff-port-786183 192.168.76.0/24 created
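	The network is tagged with minikube's labels, so it can be enumerated and its subnet confirmed with standard docker CLI filters and templates:

    # Filter on the labels passed to "docker network create" above
    docker network ls --filter label=created_by.minikube.sigs.k8s.io=true
    docker network inspect default-k8s-diff-port-786183 \
      --format '{{(index .IPAM.Config 0).Subnet}}'   # 192.168.76.0/24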
	I1102 14:14:35.690507  490066 kic.go:121] calculated static IP "192.168.76.2" for the "default-k8s-diff-port-786183" container
	I1102 14:14:35.690589  490066 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1102 14:14:35.707324  490066 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-786183 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-786183 --label created_by.minikube.sigs.k8s.io=true
	I1102 14:14:35.726706  490066 oci.go:103] Successfully created a docker volume default-k8s-diff-port-786183
	I1102 14:14:35.726797  490066 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-786183-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-786183 --entrypoint /usr/bin/test -v default-k8s-diff-port-786183:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1102 14:14:36.289358  490066 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-786183
	I1102 14:14:36.289397  490066 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1102 14:14:36.289417  490066 kic.go:194] Starting extracting preloaded images to volume ...
	I1102 14:14:36.289500  490066 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21808-293314/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-786183:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	W1102 14:14:38.683208  485527 node_ready.go:57] node "embed-certs-955646" has "Ready":"False" status (will retry)
	W1102 14:14:41.171120  485527 node_ready.go:57] node "embed-certs-955646" has "Ready":"False" status (will retry)
	I1102 14:14:40.723705  490066 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21808-293314/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-786183:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.434154395s)
	I1102 14:14:40.723736  490066 kic.go:203] duration metric: took 4.434315685s to extract preloaded images to volume ...
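	The preload tarball is unpacked straight into the named volume that later becomes the node's /var. A rough spot-check of the extracted image store (alpine is an assumption; any small image with ls works):

    # Mount the volume the way the node container will (vol -> /var) and peek inside
    docker run --rm -v default-k8s-diff-port-786183:/var alpine ls /var/lib/containers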
	W1102 14:14:40.723875  490066 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1102 14:14:40.723984  490066 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1102 14:14:40.781977  490066 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-786183 --name default-k8s-diff-port-786183 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-786183 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-786183 --network default-k8s-diff-port-786183 --ip 192.168.76.2 --volume default-k8s-diff-port-786183:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
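	The node container publishes 22, 2376, 5000, 8444 and 32443 on ephemeral loopback ports; the actual mappings can be recovered with docker port:

    docker port default-k8s-diff-port-786183 22/tcp     # e.g. 127.0.0.1:33446 (used by the SSH steps below)
    docker port default-k8s-diff-port-786183 8444/tcp   # the API server port for this profile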
	I1102 14:14:41.138435  490066 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-786183 --format={{.State.Running}}
	I1102 14:14:41.160698  490066 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-786183 --format={{.State.Status}}
	I1102 14:14:41.184112  490066 cli_runner.go:164] Run: docker exec default-k8s-diff-port-786183 stat /var/lib/dpkg/alternatives/iptables
	I1102 14:14:41.238514  490066 oci.go:144] the created container "default-k8s-diff-port-786183" has a running status.
	I1102 14:14:41.238543  490066 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21808-293314/.minikube/machines/default-k8s-diff-port-786183/id_rsa...
	I1102 14:14:41.807447  490066 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21808-293314/.minikube/machines/default-k8s-diff-port-786183/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1102 14:14:41.829003  490066 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-786183 --format={{.State.Status}}
	I1102 14:14:41.846830  490066 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1102 14:14:41.846853  490066 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-786183 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1102 14:14:41.886128  490066 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-786183 --format={{.State.Status}}
	I1102 14:14:41.906157  490066 machine.go:94] provisionDockerMachine start ...
	I1102 14:14:41.906251  490066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-786183
	I1102 14:14:41.924856  490066 main.go:143] libmachine: Using SSH client type: native
	I1102 14:14:41.925225  490066 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33446 <nil> <nil>}
	I1102 14:14:41.925235  490066 main.go:143] libmachine: About to run SSH command:
	hostname
	I1102 14:14:41.926040  490066 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1102 14:14:45.096511  490066 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-786183
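	The same session can be reproduced by hand with the generated key and the mapped SSH port (33446 here); `minikube ssh -p default-k8s-diff-port-786183` wraps roughly the equivalent of:

    ssh -o StrictHostKeyChecking=no -p 33446 \
      -i /home/jenkins/minikube-integration/21808-293314/.minikube/machines/default-k8s-diff-port-786183/id_rsa \
      docker@127.0.0.1 hostname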
	
	I1102 14:14:45.096538  490066 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-786183"
	I1102 14:14:45.096627  490066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-786183
	I1102 14:14:45.127758  490066 main.go:143] libmachine: Using SSH client type: native
	I1102 14:14:45.128105  490066 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33446 <nil> <nil>}
	I1102 14:14:45.128127  490066 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-786183 && echo "default-k8s-diff-port-786183" | sudo tee /etc/hostname
	I1102 14:14:45.316420  490066 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-786183
	
	I1102 14:14:45.316717  490066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-786183
	W1102 14:14:43.171868  485527 node_ready.go:57] node "embed-certs-955646" has "Ready":"False" status (will retry)
	W1102 14:14:45.174915  485527 node_ready.go:57] node "embed-certs-955646" has "Ready":"False" status (will retry)
	W1102 14:14:47.685808  485527 node_ready.go:57] node "embed-certs-955646" has "Ready":"False" status (will retry)
	I1102 14:14:45.342455  490066 main.go:143] libmachine: Using SSH client type: native
	I1102 14:14:45.342843  490066 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33446 <nil> <nil>}
	I1102 14:14:45.342869  490066 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-786183' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-786183/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-786183' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1102 14:14:45.500011  490066 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1102 14:14:45.500084  490066 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21808-293314/.minikube CaCertPath:/home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21808-293314/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21808-293314/.minikube}
	I1102 14:14:45.500128  490066 ubuntu.go:190] setting up certificates
	I1102 14:14:45.500184  490066 provision.go:84] configureAuth start
	I1102 14:14:45.500270  490066 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-786183
	I1102 14:14:45.518056  490066 provision.go:143] copyHostCerts
	I1102 14:14:45.518125  490066 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-293314/.minikube/ca.pem, removing ...
	I1102 14:14:45.518135  490066 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-293314/.minikube/ca.pem
	I1102 14:14:45.518215  490066 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21808-293314/.minikube/ca.pem (1082 bytes)
	I1102 14:14:45.518330  490066 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-293314/.minikube/cert.pem, removing ...
	I1102 14:14:45.518337  490066 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-293314/.minikube/cert.pem
	I1102 14:14:45.518365  490066 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21808-293314/.minikube/cert.pem (1123 bytes)
	I1102 14:14:45.518427  490066 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-293314/.minikube/key.pem, removing ...
	I1102 14:14:45.518431  490066 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-293314/.minikube/key.pem
	I1102 14:14:45.518456  490066 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21808-293314/.minikube/key.pem (1675 bytes)
	I1102 14:14:45.518545  490066 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21808-293314/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-786183 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-786183 localhost minikube]
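	The SANs requested above can be confirmed in the finished certificate with standard openssl tooling:

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/21808-293314/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'
    # should list the san=[...] entries from the log line above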
	I1102 14:14:46.224754  490066 provision.go:177] copyRemoteCerts
	I1102 14:14:46.224828  490066 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1102 14:14:46.225048  490066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-786183
	I1102 14:14:46.242850  490066 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33446 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/default-k8s-diff-port-786183/id_rsa Username:docker}
	I1102 14:14:46.350341  490066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1102 14:14:46.367217  490066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1102 14:14:46.384815  490066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1102 14:14:46.401727  490066 provision.go:87] duration metric: took 901.500744ms to configureAuth
	I1102 14:14:46.401833  490066 ubuntu.go:206] setting minikube options for container-runtime
	I1102 14:14:46.402031  490066 config.go:182] Loaded profile config "default-k8s-diff-port-786183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 14:14:46.402140  490066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-786183
	I1102 14:14:46.419355  490066 main.go:143] libmachine: Using SSH client type: native
	I1102 14:14:46.419662  490066 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33446 <nil> <nil>}
	I1102 14:14:46.419683  490066 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1102 14:14:46.786489  490066 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1102 14:14:46.786511  490066 machine.go:97] duration metric: took 4.880333341s to provisionDockerMachine
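	The insecure-registry drop-in written above persists on the node and can be checked after the fact (using the minikube binary under test; a sketch, not part of the test run):

    out/minikube-linux-arm64 -p default-k8s-diff-port-786183 ssh -- cat /etc/sysconfig/crio.minikube
    out/minikube-linux-arm64 -p default-k8s-diff-port-786183 ssh -- systemctl is-active crio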
	I1102 14:14:46.786521  490066 client.go:176] duration metric: took 11.218710546s to LocalClient.Create
	I1102 14:14:46.786574  490066 start.go:167] duration metric: took 11.21877718s to libmachine.API.Create "default-k8s-diff-port-786183"
	I1102 14:14:46.786591  490066 start.go:293] postStartSetup for "default-k8s-diff-port-786183" (driver="docker")
	I1102 14:14:46.786603  490066 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1102 14:14:46.786733  490066 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1102 14:14:46.786803  490066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-786183
	I1102 14:14:46.805206  490066 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33446 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/default-k8s-diff-port-786183/id_rsa Username:docker}
	I1102 14:14:46.912392  490066 ssh_runner.go:195] Run: cat /etc/os-release
	I1102 14:14:46.915829  490066 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1102 14:14:46.915862  490066 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1102 14:14:46.915874  490066 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-293314/.minikube/addons for local assets ...
	I1102 14:14:46.915934  490066 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-293314/.minikube/files for local assets ...
	I1102 14:14:46.916031  490066 filesync.go:149] local asset: /home/jenkins/minikube-integration/21808-293314/.minikube/files/etc/ssl/certs/2951742.pem -> 2951742.pem in /etc/ssl/certs
	I1102 14:14:46.916144  490066 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1102 14:14:46.924309  490066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/files/etc/ssl/certs/2951742.pem --> /etc/ssl/certs/2951742.pem (1708 bytes)
	I1102 14:14:46.942084  490066 start.go:296] duration metric: took 155.470616ms for postStartSetup
	I1102 14:14:46.942497  490066 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-786183
	I1102 14:14:46.960688  490066 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/default-k8s-diff-port-786183/config.json ...
	I1102 14:14:46.960965  490066 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1102 14:14:46.961007  490066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-786183
	I1102 14:14:46.980852  490066 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33446 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/default-k8s-diff-port-786183/id_rsa Username:docker}
	I1102 14:14:47.084134  490066 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1102 14:14:47.089248  490066 start.go:128] duration metric: took 11.525366077s to createHost
	I1102 14:14:47.089278  490066 start.go:83] releasing machines lock for "default-k8s-diff-port-786183", held for 11.525496926s
	I1102 14:14:47.089354  490066 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-786183
	I1102 14:14:47.107841  490066 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/295174.pem (1338 bytes)
	W1102 14:14:47.107904  490066 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-293314/.minikube/certs/295174_empty.pem, impossibly tiny 0 bytes
	I1102 14:14:47.107914  490066 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca-key.pem (1679 bytes)
	I1102 14:14:47.107939  490066 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem (1082 bytes)
	I1102 14:14:47.107968  490066 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/cert.pem (1123 bytes)
	I1102 14:14:47.107996  490066 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/key.pem (1675 bytes)
	I1102 14:14:47.108051  490066 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/files/etc/ssl/certs/2951742.pem (1708 bytes)
	I1102 14:14:47.108118  490066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1102 14:14:47.108171  490066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-786183
	I1102 14:14:47.125024  490066 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33446 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/default-k8s-diff-port-786183/id_rsa Username:docker}
	I1102 14:14:47.242150  490066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/certs/295174.pem --> /usr/share/ca-certificates/295174.pem (1338 bytes)
	I1102 14:14:47.260782  490066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/files/etc/ssl/certs/2951742.pem --> /usr/share/ca-certificates/2951742.pem (1708 bytes)
	I1102 14:14:47.279687  490066 ssh_runner.go:195] Run: openssl version
	I1102 14:14:47.286024  490066 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1102 14:14:47.294655  490066 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1102 14:14:47.298702  490066 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  2 13:13 /usr/share/ca-certificates/minikubeCA.pem
	I1102 14:14:47.298770  490066 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1102 14:14:47.340012  490066 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1102 14:14:47.348661  490066 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/295174.pem && ln -fs /usr/share/ca-certificates/295174.pem /etc/ssl/certs/295174.pem"
	I1102 14:14:47.357423  490066 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/295174.pem
	I1102 14:14:47.361410  490066 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  2 13:20 /usr/share/ca-certificates/295174.pem
	I1102 14:14:47.361475  490066 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/295174.pem
	I1102 14:14:47.403431  490066 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/295174.pem /etc/ssl/certs/51391683.0"
	I1102 14:14:47.412162  490066 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2951742.pem && ln -fs /usr/share/ca-certificates/2951742.pem /etc/ssl/certs/2951742.pem"
	I1102 14:14:47.420619  490066 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2951742.pem
	I1102 14:14:47.424220  490066 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  2 13:20 /usr/share/ca-certificates/2951742.pem
	I1102 14:14:47.424334  490066 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2951742.pem
	I1102 14:14:47.465703  490066 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2951742.pem /etc/ssl/certs/3ec20f2e.0"
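	The `openssl x509 -hash` runs above compute the subject hashes that name the /etc/ssl/certs/*.0 symlinks (b5213941 for minikubeCA, 51391683 and 3ec20f2e for the test certs); on the node this shows up as:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # b5213941
    ls -l /etc/ssl/certs/b5213941.0    # -> /etc/ssl/certs/minikubeCA.pem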
	I1102 14:14:47.474056  490066 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-certificates >/dev/null 2>&1 && sudo update-ca-certificates || true"
	I1102 14:14:47.477342  490066 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-trust >/dev/null 2>&1 && sudo update-ca-trust extract || true"
	I1102 14:14:47.480748  490066 ssh_runner.go:195] Run: cat /version.json
	I1102 14:14:47.480824  490066 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1102 14:14:47.485077  490066 ssh_runner.go:195] Run: systemctl --version
	I1102 14:14:47.579961  490066 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1102 14:14:47.614885  490066 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1102 14:14:47.619685  490066 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1102 14:14:47.619777  490066 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1102 14:14:47.649903  490066 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1102 14:14:47.649927  490066 start.go:496] detecting cgroup driver to use...
	I1102 14:14:47.649959  490066 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1102 14:14:47.650010  490066 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1102 14:14:47.695386  490066 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1102 14:14:47.719002  490066 docker.go:218] disabling cri-docker service (if available) ...
	I1102 14:14:47.719070  490066 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1102 14:14:47.746058  490066 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1102 14:14:47.770255  490066 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1102 14:14:47.890370  490066 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1102 14:14:48.037601  490066 docker.go:234] disabling docker service ...
	I1102 14:14:48.037716  490066 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1102 14:14:48.060204  490066 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1102 14:14:48.080593  490066 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1102 14:14:48.205626  490066 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1102 14:14:48.339309  490066 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1102 14:14:48.352587  490066 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1102 14:14:48.368202  490066 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1102 14:14:48.368270  490066 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 14:14:48.377550  490066 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1102 14:14:48.377671  490066 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 14:14:48.386959  490066 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 14:14:48.396287  490066 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 14:14:48.405120  490066 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1102 14:14:48.413440  490066 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 14:14:48.423112  490066 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 14:14:48.436778  490066 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
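	The net effect of these sed passes on /etc/crio/crio.conf.d/02-crio.conf (the pause image, cgroup_manager = "cgroupfs", conmon_cgroup = "pod", and a default_sysctls entry opening unprivileged ports) can be reviewed on the node; a sketch using the binary under test:

    out/minikube-linux-arm64 -p default-k8s-diff-port-786183 ssh -- \
      sudo cat /etc/crio/crio.conf.d/02-crio.conf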
	I1102 14:14:48.445854  490066 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1102 14:14:48.453156  490066 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1102 14:14:48.460422  490066 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 14:14:48.581525  490066 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1102 14:14:48.715994  490066 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1102 14:14:48.716067  490066 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1102 14:14:48.720074  490066 start.go:564] Will wait 60s for crictl version
	I1102 14:14:48.720137  490066 ssh_runner.go:195] Run: which crictl
	I1102 14:14:48.723988  490066 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1102 14:14:48.752570  490066 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1102 14:14:48.752683  490066 ssh_runner.go:195] Run: crio --version
	I1102 14:14:48.781435  490066 ssh_runner.go:195] Run: crio --version
	I1102 14:14:48.818596  490066 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1102 14:14:48.821480  490066 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-786183 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1102 14:14:48.838453  490066 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1102 14:14:48.842462  490066 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1102 14:14:48.852529  490066 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-786183 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-786183 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1102 14:14:48.852654  490066 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1102 14:14:48.852712  490066 ssh_runner.go:195] Run: sudo crictl images --output json
	I1102 14:14:48.890248  490066 crio.go:514] all images are preloaded for cri-o runtime.
	I1102 14:14:48.890274  490066 crio.go:433] Images already preloaded, skipping extraction
	I1102 14:14:48.890328  490066 ssh_runner.go:195] Run: sudo crictl images --output json
	I1102 14:14:48.917073  490066 crio.go:514] all images are preloaded for cri-o runtime.
	I1102 14:14:48.917100  490066 cache_images.go:86] Images are preloaded, skipping loading
	I1102 14:14:48.917108  490066 kubeadm.go:935] updating node { 192.168.76.2 8444 v1.34.1 crio true true} ...
	I1102 14:14:48.917257  490066 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-786183 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-786183 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
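	The rendered unit and its drop-in (scp'd to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below) can be viewed exactly as systemd sees them; a sketch with the binary under test:

    out/minikube-linux-arm64 -p default-k8s-diff-port-786183 ssh -- systemctl cat kubelet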
	I1102 14:14:48.917358  490066 ssh_runner.go:195] Run: crio config
	I1102 14:14:48.983929  490066 cni.go:84] Creating CNI manager for ""
	I1102 14:14:48.983953  490066 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1102 14:14:48.983964  490066 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1102 14:14:48.984006  490066 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-786183 NodeName:default-k8s-diff-port-786183 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1102 14:14:48.984177  490066 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-786183"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
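	The multi-document kubeadm config above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) can be sanity-checked before init; a minimal sketch, assuming the path the runner writes it to below (/var/tmp/minikube/kubeadm.yaml):
	
	  # Validate the rendered config offline ("kubeadm config validate" exists since v1.26).
	  sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
	  # Or exercise the full init flow without persisting anything.
	  sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
	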
	I1102 14:14:48.984270  490066 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1102 14:14:48.994571  490066 binaries.go:44] Found k8s binaries, skipping transfer
	I1102 14:14:48.994729  490066 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1102 14:14:49.002739  490066 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1102 14:14:49.018108  490066 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1102 14:14:49.031735  490066 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1102 14:14:49.044443  490066 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1102 14:14:49.048068  490066 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1102 14:14:49.058187  490066 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 14:14:49.185674  490066 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1102 14:14:49.203410  490066 certs.go:69] Setting up /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/default-k8s-diff-port-786183 for IP: 192.168.76.2
	I1102 14:14:49.203500  490066 certs.go:195] generating shared ca certs ...
	I1102 14:14:49.203533  490066 certs.go:227] acquiring lock for ca certs: {Name:mkead50075949a3cdc798f9c0149a2bc2638cbbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 14:14:49.203728  490066 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21808-293314/.minikube/ca.key
	I1102 14:14:49.203794  490066 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21808-293314/.minikube/proxy-client-ca.key
	I1102 14:14:49.203817  490066 certs.go:257] generating profile certs ...
	I1102 14:14:49.203939  490066 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/default-k8s-diff-port-786183/client.key
	I1102 14:14:49.203973  490066 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/default-k8s-diff-port-786183/client.crt with IP's: []
	I1102 14:14:50.021982  490066 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/default-k8s-diff-port-786183/client.crt ...
	I1102 14:14:50.022018  490066 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/default-k8s-diff-port-786183/client.crt: {Name:mk0b693b2ed1b812a874396da59ccae17e30c876 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 14:14:50.022233  490066 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/default-k8s-diff-port-786183/client.key ...
	I1102 14:14:50.022252  490066 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/default-k8s-diff-port-786183/client.key: {Name:mk30ab7b7cf248b674e4ee84ffa949a1c170903c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 14:14:50.022358  490066 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/default-k8s-diff-port-786183/apiserver.key.995a17bc
	I1102 14:14:50.022379  490066 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/default-k8s-diff-port-786183/apiserver.crt.995a17bc with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1102 14:14:50.665800  490066 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/default-k8s-diff-port-786183/apiserver.crt.995a17bc ...
	I1102 14:14:50.665833  490066 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/default-k8s-diff-port-786183/apiserver.crt.995a17bc: {Name:mkd13c2dedd5c00a144b755ebdfad7d74feded71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 14:14:50.666025  490066 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/default-k8s-diff-port-786183/apiserver.key.995a17bc ...
	I1102 14:14:50.666040  490066 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/default-k8s-diff-port-786183/apiserver.key.995a17bc: {Name:mk6197a8069b1d21c0ee61aec89789fa53f03f37 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 14:14:50.666128  490066 certs.go:382] copying /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/default-k8s-diff-port-786183/apiserver.crt.995a17bc -> /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/default-k8s-diff-port-786183/apiserver.crt
	I1102 14:14:50.666203  490066 certs.go:386] copying /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/default-k8s-diff-port-786183/apiserver.key.995a17bc -> /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/default-k8s-diff-port-786183/apiserver.key
	I1102 14:14:50.666263  490066 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/default-k8s-diff-port-786183/proxy-client.key
	I1102 14:14:50.666283  490066 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/default-k8s-diff-port-786183/proxy-client.crt with IP's: []
	I1102 14:14:51.146040  490066 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/default-k8s-diff-port-786183/proxy-client.crt ...
	I1102 14:14:51.146073  490066 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/default-k8s-diff-port-786183/proxy-client.crt: {Name:mk88a12ef120f1fd6a897109a90675732d1d09e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 14:14:51.146263  490066 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/default-k8s-diff-port-786183/proxy-client.key ...
	I1102 14:14:51.146282  490066 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/default-k8s-diff-port-786183/proxy-client.key: {Name:mk8562a9c36333cf61b0324b8a49f547b354fe6c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 14:14:51.146494  490066 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/295174.pem (1338 bytes)
	W1102 14:14:51.146539  490066 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-293314/.minikube/certs/295174_empty.pem, impossibly tiny 0 bytes
	I1102 14:14:51.146548  490066 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca-key.pem (1679 bytes)
	I1102 14:14:51.146572  490066 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem (1082 bytes)
	I1102 14:14:51.146599  490066 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/cert.pem (1123 bytes)
	I1102 14:14:51.146647  490066 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/key.pem (1675 bytes)
	I1102 14:14:51.146696  490066 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/files/etc/ssl/certs/2951742.pem (1708 bytes)
	I1102 14:14:51.147397  490066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1102 14:14:51.169140  490066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1102 14:14:51.190180  490066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1102 14:14:51.208701  490066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1102 14:14:51.228438  490066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/default-k8s-diff-port-786183/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1102 14:14:51.246208  490066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/default-k8s-diff-port-786183/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1102 14:14:51.266990  490066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/default-k8s-diff-port-786183/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1102 14:14:51.284835  490066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/default-k8s-diff-port-786183/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1102 14:14:51.302968  490066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/certs/295174.pem --> /usr/share/ca-certificates/295174.pem (1338 bytes)
	I1102 14:14:51.320447  490066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/files/etc/ssl/certs/2951742.pem --> /usr/share/ca-certificates/2951742.pem (1708 bytes)
	I1102 14:14:51.338064  490066 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1102 14:14:51.356389  490066 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1102 14:14:51.369760  490066 ssh_runner.go:195] Run: openssl version
	I1102 14:14:51.376153  490066 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/295174.pem && ln -fs /usr/share/ca-certificates/295174.pem /etc/ssl/certs/295174.pem"
	I1102 14:14:51.384795  490066 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/295174.pem
	I1102 14:14:51.388581  490066 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  2 13:20 /usr/share/ca-certificates/295174.pem
	I1102 14:14:51.388696  490066 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/295174.pem
	I1102 14:14:51.432782  490066 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/295174.pem /etc/ssl/certs/51391683.0"
	I1102 14:14:51.440736  490066 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2951742.pem && ln -fs /usr/share/ca-certificates/2951742.pem /etc/ssl/certs/2951742.pem"
	I1102 14:14:51.452700  490066 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2951742.pem
	I1102 14:14:51.460810  490066 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  2 13:20 /usr/share/ca-certificates/2951742.pem
	I1102 14:14:51.460879  490066 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2951742.pem
	I1102 14:14:51.501780  490066 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2951742.pem /etc/ssl/certs/3ec20f2e.0"
	I1102 14:14:51.509911  490066 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1102 14:14:51.518355  490066 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1102 14:14:51.522176  490066 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  2 13:13 /usr/share/ca-certificates/minikubeCA.pem
	I1102 14:14:51.522281  490066 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1102 14:14:51.563406  490066 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
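	The hash-named symlinks above (51391683.0, 3ec20f2e.0, b5213941.0) follow OpenSSL's subject-hash convention: tools that verify against /etc/ssl/certs look a CA up by <subject-hash>.0. A minimal sketch of the same step the runner performs:
	
	  # Compute the subject hash and create the lookup symlink.
	  h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
	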
	I1102 14:14:51.571488  490066 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1102 14:14:51.574998  490066 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1102 14:14:51.575051  490066 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-786183 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-786183 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1102 14:14:51.575122  490066 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1102 14:14:51.575179  490066 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1102 14:14:51.602539  490066 cri.go:89] found id: ""
	I1102 14:14:51.602648  490066 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1102 14:14:51.610597  490066 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1102 14:14:51.618693  490066 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1102 14:14:51.618757  490066 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1102 14:14:51.626394  490066 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1102 14:14:51.626414  490066 kubeadm.go:158] found existing configuration files:
	
	I1102 14:14:51.626468  490066 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1102 14:14:51.634509  490066 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1102 14:14:51.634577  490066 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1102 14:14:51.642113  490066 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1102 14:14:51.650425  490066 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1102 14:14:51.650511  490066 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1102 14:14:51.657772  490066 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1102 14:14:51.666483  490066 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1102 14:14:51.666594  490066 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1102 14:14:51.675504  490066 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1102 14:14:51.683420  490066 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1102 14:14:51.683539  490066 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1102 14:14:51.691386  490066 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1102 14:14:51.731763  490066 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1102 14:14:51.732037  490066 kubeadm.go:319] [preflight] Running pre-flight checks
	I1102 14:14:51.756638  490066 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1102 14:14:51.756760  490066 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1102 14:14:51.756828  490066 kubeadm.go:319] OS: Linux
	I1102 14:14:51.756917  490066 kubeadm.go:319] CGROUPS_CPU: enabled
	I1102 14:14:51.757000  490066 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1102 14:14:51.757077  490066 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1102 14:14:51.757167  490066 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1102 14:14:51.757254  490066 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1102 14:14:51.757349  490066 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1102 14:14:51.757440  490066 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1102 14:14:51.757524  490066 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1102 14:14:51.757612  490066 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1102 14:14:51.829341  490066 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1102 14:14:51.829498  490066 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1102 14:14:51.829617  490066 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1102 14:14:51.838520  490066 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1102 14:14:50.171662  485527 node_ready.go:57] node "embed-certs-955646" has "Ready":"False" status (will retry)
	W1102 14:14:52.673029  485527 node_ready.go:57] node "embed-certs-955646" has "Ready":"False" status (will retry)
	I1102 14:14:51.844364  490066 out.go:252]   - Generating certificates and keys ...
	I1102 14:14:51.844466  490066 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1102 14:14:51.844545  490066 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1102 14:14:53.434739  490066 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1102 14:14:53.503953  490066 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1102 14:14:53.794232  490066 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1102 14:14:54.245314  490066 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1102 14:14:54.382113  490066 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1102 14:14:54.382490  490066 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-786183 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1102 14:14:54.701883  490066 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1102 14:14:54.702251  490066 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-786183 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1102 14:14:55.026364  490066 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1102 14:14:55.172785  490066 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	W1102 14:14:54.674349  485527 node_ready.go:57] node "embed-certs-955646" has "Ready":"False" status (will retry)
	I1102 14:14:56.673608  485527 node_ready.go:49] node "embed-certs-955646" is "Ready"
	I1102 14:14:56.673639  485527 node_ready.go:38] duration metric: took 40.505875293s for node "embed-certs-955646" to be "Ready" ...
	I1102 14:14:56.673653  485527 api_server.go:52] waiting for apiserver process to appear ...
	I1102 14:14:56.673713  485527 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1102 14:14:56.740136  485527 api_server.go:72] duration metric: took 41.745873559s to wait for apiserver process to appear ...
	I1102 14:14:56.740162  485527 api_server.go:88] waiting for apiserver healthz status ...
	I1102 14:14:56.740184  485527 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1102 14:14:56.754450  485527 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1102 14:14:56.755706  485527 api_server.go:141] control plane version: v1.34.1
	I1102 14:14:56.755732  485527 api_server.go:131] duration metric: took 15.561982ms to wait for apiserver health ...
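	The healthz probe above is a plain HTTPS GET and can be reproduced by hand; -k is needed because the apiserver cert chains to the cluster-local minikubeCA rather than a system CA:
	
	  curl -k https://192.168.85.2:8443/healthz
	  # prints "ok" on success, matching the 200 logged above
	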
	I1102 14:14:56.755742  485527 system_pods.go:43] waiting for kube-system pods to appear ...
	I1102 14:14:56.764833  485527 system_pods.go:59] 8 kube-system pods found
	I1102 14:14:56.764875  485527 system_pods.go:61] "coredns-66bc5c9577-h7hk7" [9aa7532b-e3d5-400c-aed7-e9a650360cbb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1102 14:14:56.764882  485527 system_pods.go:61] "etcd-embed-certs-955646" [87bf4ca4-25bc-43b4-8570-b5eca3eede89] Running
	I1102 14:14:56.764891  485527 system_pods.go:61] "kindnet-fvxzq" [9738d225-9797-4e3e-abf3-6f04f63c0a9b] Running
	I1102 14:14:56.764898  485527 system_pods.go:61] "kube-apiserver-embed-certs-955646" [b391b75e-95ca-494b-b02f-7f3d76d7b971] Running
	I1102 14:14:56.764905  485527 system_pods.go:61] "kube-controller-manager-embed-certs-955646" [59f3ec73-aedb-4421-84ce-2772c47a3388] Running
	I1102 14:14:56.764912  485527 system_pods.go:61] "kube-proxy-hg44j" [b0fcfb9f-3864-406a-b0ec-c9c56864fcbd] Running
	I1102 14:14:56.764917  485527 system_pods.go:61] "kube-scheduler-embed-certs-955646" [4e908594-08ca-438d-8277-b65e9e87ef49] Running
	I1102 14:14:56.764930  485527 system_pods.go:61] "storage-provisioner" [3b29b058-9c13-4c95-9c23-b738213b2020] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1102 14:14:56.764936  485527 system_pods.go:74] duration metric: took 9.188623ms to wait for pod list to return data ...
	I1102 14:14:56.764948  485527 default_sa.go:34] waiting for default service account to be created ...
	I1102 14:14:56.775639  485527 default_sa.go:45] found service account: "default"
	I1102 14:14:56.775668  485527 default_sa.go:55] duration metric: took 10.713421ms for default service account to be created ...
	I1102 14:14:56.775679  485527 system_pods.go:116] waiting for k8s-apps to be running ...
	I1102 14:14:56.779726  485527 system_pods.go:86] 8 kube-system pods found
	I1102 14:14:56.779760  485527 system_pods.go:89] "coredns-66bc5c9577-h7hk7" [9aa7532b-e3d5-400c-aed7-e9a650360cbb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1102 14:14:56.779770  485527 system_pods.go:89] "etcd-embed-certs-955646" [87bf4ca4-25bc-43b4-8570-b5eca3eede89] Running
	I1102 14:14:56.779781  485527 system_pods.go:89] "kindnet-fvxzq" [9738d225-9797-4e3e-abf3-6f04f63c0a9b] Running
	I1102 14:14:56.779786  485527 system_pods.go:89] "kube-apiserver-embed-certs-955646" [b391b75e-95ca-494b-b02f-7f3d76d7b971] Running
	I1102 14:14:56.779791  485527 system_pods.go:89] "kube-controller-manager-embed-certs-955646" [59f3ec73-aedb-4421-84ce-2772c47a3388] Running
	I1102 14:14:56.779796  485527 system_pods.go:89] "kube-proxy-hg44j" [b0fcfb9f-3864-406a-b0ec-c9c56864fcbd] Running
	I1102 14:14:56.779803  485527 system_pods.go:89] "kube-scheduler-embed-certs-955646" [4e908594-08ca-438d-8277-b65e9e87ef49] Running
	I1102 14:14:56.779810  485527 system_pods.go:89] "storage-provisioner" [3b29b058-9c13-4c95-9c23-b738213b2020] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1102 14:14:56.779834  485527 retry.go:31] will retry after 256.896165ms: missing components: kube-dns
	I1102 14:14:57.041370  485527 system_pods.go:86] 8 kube-system pods found
	I1102 14:14:57.041408  485527 system_pods.go:89] "coredns-66bc5c9577-h7hk7" [9aa7532b-e3d5-400c-aed7-e9a650360cbb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1102 14:14:57.041416  485527 system_pods.go:89] "etcd-embed-certs-955646" [87bf4ca4-25bc-43b4-8570-b5eca3eede89] Running
	I1102 14:14:57.041423  485527 system_pods.go:89] "kindnet-fvxzq" [9738d225-9797-4e3e-abf3-6f04f63c0a9b] Running
	I1102 14:14:57.041427  485527 system_pods.go:89] "kube-apiserver-embed-certs-955646" [b391b75e-95ca-494b-b02f-7f3d76d7b971] Running
	I1102 14:14:57.041432  485527 system_pods.go:89] "kube-controller-manager-embed-certs-955646" [59f3ec73-aedb-4421-84ce-2772c47a3388] Running
	I1102 14:14:57.041436  485527 system_pods.go:89] "kube-proxy-hg44j" [b0fcfb9f-3864-406a-b0ec-c9c56864fcbd] Running
	I1102 14:14:57.041441  485527 system_pods.go:89] "kube-scheduler-embed-certs-955646" [4e908594-08ca-438d-8277-b65e9e87ef49] Running
	I1102 14:14:57.041448  485527 system_pods.go:89] "storage-provisioner" [3b29b058-9c13-4c95-9c23-b738213b2020] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1102 14:14:57.041465  485527 retry.go:31] will retry after 331.852506ms: missing components: kube-dns
	I1102 14:14:57.378819  485527 system_pods.go:86] 8 kube-system pods found
	I1102 14:14:57.378853  485527 system_pods.go:89] "coredns-66bc5c9577-h7hk7" [9aa7532b-e3d5-400c-aed7-e9a650360cbb] Running
	I1102 14:14:57.378860  485527 system_pods.go:89] "etcd-embed-certs-955646" [87bf4ca4-25bc-43b4-8570-b5eca3eede89] Running
	I1102 14:14:57.378864  485527 system_pods.go:89] "kindnet-fvxzq" [9738d225-9797-4e3e-abf3-6f04f63c0a9b] Running
	I1102 14:14:57.378868  485527 system_pods.go:89] "kube-apiserver-embed-certs-955646" [b391b75e-95ca-494b-b02f-7f3d76d7b971] Running
	I1102 14:14:57.378873  485527 system_pods.go:89] "kube-controller-manager-embed-certs-955646" [59f3ec73-aedb-4421-84ce-2772c47a3388] Running
	I1102 14:14:57.378887  485527 system_pods.go:89] "kube-proxy-hg44j" [b0fcfb9f-3864-406a-b0ec-c9c56864fcbd] Running
	I1102 14:14:57.378892  485527 system_pods.go:89] "kube-scheduler-embed-certs-955646" [4e908594-08ca-438d-8277-b65e9e87ef49] Running
	I1102 14:14:57.378901  485527 system_pods.go:89] "storage-provisioner" [3b29b058-9c13-4c95-9c23-b738213b2020] Running
	I1102 14:14:57.378909  485527 system_pods.go:126] duration metric: took 603.223909ms to wait for k8s-apps to be running ...
	I1102 14:14:57.378921  485527 system_svc.go:44] waiting for kubelet service to be running ....
	I1102 14:14:57.378979  485527 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1102 14:14:57.396219  485527 system_svc.go:56] duration metric: took 17.279431ms WaitForService to wait for kubelet
	I1102 14:14:57.396250  485527 kubeadm.go:587] duration metric: took 42.401991901s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1102 14:14:57.396270  485527 node_conditions.go:102] verifying NodePressure condition ...
	I1102 14:14:57.400020  485527 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1102 14:14:57.400055  485527 node_conditions.go:123] node cpu capacity is 2
	I1102 14:14:57.400069  485527 node_conditions.go:105] duration metric: took 3.792848ms to run NodePressure ...
	I1102 14:14:57.400087  485527 start.go:242] waiting for startup goroutines ...
	I1102 14:14:57.400107  485527 start.go:247] waiting for cluster config update ...
	I1102 14:14:57.400119  485527 start.go:256] writing updated cluster config ...
	I1102 14:14:57.400430  485527 ssh_runner.go:195] Run: rm -f paused
	I1102 14:14:57.407063  485527 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1102 14:14:57.412563  485527 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-h7hk7" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 14:14:57.418096  485527 pod_ready.go:94] pod "coredns-66bc5c9577-h7hk7" is "Ready"
	I1102 14:14:57.418126  485527 pod_ready.go:86] duration metric: took 5.448567ms for pod "coredns-66bc5c9577-h7hk7" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 14:14:57.420470  485527 pod_ready.go:83] waiting for pod "etcd-embed-certs-955646" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 14:14:57.426330  485527 pod_ready.go:94] pod "etcd-embed-certs-955646" is "Ready"
	I1102 14:14:57.426357  485527 pod_ready.go:86] duration metric: took 5.856875ms for pod "etcd-embed-certs-955646" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 14:14:57.429130  485527 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-955646" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 14:14:57.434406  485527 pod_ready.go:94] pod "kube-apiserver-embed-certs-955646" is "Ready"
	I1102 14:14:57.434435  485527 pod_ready.go:86] duration metric: took 5.27652ms for pod "kube-apiserver-embed-certs-955646" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 14:14:57.437114  485527 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-955646" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 14:14:55.805087  490066 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1102 14:14:55.805385  490066 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1102 14:14:56.137348  490066 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1102 14:14:56.943659  490066 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1102 14:14:57.529389  490066 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1102 14:14:57.767986  490066 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1102 14:14:58.749753  490066 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1102 14:14:58.750554  490066 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1102 14:14:58.753298  490066 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1102 14:14:57.812848  485527 pod_ready.go:94] pod "kube-controller-manager-embed-certs-955646" is "Ready"
	I1102 14:14:57.812883  485527 pod_ready.go:86] duration metric: took 375.742361ms for pod "kube-controller-manager-embed-certs-955646" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 14:14:58.011863  485527 pod_ready.go:83] waiting for pod "kube-proxy-hg44j" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 14:14:58.411859  485527 pod_ready.go:94] pod "kube-proxy-hg44j" is "Ready"
	I1102 14:14:58.411933  485527 pod_ready.go:86] duration metric: took 399.990779ms for pod "kube-proxy-hg44j" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 14:14:58.612202  485527 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-955646" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 14:14:59.012154  485527 pod_ready.go:94] pod "kube-scheduler-embed-certs-955646" is "Ready"
	I1102 14:14:59.012180  485527 pod_ready.go:86] duration metric: took 399.904403ms for pod "kube-scheduler-embed-certs-955646" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 14:14:59.012193  485527 pod_ready.go:40] duration metric: took 1.605052258s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1102 14:14:59.077961  485527 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1102 14:14:59.081763  485527 out.go:179] * Done! kubectl is now configured to use "embed-certs-955646" cluster and "default" namespace by default
	I1102 14:14:58.756898  490066 out.go:252]   - Booting up control plane ...
	I1102 14:14:58.756995  490066 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1102 14:14:58.757072  490066 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1102 14:14:58.757138  490066 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1102 14:14:58.772910  490066 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1102 14:14:58.773228  490066 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1102 14:14:58.781907  490066 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1102 14:14:58.782215  490066 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1102 14:14:58.782441  490066 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1102 14:14:58.918811  490066 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1102 14:14:58.918940  490066 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1102 14:15:01.420102  490066 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 2.501430052s
	I1102 14:15:01.424081  490066 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1102 14:15:01.424184  490066 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8444/livez
	I1102 14:15:01.424588  490066 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1102 14:15:01.424677  490066 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1102 14:15:04.387050  490066 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.962531435s
	I1102 14:15:06.630049  490066 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.205976573s
	I1102 14:15:07.434825  490066 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.008426299s
	I1102 14:15:07.482108  490066 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1102 14:15:07.551044  490066 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1102 14:15:07.616049  490066 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1102 14:15:07.616287  490066 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-786183 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1102 14:15:07.642507  490066 kubeadm.go:319] [bootstrap-token] Using token: qwxcrv.qw2rowesb8wyjb0u
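	The bootstrap token above (ttl 24h, usages signing/authentication per the InitConfiguration earlier in this log) is what a joining node would present; the standard kubeadm subcommands to inspect or mint one:
	
	  sudo kubeadm token list
	  sudo kubeadm token create --print-join-command
	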
	
	
	==> CRI-O <==
	Nov 02 14:14:56 embed-certs-955646 crio[875]: time="2025-11-02T14:14:56.721096202Z" level=info msg="Created container 08bf0edebb4b96bd29fa5106f654544bc87e48c5e57ba38cfdc5a4d70556f4d8: kube-system/coredns-66bc5c9577-h7hk7/coredns" id=d1439f84-08b6-4e35-aaa4-80c6c7a290be name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 14:14:56 embed-certs-955646 crio[875]: time="2025-11-02T14:14:56.724577359Z" level=info msg="Starting container: 08bf0edebb4b96bd29fa5106f654544bc87e48c5e57ba38cfdc5a4d70556f4d8" id=e3b759bf-30d9-49c5-b5f7-4d9f71dec5d0 name=/runtime.v1.RuntimeService/StartContainer
	Nov 02 14:14:56 embed-certs-955646 crio[875]: time="2025-11-02T14:14:56.728638328Z" level=info msg="Started container" PID=1768 containerID=08bf0edebb4b96bd29fa5106f654544bc87e48c5e57ba38cfdc5a4d70556f4d8 description=kube-system/coredns-66bc5c9577-h7hk7/coredns id=e3b759bf-30d9-49c5-b5f7-4d9f71dec5d0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=45d8676008f0156e56ba53ed8b0568fab3c1a09851e3ed744712966221917793
	Nov 02 14:14:59 embed-certs-955646 crio[875]: time="2025-11-02T14:14:59.630188604Z" level=info msg="Running pod sandbox: default/busybox/POD" id=27aa1341-0c0b-4932-84fc-a177aa7068ff name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 02 14:14:59 embed-certs-955646 crio[875]: time="2025-11-02T14:14:59.630262237Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 14:14:59 embed-certs-955646 crio[875]: time="2025-11-02T14:14:59.649787482Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:98b0d338fd18b322e23169dbde31bb4b537cb35fa44678602c8df1fb9da4a5a7 UID:a6c0c8bf-d346-459c-bd70-f9b18f1f6a71 NetNS:/var/run/netns/a1b117ed-8b98-4b3d-8ca8-0ebafb2f9c5f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40004cd2f0}] Aliases:map[]}"
	Nov 02 14:14:59 embed-certs-955646 crio[875]: time="2025-11-02T14:14:59.649830518Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 02 14:14:59 embed-certs-955646 crio[875]: time="2025-11-02T14:14:59.674116373Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:98b0d338fd18b322e23169dbde31bb4b537cb35fa44678602c8df1fb9da4a5a7 UID:a6c0c8bf-d346-459c-bd70-f9b18f1f6a71 NetNS:/var/run/netns/a1b117ed-8b98-4b3d-8ca8-0ebafb2f9c5f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x40004cd2f0}] Aliases:map[]}"
	Nov 02 14:14:59 embed-certs-955646 crio[875]: time="2025-11-02T14:14:59.674284851Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 02 14:14:59 embed-certs-955646 crio[875]: time="2025-11-02T14:14:59.680824292Z" level=info msg="Ran pod sandbox 98b0d338fd18b322e23169dbde31bb4b537cb35fa44678602c8df1fb9da4a5a7 with infra container: default/busybox/POD" id=27aa1341-0c0b-4932-84fc-a177aa7068ff name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 02 14:14:59 embed-certs-955646 crio[875]: time="2025-11-02T14:14:59.681731632Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=3d7b1f8b-1b34-41e4-8766-0d8838aa7b7f name=/runtime.v1.ImageService/ImageStatus
	Nov 02 14:14:59 embed-certs-955646 crio[875]: time="2025-11-02T14:14:59.681846177Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=3d7b1f8b-1b34-41e4-8766-0d8838aa7b7f name=/runtime.v1.ImageService/ImageStatus
	Nov 02 14:14:59 embed-certs-955646 crio[875]: time="2025-11-02T14:14:59.681915322Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=3d7b1f8b-1b34-41e4-8766-0d8838aa7b7f name=/runtime.v1.ImageService/ImageStatus
	Nov 02 14:14:59 embed-certs-955646 crio[875]: time="2025-11-02T14:14:59.684930726Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b1c7ce03-764a-4f6b-ad8f-5b8e26d39869 name=/runtime.v1.ImageService/PullImage
	Nov 02 14:14:59 embed-certs-955646 crio[875]: time="2025-11-02T14:14:59.688869249Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 02 14:15:02 embed-certs-955646 crio[875]: time="2025-11-02T14:15:02.023439362Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=b1c7ce03-764a-4f6b-ad8f-5b8e26d39869 name=/runtime.v1.ImageService/PullImage
	Nov 02 14:15:02 embed-certs-955646 crio[875]: time="2025-11-02T14:15:02.024595164Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=9404a1d7-fd01-461d-ad4e-fe3e23d78123 name=/runtime.v1.ImageService/ImageStatus
	Nov 02 14:15:02 embed-certs-955646 crio[875]: time="2025-11-02T14:15:02.028929809Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=391cf62e-565c-46b1-9b7e-7d9f3e0151e2 name=/runtime.v1.ImageService/ImageStatus
	Nov 02 14:15:02 embed-certs-955646 crio[875]: time="2025-11-02T14:15:02.036320681Z" level=info msg="Creating container: default/busybox/busybox" id=eab7c53c-8a10-4bce-b672-99358363f177 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 14:15:02 embed-certs-955646 crio[875]: time="2025-11-02T14:15:02.036708738Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 14:15:02 embed-certs-955646 crio[875]: time="2025-11-02T14:15:02.044470582Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 14:15:02 embed-certs-955646 crio[875]: time="2025-11-02T14:15:02.045130297Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 14:15:02 embed-certs-955646 crio[875]: time="2025-11-02T14:15:02.068816927Z" level=info msg="Created container 61d2445b31829d68b8e72e9d5700bff365518116ad5f03e3bb072efd8755d336: default/busybox/busybox" id=eab7c53c-8a10-4bce-b672-99358363f177 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 14:15:02 embed-certs-955646 crio[875]: time="2025-11-02T14:15:02.070685523Z" level=info msg="Starting container: 61d2445b31829d68b8e72e9d5700bff365518116ad5f03e3bb072efd8755d336" id=0017b84a-6ad5-4b6a-b82f-ca416bc2d4d9 name=/runtime.v1.RuntimeService/StartContainer
	Nov 02 14:15:02 embed-certs-955646 crio[875]: time="2025-11-02T14:15:02.07984743Z" level=info msg="Started container" PID=1830 containerID=61d2445b31829d68b8e72e9d5700bff365518116ad5f03e3bb072efd8755d336 description=default/busybox/busybox id=0017b84a-6ad5-4b6a-b82f-ca416bc2d4d9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=98b0d338fd18b322e23169dbde31bb4b537cb35fa44678602c8df1fb9da4a5a7
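	The sandbox/pull/create/start sequence above is the normal CRI lifecycle; it can be cross-checked with crictl, the same CLI the runner invokes earlier in this log:
	
	  sudo crictl pods          # pod sandboxes (e.g. 98b0d338fd18b for default/busybox)
	  sudo crictl ps -a         # containers, matching the status table below
	  sudo crictl images        # pulled images, including gcr.io/k8s-minikube/busybox
	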
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	61d2445b31829       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   7 seconds ago        Running             busybox                   0                   98b0d338fd18b       busybox                                      default
	08bf0edebb4b9       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      12 seconds ago       Running             coredns                   0                   45d8676008f01       coredns-66bc5c9577-h7hk7                     kube-system
	e1ce27a08834f       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      12 seconds ago       Running             storage-provisioner       0                   55790ece21861       storage-provisioner                          kube-system
	3cd38c2ed05fa       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      54 seconds ago       Running             kube-proxy                0                   713666c6fefac       kube-proxy-hg44j                             kube-system
	d5c7881bcc4ef       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      54 seconds ago       Running             kindnet-cni               0                   c646158eba436       kindnet-fvxzq                                kube-system
	6f3b602d85654       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      About a minute ago   Running             kube-scheduler            0                   8a7355d2892e4       kube-scheduler-embed-certs-955646            kube-system
	5dc45a9accc73       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      About a minute ago   Running             kube-controller-manager   0                   b47ea7c36cece       kube-controller-manager-embed-certs-955646   kube-system
	006dfe02e3f85       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      About a minute ago   Running             kube-apiserver            0                   6e0a2c9f996ce       kube-apiserver-embed-certs-955646            kube-system
	06b84b3935676       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      About a minute ago   Running             etcd                      0                   3220e92d28c1d       etcd-embed-certs-955646                      kube-system
	
	
	==> coredns [08bf0edebb4b96bd29fa5106f654544bc87e48c5e57ba38cfdc5a4d70556f4d8] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53334 - 14003 "HINFO IN 3236274219859109171.699469388865133290. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.019548433s
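	The NXDOMAIN line above is CoreDNS's startup self-check (a HINFO query against itself). Resolution through the cluster DNS service can be verified directly; a sketch assuming the conventional kube-dns ClusterIP 10.96.0.10 carved from the 10.96.0.0/12 service CIDR used here:
	
	  nslookup kubernetes.default.svc.cluster.local 10.96.0.10
	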
	
	
	==> describe nodes <==
	Name:               embed-certs-955646
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-955646
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=170a9221ec214abbddb4c7cdac340516a92b239a
	                    minikube.k8s.io/name=embed-certs-955646
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_02T14_14_11_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 02 Nov 2025 14:14:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-955646
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 02 Nov 2025 14:15:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 02 Nov 2025 14:14:56 +0000   Sun, 02 Nov 2025 14:14:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 02 Nov 2025 14:14:56 +0000   Sun, 02 Nov 2025 14:14:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 02 Nov 2025 14:14:56 +0000   Sun, 02 Nov 2025 14:14:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 02 Nov 2025 14:14:56 +0000   Sun, 02 Nov 2025 14:14:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-955646
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                b99c9ad5-2cef-4b58-868b-a11cc5355016
	  Boot ID:                    4c303cf4-c0b6-4c0d-aa1a-d7509feb15e7
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-h7hk7                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     55s
	  kube-system                 etcd-embed-certs-955646                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         59s
	  kube-system                 kindnet-fvxzq                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      55s
	  kube-system                 kube-apiserver-embed-certs-955646             250m (12%)    0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 kube-controller-manager-embed-certs-955646    200m (10%)    0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 kube-proxy-hg44j                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kube-system                 kube-scheduler-embed-certs-955646             100m (5%)     0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 53s                kube-proxy       
	  Normal   Starting                 68s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 68s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  67s (x8 over 67s)  kubelet          Node embed-certs-955646 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    67s (x8 over 67s)  kubelet          Node embed-certs-955646 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     67s (x8 over 67s)  kubelet          Node embed-certs-955646 status is now: NodeHasSufficientPID
	  Normal   Starting                 60s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 60s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  59s                kubelet          Node embed-certs-955646 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    59s                kubelet          Node embed-certs-955646 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     59s                kubelet          Node embed-certs-955646 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           56s                node-controller  Node embed-certs-955646 event: Registered Node embed-certs-955646 in Controller
	  Normal   NodeReady                13s                kubelet          Node embed-certs-955646 status is now: NodeReady
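	The node dump above is the output of "kubectl describe node"; condensed equivalents for spot checks:
	
	  kubectl get node embed-certs-955646 -o wide
	  kubectl get events --field-selector involvedObject.name=embed-certs-955646
	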
	
	
	==> dmesg <==
	[Nov 2 13:54] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:55] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:56] overlayfs: idmapped layers are currently not supported
	[  +3.515963] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:57] overlayfs: idmapped layers are currently not supported
	[ +24.836033] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:58] overlayfs: idmapped layers are currently not supported
	[ +23.362553] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:59] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:01] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:02] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:03] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:04] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:06] overlayfs: idmapped layers are currently not supported
	[ +50.469589] kauditd_printk_skb: 8 callbacks suppressed
	[Nov 2 14:07] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:08] overlayfs: idmapped layers are currently not supported
	[ +11.089512] overlayfs: idmapped layers are currently not supported
	[ +33.821233] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:09] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:10] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:11] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:13] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:14] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:15] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [06b84b3935676cfe3a519dc19dc502fc6eb4b14b680e69d26f2ead87687c3170] <==
	{"level":"warn","ts":"2025-11-02T14:14:05.088037Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:14:05.126686Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:14:05.154777Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:14:05.182298Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:14:05.210746Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:14:05.277126Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:14:05.296029Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:14:05.312571Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:14:05.337538Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36716","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:14:05.361308Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:14:05.401657Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:14:05.417336Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:14:05.469302Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:14:05.485330Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:14:05.507345Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:14:05.527034Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:14:05.536308Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:14:05.553309Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:14:05.570153Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:14:05.616005Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:14:05.620700Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:14:05.642656Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36952","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:14:05.660056Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36980","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:14:05.680698Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:14:05.785698Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37016","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 14:15:09 up  2:57,  0 user,  load average: 3.88, 3.43, 2.93
	Linux embed-certs-955646 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d5c7881bcc4efbfcbd3319b6ba47c6f0e7e65f555576e7c49d2f4030a7bd83ea] <==
	I1102 14:14:15.417320       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1102 14:14:15.417696       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1102 14:14:15.417827       1 main.go:148] setting mtu 1500 for CNI 
	I1102 14:14:15.418097       1 main.go:178] kindnetd IP family: "ipv4"
	I1102 14:14:15.418120       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-02T14:14:15Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1102 14:14:15.625363       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1102 14:14:15.625381       1 controller.go:381] "Waiting for informer caches to sync"
	I1102 14:14:15.625389       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1102 14:14:15.631795       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1102 14:14:45.626130       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1102 14:14:45.626245       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1102 14:14:45.631641       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1102 14:14:45.631788       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1102 14:14:47.025788       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1102 14:14:47.025821       1 metrics.go:72] Registering metrics
	I1102 14:14:47.025874       1 controller.go:711] "Syncing nftables rules"
	I1102 14:14:55.632348       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1102 14:14:55.632409       1 main.go:301] handling current node
	I1102 14:15:05.626672       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1102 14:15:05.626714       1 main.go:301] handling current node
	
	
	==> kube-apiserver [006dfe02e3f85f13f56062b19b6041ac2db3ad206bbc55bebe70b9f7c0d8c111] <==
	I1102 14:14:06.945403       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1102 14:14:06.945975       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1102 14:14:06.946344       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1102 14:14:06.995762       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1102 14:14:07.003318       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1102 14:14:07.024816       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1102 14:14:07.027184       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1102 14:14:07.135538       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1102 14:14:07.551252       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1102 14:14:07.562756       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1102 14:14:07.562861       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1102 14:14:08.386298       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1102 14:14:08.451438       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1102 14:14:08.564586       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1102 14:14:08.574157       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1102 14:14:08.575399       1 controller.go:667] quota admission added evaluator for: endpoints
	I1102 14:14:08.588760       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1102 14:14:08.713186       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1102 14:14:09.850297       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1102 14:14:09.878885       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1102 14:14:09.895891       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1102 14:14:14.621632       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1102 14:14:14.626305       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1102 14:14:14.667202       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1102 14:14:14.769225       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [5dc45a9accc733ca67cd62adf158a96ad4af17a88db539c2ed1248813bd1206b] <==
	I1102 14:14:13.810752       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1102 14:14:13.813182       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1102 14:14:13.817270       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1102 14:14:13.817780       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1102 14:14:13.817885       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1102 14:14:13.818065       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1102 14:14:13.818228       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1102 14:14:13.818259       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1102 14:14:13.818273       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1102 14:14:13.818577       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1102 14:14:13.818597       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1102 14:14:13.819024       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1102 14:14:13.819084       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1102 14:14:13.819217       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1102 14:14:13.819257       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1102 14:14:13.819305       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1102 14:14:13.821799       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1102 14:14:13.826121       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1102 14:14:13.839343       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1102 14:14:13.839969       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1102 14:14:13.853314       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-955646" podCIDRs=["10.244.0.0/24"]
	I1102 14:14:13.860811       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1102 14:14:13.863321       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1102 14:14:13.870155       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1102 14:14:58.776152       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [3cd38c2ed05fabe707045e6c835c73da28c29397094dd972ee9aa7e12b3bee5d] <==
	I1102 14:14:15.364069       1 server_linux.go:53] "Using iptables proxy"
	I1102 14:14:15.464501       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1102 14:14:15.565246       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1102 14:14:15.565290       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1102 14:14:15.565389       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1102 14:14:15.601192       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1102 14:14:15.601253       1 server_linux.go:132] "Using iptables Proxier"
	I1102 14:14:15.609564       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1102 14:14:15.609894       1 server.go:527] "Version info" version="v1.34.1"
	I1102 14:14:15.609912       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1102 14:14:15.611852       1 config.go:200] "Starting service config controller"
	I1102 14:14:15.611871       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1102 14:14:15.616714       1 config.go:106] "Starting endpoint slice config controller"
	I1102 14:14:15.616738       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1102 14:14:15.616758       1 config.go:403] "Starting serviceCIDR config controller"
	I1102 14:14:15.616762       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1102 14:14:15.617340       1 config.go:309] "Starting node config controller"
	I1102 14:14:15.617377       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1102 14:14:15.617386       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1102 14:14:15.713165       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1102 14:14:15.717506       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1102 14:14:15.717536       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [6f3b602d8565494a9c728b7dbfc8049e0df59fcd463eb84c21f0d17c2850c746] <==
	I1102 14:14:07.234094       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1102 14:14:07.236868       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1102 14:14:07.237008       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1102 14:14:07.238051       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1102 14:14:07.241468       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1102 14:14:07.241554       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1102 14:14:07.251703       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1102 14:14:07.251804       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1102 14:14:07.251868       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1102 14:14:07.251928       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1102 14:14:07.253076       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1102 14:14:07.253180       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1102 14:14:07.253321       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1102 14:14:07.253385       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1102 14:14:07.253433       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1102 14:14:07.253477       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1102 14:14:07.253599       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1102 14:14:07.253717       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1102 14:14:07.253798       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1102 14:14:07.253851       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1102 14:14:07.253958       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1102 14:14:07.254025       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1102 14:14:08.069066       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1102 14:14:08.113779       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1102 14:14:10.135186       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 02 14:14:13 embed-certs-955646 kubelet[1346]: I1102 14:14:13.939467    1346 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 02 14:14:13 embed-certs-955646 kubelet[1346]: I1102 14:14:13.940340    1346 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 02 14:14:14 embed-certs-955646 kubelet[1346]: I1102 14:14:14.771762    1346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9738d225-9797-4e3e-abf3-6f04f63c0a9b-lib-modules\") pod \"kindnet-fvxzq\" (UID: \"9738d225-9797-4e3e-abf3-6f04f63c0a9b\") " pod="kube-system/kindnet-fvxzq"
	Nov 02 14:14:14 embed-certs-955646 kubelet[1346]: I1102 14:14:14.771826    1346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqn6r\" (UniqueName: \"kubernetes.io/projected/b0fcfb9f-3864-406a-b0ec-c9c56864fcbd-kube-api-access-pqn6r\") pod \"kube-proxy-hg44j\" (UID: \"b0fcfb9f-3864-406a-b0ec-c9c56864fcbd\") " pod="kube-system/kube-proxy-hg44j"
	Nov 02 14:14:14 embed-certs-955646 kubelet[1346]: I1102 14:14:14.771867    1346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/9738d225-9797-4e3e-abf3-6f04f63c0a9b-cni-cfg\") pod \"kindnet-fvxzq\" (UID: \"9738d225-9797-4e3e-abf3-6f04f63c0a9b\") " pod="kube-system/kindnet-fvxzq"
	Nov 02 14:14:14 embed-certs-955646 kubelet[1346]: I1102 14:14:14.771888    1346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b0fcfb9f-3864-406a-b0ec-c9c56864fcbd-kube-proxy\") pod \"kube-proxy-hg44j\" (UID: \"b0fcfb9f-3864-406a-b0ec-c9c56864fcbd\") " pod="kube-system/kube-proxy-hg44j"
	Nov 02 14:14:14 embed-certs-955646 kubelet[1346]: I1102 14:14:14.771910    1346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b0fcfb9f-3864-406a-b0ec-c9c56864fcbd-xtables-lock\") pod \"kube-proxy-hg44j\" (UID: \"b0fcfb9f-3864-406a-b0ec-c9c56864fcbd\") " pod="kube-system/kube-proxy-hg44j"
	Nov 02 14:14:14 embed-certs-955646 kubelet[1346]: I1102 14:14:14.772020    1346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b0fcfb9f-3864-406a-b0ec-c9c56864fcbd-lib-modules\") pod \"kube-proxy-hg44j\" (UID: \"b0fcfb9f-3864-406a-b0ec-c9c56864fcbd\") " pod="kube-system/kube-proxy-hg44j"
	Nov 02 14:14:14 embed-certs-955646 kubelet[1346]: I1102 14:14:14.772074    1346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9738d225-9797-4e3e-abf3-6f04f63c0a9b-xtables-lock\") pod \"kindnet-fvxzq\" (UID: \"9738d225-9797-4e3e-abf3-6f04f63c0a9b\") " pod="kube-system/kindnet-fvxzq"
	Nov 02 14:14:14 embed-certs-955646 kubelet[1346]: I1102 14:14:14.772149    1346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvg5j\" (UniqueName: \"kubernetes.io/projected/9738d225-9797-4e3e-abf3-6f04f63c0a9b-kube-api-access-cvg5j\") pod \"kindnet-fvxzq\" (UID: \"9738d225-9797-4e3e-abf3-6f04f63c0a9b\") " pod="kube-system/kindnet-fvxzq"
	Nov 02 14:14:14 embed-certs-955646 kubelet[1346]: I1102 14:14:14.892278    1346 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 02 14:14:15 embed-certs-955646 kubelet[1346]: W1102 14:14:15.085080    1346 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/30c758ef671a78927180fb97bde8b1a9a094e365398db33e109ee628c96e6553/crio-c646158eba436a00de4b4d7e946abca6dcacd120728c539f4416c36350d3fc29 WatchSource:0}: Error finding container c646158eba436a00de4b4d7e946abca6dcacd120728c539f4416c36350d3fc29: Status 404 returned error can't find the container with id c646158eba436a00de4b4d7e946abca6dcacd120728c539f4416c36350d3fc29
	Nov 02 14:14:15 embed-certs-955646 kubelet[1346]: W1102 14:14:15.096033    1346 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/30c758ef671a78927180fb97bde8b1a9a094e365398db33e109ee628c96e6553/crio-713666c6fefaca3598b48ef6e4006c1125b653057ea9f279ce52b3e0f14020b8 WatchSource:0}: Error finding container 713666c6fefaca3598b48ef6e4006c1125b653057ea9f279ce52b3e0f14020b8: Status 404 returned error can't find the container with id 713666c6fefaca3598b48ef6e4006c1125b653057ea9f279ce52b3e0f14020b8
	Nov 02 14:14:16 embed-certs-955646 kubelet[1346]: I1102 14:14:16.044331    1346 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-hg44j" podStartSLOduration=2.044305508 podStartE2EDuration="2.044305508s" podCreationTimestamp="2025-11-02 14:14:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-02 14:14:16.015279937 +0000 UTC m=+6.346312769" watchObservedRunningTime="2025-11-02 14:14:16.044305508 +0000 UTC m=+6.375338299"
	Nov 02 14:14:16 embed-certs-955646 kubelet[1346]: I1102 14:14:16.044638    1346 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-fvxzq" podStartSLOduration=2.04462716 podStartE2EDuration="2.04462716s" podCreationTimestamp="2025-11-02 14:14:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-02 14:14:16.038945058 +0000 UTC m=+6.369977857" watchObservedRunningTime="2025-11-02 14:14:16.04462716 +0000 UTC m=+6.375659975"
	Nov 02 14:14:56 embed-certs-955646 kubelet[1346]: I1102 14:14:56.169246    1346 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 02 14:14:56 embed-certs-955646 kubelet[1346]: I1102 14:14:56.316529    1346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/3b29b058-9c13-4c95-9c23-b738213b2020-tmp\") pod \"storage-provisioner\" (UID: \"3b29b058-9c13-4c95-9c23-b738213b2020\") " pod="kube-system/storage-provisioner"
	Nov 02 14:14:56 embed-certs-955646 kubelet[1346]: I1102 14:14:56.316734    1346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cx5zz\" (UniqueName: \"kubernetes.io/projected/3b29b058-9c13-4c95-9c23-b738213b2020-kube-api-access-cx5zz\") pod \"storage-provisioner\" (UID: \"3b29b058-9c13-4c95-9c23-b738213b2020\") " pod="kube-system/storage-provisioner"
	Nov 02 14:14:56 embed-certs-955646 kubelet[1346]: I1102 14:14:56.316855    1346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9aa7532b-e3d5-400c-aed7-e9a650360cbb-config-volume\") pod \"coredns-66bc5c9577-h7hk7\" (UID: \"9aa7532b-e3d5-400c-aed7-e9a650360cbb\") " pod="kube-system/coredns-66bc5c9577-h7hk7"
	Nov 02 14:14:56 embed-certs-955646 kubelet[1346]: I1102 14:14:56.316956    1346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vxqx\" (UniqueName: \"kubernetes.io/projected/9aa7532b-e3d5-400c-aed7-e9a650360cbb-kube-api-access-2vxqx\") pod \"coredns-66bc5c9577-h7hk7\" (UID: \"9aa7532b-e3d5-400c-aed7-e9a650360cbb\") " pod="kube-system/coredns-66bc5c9577-h7hk7"
	Nov 02 14:14:56 embed-certs-955646 kubelet[1346]: W1102 14:14:56.573037    1346 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/30c758ef671a78927180fb97bde8b1a9a094e365398db33e109ee628c96e6553/crio-55790ece218616e0647c20e49aaa16f09677180d4df805762d88ffeb2bf46d2b WatchSource:0}: Error finding container 55790ece218616e0647c20e49aaa16f09677180d4df805762d88ffeb2bf46d2b: Status 404 returned error can't find the container with id 55790ece218616e0647c20e49aaa16f09677180d4df805762d88ffeb2bf46d2b
	Nov 02 14:14:57 embed-certs-955646 kubelet[1346]: I1102 14:14:57.141815    1346 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=41.141796681 podStartE2EDuration="41.141796681s" podCreationTimestamp="2025-11-02 14:14:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-02 14:14:57.138340501 +0000 UTC m=+47.469373292" watchObservedRunningTime="2025-11-02 14:14:57.141796681 +0000 UTC m=+47.472829480"
	Nov 02 14:14:57 embed-certs-955646 kubelet[1346]: I1102 14:14:57.163272    1346 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-h7hk7" podStartSLOduration=43.163252422 podStartE2EDuration="43.163252422s" podCreationTimestamp="2025-11-02 14:14:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-02 14:14:57.155613524 +0000 UTC m=+47.486646323" watchObservedRunningTime="2025-11-02 14:14:57.163252422 +0000 UTC m=+47.494285221"
	Nov 02 14:14:59 embed-certs-955646 kubelet[1346]: I1102 14:14:59.436943    1346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75h4r\" (UniqueName: \"kubernetes.io/projected/a6c0c8bf-d346-459c-bd70-f9b18f1f6a71-kube-api-access-75h4r\") pod \"busybox\" (UID: \"a6c0c8bf-d346-459c-bd70-f9b18f1f6a71\") " pod="default/busybox"
	Nov 02 14:14:59 embed-certs-955646 kubelet[1346]: W1102 14:14:59.679107    1346 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/30c758ef671a78927180fb97bde8b1a9a094e365398db33e109ee628c96e6553/crio-98b0d338fd18b322e23169dbde31bb4b537cb35fa44678602c8df1fb9da4a5a7 WatchSource:0}: Error finding container 98b0d338fd18b322e23169dbde31bb4b537cb35fa44678602c8df1fb9da4a5a7: Status 404 returned error can't find the container with id 98b0d338fd18b322e23169dbde31bb4b537cb35fa44678602c8df1fb9da4a5a7
	
	
	==> storage-provisioner [e1ce27a08834fa018b8a67446a4a510a051778aa92bc00a7bae084cbebbb1d78] <==
	I1102 14:14:56.719784       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1102 14:14:56.829663       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1102 14:14:56.862026       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1102 14:14:56.869487       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:14:56.878993       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1102 14:14:56.879258       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1102 14:14:56.881516       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-955646_6e0a19ee-dc13-4755-953e-f971528674f4!
	I1102 14:14:56.884533       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"192779df-44b5-4e02-8171-660f368cbc29", APIVersion:"v1", ResourceVersion:"464", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-955646_6e0a19ee-dc13-4755-953e-f971528674f4 became leader
	W1102 14:14:56.889917       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:14:56.893588       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1102 14:14:56.982232       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-955646_6e0a19ee-dc13-4755-953e-f971528674f4!
	W1102 14:14:58.902503       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:14:58.913208       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:15:00.917416       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:15:00.924282       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:15:02.927652       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:15:02.933054       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:15:04.936756       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:15:04.942443       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:15:06.946441       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:15:06.953394       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:15:08.957143       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:15:08.963536       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
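Note: the repeated kubelet `CgroupV1` warnings in the log above indicate the node is still on cgroup v1. A minimal way to confirm this from inside the node (illustrative, not part of the test run; `stat -fc %T` prints `cgroup2fs` for cgroup v2 and `tmpfs` for v1):

	out/minikube-linux-arm64 -p embed-certs-955646 ssh -- stat -fc %T /sys/fs/cgroup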
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-955646 -n embed-certs-955646
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-955646 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (3.16s)
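Note: the storage-provisioner log in the post-mortem above repeatedly warns that v1 Endpoints is deprecated in v1.33+; the provisioner still takes its leader-election lock on an Endpoints object named k8s.io-minikube-hostpath (visible in its LeaderElection event). A manual check of that lock object would look like this (a sketch, not part of the test run; client-go's Lease-based resource lock is the non-deprecated alternative):

	kubectl --context embed-certs-955646 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml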

x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.67s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-786183 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-786183 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (269.282768ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T14:16:07Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
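Note: the MK_ADDON_ENABLE_PAUSED failure above occurs because minikube's paused check shells out to `sudo runc list -f json` inside the node (as the error text shows), and runc's default state directory /run/runc does not exist on this CRI-O node. A minimal repro sketch (profile name taken from this test; paths illustrative):

	# reproduces the same failure when /run/runc is absent
	out/minikube-linux-arm64 -p default-k8s-diff-port-786183 ssh -- sudo runc list -f json
	# confirm the directory is missing even though CRI-O containers are running
	out/minikube-linux-arm64 -p default-k8s-diff-port-786183 ssh -- ls /run/runc
	out/minikube-linux-arm64 -p default-k8s-diff-port-786183 ssh -- sudo crictl ps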
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-786183 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-786183 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-786183 describe deploy/metrics-server -n kube-system: exit status 1 (79.676542ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-786183 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
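Note: the assertion at start_stop_delete_test.go:219 expects the metrics-server deployment to carry the overridden image prefix; the deployment info is empty here because the enable step never created the deployment. Had the addon come up, the equivalent manual check would look like this (a sketch, not the test's own code):

	kubectl --context default-k8s-diff-port-786183 -n kube-system get deploy metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'
	# expected to print: fake.domain/registry.k8s.io/echoserver:1.4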
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-786183
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-786183:

-- stdout --
	[
	    {
	        "Id": "cf96e33bc393b6a3d6d55863f66411b90a67bf689beea00fe018be1f7c4b996e",
	        "Created": "2025-11-02T14:14:40.799097955Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 490455,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-02T14:14:40.885369553Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/cf96e33bc393b6a3d6d55863f66411b90a67bf689beea00fe018be1f7c4b996e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/cf96e33bc393b6a3d6d55863f66411b90a67bf689beea00fe018be1f7c4b996e/hostname",
	        "HostsPath": "/var/lib/docker/containers/cf96e33bc393b6a3d6d55863f66411b90a67bf689beea00fe018be1f7c4b996e/hosts",
	        "LogPath": "/var/lib/docker/containers/cf96e33bc393b6a3d6d55863f66411b90a67bf689beea00fe018be1f7c4b996e/cf96e33bc393b6a3d6d55863f66411b90a67bf689beea00fe018be1f7c4b996e-json.log",
	        "Name": "/default-k8s-diff-port-786183",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-786183:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-786183",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "cf96e33bc393b6a3d6d55863f66411b90a67bf689beea00fe018be1f7c4b996e",
	                "LowerDir": "/var/lib/docker/overlay2/0cc4afb15e6b9077b7bd9ca2486b4ac42578c2a830d4d2dac09a5efd27fb8673-init/diff:/var/lib/docker/overlay2/53058afe6acd7391f639f551e07f446f6a39a991b699aea18507cb21a1652e97/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0cc4afb15e6b9077b7bd9ca2486b4ac42578c2a830d4d2dac09a5efd27fb8673/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0cc4afb15e6b9077b7bd9ca2486b4ac42578c2a830d4d2dac09a5efd27fb8673/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0cc4afb15e6b9077b7bd9ca2486b4ac42578c2a830d4d2dac09a5efd27fb8673/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-786183",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-786183/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-786183",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-786183",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-786183",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a3efaa8e0dd33a2c02ec56d185f6d749b5262103b8b5fd0253b1a59ebf332751",
	            "SandboxKey": "/var/run/docker/netns/a3efaa8e0dd3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33446"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33447"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33450"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33448"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33449"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-786183": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "52:03:f5:c8:4d:23",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "eb820b490718d17822d92cba10b61f1c2ec01866da1013536864f8ac5224c699",
	                    "EndpointID": "55c644c1ea7b5e86002b4ad94a1423550b2e7f8fa7e63e511210efd90599cd84",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-786183",
	                        "cf96e33bc393"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
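Note: the inspect output above shows the apiserver port 8444/tcp published on 127.0.0.1:33449 (this profile exercises a non-default API server port). A Go-template sketch to extract just that mapping instead of reading the full JSON (illustrative):

	docker inspect default-k8s-diff-port-786183 \
	  --format '{{ (index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort }}'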
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-786183 -n default-k8s-diff-port-786183
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-786183 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-786183 logs -n 25: (1.409630474s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p old-k8s-version-873713 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-873713       │ jenkins │ v1.37.0 │ 02 Nov 25 14:10 UTC │ 02 Nov 25 14:10 UTC │
	│ start   │ -p old-k8s-version-873713 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-873713       │ jenkins │ v1.37.0 │ 02 Nov 25 14:10 UTC │ 02 Nov 25 14:11 UTC │
	│ start   │ -p cert-expiration-114321 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-114321       │ jenkins │ v1.37.0 │ 02 Nov 25 14:11 UTC │ 02 Nov 25 14:13 UTC │
	│ image   │ old-k8s-version-873713 image list --format=json                                                                                                                                                                                               │ old-k8s-version-873713       │ jenkins │ v1.37.0 │ 02 Nov 25 14:11 UTC │ 02 Nov 25 14:11 UTC │
	│ pause   │ -p old-k8s-version-873713 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-873713       │ jenkins │ v1.37.0 │ 02 Nov 25 14:11 UTC │                     │
	│ delete  │ -p old-k8s-version-873713                                                                                                                                                                                                                     │ old-k8s-version-873713       │ jenkins │ v1.37.0 │ 02 Nov 25 14:11 UTC │ 02 Nov 25 14:11 UTC │
	│ delete  │ -p old-k8s-version-873713                                                                                                                                                                                                                     │ old-k8s-version-873713       │ jenkins │ v1.37.0 │ 02 Nov 25 14:11 UTC │ 02 Nov 25 14:11 UTC │
	│ start   │ -p no-preload-150469 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-150469            │ jenkins │ v1.37.0 │ 02 Nov 25 14:11 UTC │ 02 Nov 25 14:12 UTC │
	│ addons  │ enable metrics-server -p no-preload-150469 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-150469            │ jenkins │ v1.37.0 │ 02 Nov 25 14:13 UTC │                     │
	│ stop    │ -p no-preload-150469 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-150469            │ jenkins │ v1.37.0 │ 02 Nov 25 14:13 UTC │ 02 Nov 25 14:13 UTC │
	│ addons  │ enable dashboard -p no-preload-150469 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-150469            │ jenkins │ v1.37.0 │ 02 Nov 25 14:13 UTC │ 02 Nov 25 14:13 UTC │
	│ start   │ -p no-preload-150469 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-150469            │ jenkins │ v1.37.0 │ 02 Nov 25 14:13 UTC │ 02 Nov 25 14:14 UTC │
	│ delete  │ -p cert-expiration-114321                                                                                                                                                                                                                     │ cert-expiration-114321       │ jenkins │ v1.37.0 │ 02 Nov 25 14:13 UTC │ 02 Nov 25 14:13 UTC │
	│ start   │ -p embed-certs-955646 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-955646           │ jenkins │ v1.37.0 │ 02 Nov 25 14:13 UTC │ 02 Nov 25 14:14 UTC │
	│ image   │ no-preload-150469 image list --format=json                                                                                                                                                                                                    │ no-preload-150469            │ jenkins │ v1.37.0 │ 02 Nov 25 14:14 UTC │ 02 Nov 25 14:14 UTC │
	│ pause   │ -p no-preload-150469 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-150469            │ jenkins │ v1.37.0 │ 02 Nov 25 14:14 UTC │                     │
	│ delete  │ -p no-preload-150469                                                                                                                                                                                                                          │ no-preload-150469            │ jenkins │ v1.37.0 │ 02 Nov 25 14:14 UTC │ 02 Nov 25 14:14 UTC │
	│ delete  │ -p no-preload-150469                                                                                                                                                                                                                          │ no-preload-150469            │ jenkins │ v1.37.0 │ 02 Nov 25 14:14 UTC │ 02 Nov 25 14:14 UTC │
	│ delete  │ -p disable-driver-mounts-720030                                                                                                                                                                                                               │ disable-driver-mounts-720030 │ jenkins │ v1.37.0 │ 02 Nov 25 14:14 UTC │ 02 Nov 25 14:14 UTC │
	│ start   │ -p default-k8s-diff-port-786183 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-786183 │ jenkins │ v1.37.0 │ 02 Nov 25 14:14 UTC │ 02 Nov 25 14:15 UTC │
	│ addons  │ enable metrics-server -p embed-certs-955646 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-955646           │ jenkins │ v1.37.0 │ 02 Nov 25 14:15 UTC │                     │
	│ stop    │ -p embed-certs-955646 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-955646           │ jenkins │ v1.37.0 │ 02 Nov 25 14:15 UTC │ 02 Nov 25 14:15 UTC │
	│ addons  │ enable dashboard -p embed-certs-955646 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-955646           │ jenkins │ v1.37.0 │ 02 Nov 25 14:15 UTC │ 02 Nov 25 14:15 UTC │
	│ start   │ -p embed-certs-955646 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-955646           │ jenkins │ v1.37.0 │ 02 Nov 25 14:15 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-786183 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-786183 │ jenkins │ v1.37.0 │ 02 Nov 25 14:16 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/02 14:15:22
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1102 14:15:22.919924  493385 out.go:360] Setting OutFile to fd 1 ...
	I1102 14:15:22.920108  493385 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 14:15:22.920120  493385 out.go:374] Setting ErrFile to fd 2...
	I1102 14:15:22.920127  493385 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 14:15:22.920450  493385 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-293314/.minikube/bin
	I1102 14:15:22.920880  493385 out.go:368] Setting JSON to false
	I1102 14:15:22.921990  493385 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10675,"bootTime":1762082248,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1102 14:15:22.922058  493385 start.go:143] virtualization:  
	I1102 14:15:22.925027  493385 out.go:179] * [embed-certs-955646] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1102 14:15:22.928833  493385 out.go:179]   - MINIKUBE_LOCATION=21808
	I1102 14:15:22.928938  493385 notify.go:221] Checking for updates...
	I1102 14:15:22.934861  493385 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1102 14:15:22.937877  493385 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-293314/kubeconfig
	I1102 14:15:22.940766  493385 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-293314/.minikube
	I1102 14:15:22.943669  493385 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1102 14:15:22.946726  493385 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1102 14:15:22.950286  493385 config.go:182] Loaded profile config "embed-certs-955646": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 14:15:22.951011  493385 driver.go:422] Setting default libvirt URI to qemu:///system
	I1102 14:15:22.981643  493385 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1102 14:15:22.981745  493385 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 14:15:23.049788  493385 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-02 14:15:23.040159297 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1102 14:15:23.049897  493385 docker.go:319] overlay module found
	I1102 14:15:23.052935  493385 out.go:179] * Using the docker driver based on existing profile
	I1102 14:15:23.055700  493385 start.go:309] selected driver: docker
	I1102 14:15:23.055721  493385 start.go:930] validating driver "docker" against &{Name:embed-certs-955646 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-955646 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1102 14:15:23.055831  493385 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1102 14:15:23.056557  493385 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 14:15:23.107463  493385 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-02 14:15:23.098016302 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1102 14:15:23.107792  493385 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1102 14:15:23.107828  493385 cni.go:84] Creating CNI manager for ""
	I1102 14:15:23.107887  493385 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1102 14:15:23.107925  493385 start.go:353] cluster config:
	{Name:embed-certs-955646 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-955646 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1102 14:15:23.112805  493385 out.go:179] * Starting "embed-certs-955646" primary control-plane node in "embed-certs-955646" cluster
	I1102 14:15:23.115608  493385 cache.go:124] Beginning downloading kic base image for docker with crio
	I1102 14:15:23.118465  493385 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1102 14:15:23.121216  493385 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1102 14:15:23.121268  493385 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21808-293314/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1102 14:15:23.121281  493385 cache.go:59] Caching tarball of preloaded images
	I1102 14:15:23.121313  493385 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1102 14:15:23.121365  493385 preload.go:233] Found /home/jenkins/minikube-integration/21808-293314/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1102 14:15:23.121375  493385 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1102 14:15:23.121483  493385 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/embed-certs-955646/config.json ...
	I1102 14:15:23.140255  493385 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1102 14:15:23.140281  493385 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1102 14:15:23.140293  493385 cache.go:233] Successfully downloaded all kic artifacts
	I1102 14:15:23.140315  493385 start.go:360] acquireMachinesLock for embed-certs-955646: {Name:mke26bb2e28d5dc8d577d151206240e9d92b1828 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1102 14:15:23.140372  493385 start.go:364] duration metric: took 34.61µs to acquireMachinesLock for "embed-certs-955646"
	I1102 14:15:23.140402  493385 start.go:96] Skipping create...Using existing machine configuration
	I1102 14:15:23.140410  493385 fix.go:54] fixHost starting: 
	I1102 14:15:23.140673  493385 cli_runner.go:164] Run: docker container inspect embed-certs-955646 --format={{.State.Status}}
	I1102 14:15:23.157669  493385 fix.go:112] recreateIfNeeded on embed-certs-955646: state=Stopped err=<nil>
	W1102 14:15:23.157700  493385 fix.go:138] unexpected machine state, will restart: <nil>
	W1102 14:15:20.960413  490066 node_ready.go:57] node "default-k8s-diff-port-786183" has "Ready":"False" status (will retry)
	W1102 14:15:22.960588  490066 node_ready.go:57] node "default-k8s-diff-port-786183" has "Ready":"False" status (will retry)
	I1102 14:15:23.160836  493385 out.go:252] * Restarting existing docker container for "embed-certs-955646" ...
	I1102 14:15:23.160914  493385 cli_runner.go:164] Run: docker start embed-certs-955646
	I1102 14:15:23.419985  493385 cli_runner.go:164] Run: docker container inspect embed-certs-955646 --format={{.State.Status}}
	I1102 14:15:23.447028  493385 kic.go:430] container "embed-certs-955646" state is running.
	I1102 14:15:23.447577  493385 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-955646
	I1102 14:15:23.477393  493385 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/embed-certs-955646/config.json ...
	I1102 14:15:23.477628  493385 machine.go:94] provisionDockerMachine start ...
	I1102 14:15:23.477857  493385 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-955646
	I1102 14:15:23.504642  493385 main.go:143] libmachine: Using SSH client type: native
	I1102 14:15:23.504976  493385 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33451 <nil> <nil>}
	I1102 14:15:23.504987  493385 main.go:143] libmachine: About to run SSH command:
	hostname
	I1102 14:15:23.505599  493385 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:43356->127.0.0.1:33451: read: connection reset by peer
	I1102 14:15:26.658438  493385 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-955646
	
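	# (What happened here, as a sketch: the first dial at 14:15:23 was reset because sshd in the
	# just-restarted container was not yet listening; the client retries until the `hostname`
	# probe above succeeds, about three seconds later.)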
	I1102 14:15:26.658464  493385 ubuntu.go:182] provisioning hostname "embed-certs-955646"
	I1102 14:15:26.658537  493385 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-955646
	I1102 14:15:26.676583  493385 main.go:143] libmachine: Using SSH client type: native
	I1102 14:15:26.676900  493385 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33451 <nil> <nil>}
	I1102 14:15:26.676919  493385 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-955646 && echo "embed-certs-955646" | sudo tee /etc/hostname
	I1102 14:15:26.836025  493385 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-955646
	
	I1102 14:15:26.836114  493385 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-955646
	I1102 14:15:26.853598  493385 main.go:143] libmachine: Using SSH client type: native
	I1102 14:15:26.853919  493385 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33451 <nil> <nil>}
	I1102 14:15:26.853943  493385 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-955646' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-955646/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-955646' | sudo tee -a /etc/hosts; 
				fi
			fi
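	# (The anchored `grep -xq` guards above make this block idempotent: once a full line ending
	# in `embed-certs-955646` exists in /etc/hosts, the outer test matches and a re-run
	# changes nothing.)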
	I1102 14:15:27.008480  493385 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1102 14:15:27.008514  493385 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21808-293314/.minikube CaCertPath:/home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21808-293314/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21808-293314/.minikube}
	I1102 14:15:27.008549  493385 ubuntu.go:190] setting up certificates
	I1102 14:15:27.008558  493385 provision.go:84] configureAuth start
	I1102 14:15:27.008640  493385 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-955646
	I1102 14:15:27.027535  493385 provision.go:143] copyHostCerts
	I1102 14:15:27.027608  493385 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-293314/.minikube/ca.pem, removing ...
	I1102 14:15:27.027631  493385 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-293314/.minikube/ca.pem
	I1102 14:15:27.027720  493385 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21808-293314/.minikube/ca.pem (1082 bytes)
	I1102 14:15:27.027832  493385 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-293314/.minikube/cert.pem, removing ...
	I1102 14:15:27.027842  493385 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-293314/.minikube/cert.pem
	I1102 14:15:27.027871  493385 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21808-293314/.minikube/cert.pem (1123 bytes)
	I1102 14:15:27.027939  493385 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-293314/.minikube/key.pem, removing ...
	I1102 14:15:27.027950  493385 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-293314/.minikube/key.pem
	I1102 14:15:27.027982  493385 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21808-293314/.minikube/key.pem (1675 bytes)
	I1102 14:15:27.028050  493385 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21808-293314/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca-key.pem org=jenkins.embed-certs-955646 san=[127.0.0.1 192.168.85.2 embed-certs-955646 localhost minikube]
	I1102 14:15:27.154011  493385 provision.go:177] copyRemoteCerts
	I1102 14:15:27.154090  493385 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1102 14:15:27.154136  493385 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-955646
	I1102 14:15:27.173072  493385 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33451 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/embed-certs-955646/id_rsa Username:docker}
	I1102 14:15:27.278324  493385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1102 14:15:27.296848  493385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1102 14:15:27.315596  493385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1102 14:15:27.334001  493385 provision.go:87] duration metric: took 325.419983ms to configureAuth
	I1102 14:15:27.334031  493385 ubuntu.go:206] setting minikube options for container-runtime
	I1102 14:15:27.334226  493385 config.go:182] Loaded profile config "embed-certs-955646": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 14:15:27.334343  493385 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-955646
	I1102 14:15:27.351901  493385 main.go:143] libmachine: Using SSH client type: native
	I1102 14:15:27.352214  493385 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33451 <nil> <nil>}
	I1102 14:15:27.352232  493385 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1102 14:15:27.680438  493385 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1102 14:15:27.680463  493385 machine.go:97] duration metric: took 4.202734722s to provisionDockerMachine
	I1102 14:15:27.680474  493385 start.go:293] postStartSetup for "embed-certs-955646" (driver="docker")
	I1102 14:15:27.680486  493385 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1102 14:15:27.680553  493385 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1102 14:15:27.680599  493385 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-955646
	I1102 14:15:27.703170  493385 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33451 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/embed-certs-955646/id_rsa Username:docker}
	I1102 14:15:27.811242  493385 ssh_runner.go:195] Run: cat /etc/os-release
	I1102 14:15:27.814296  493385 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1102 14:15:27.814377  493385 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1102 14:15:27.814395  493385 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-293314/.minikube/addons for local assets ...
	I1102 14:15:27.814447  493385 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-293314/.minikube/files for local assets ...
	I1102 14:15:27.814523  493385 filesync.go:149] local asset: /home/jenkins/minikube-integration/21808-293314/.minikube/files/etc/ssl/certs/2951742.pem -> 2951742.pem in /etc/ssl/certs
	I1102 14:15:27.814659  493385 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1102 14:15:27.822071  493385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/files/etc/ssl/certs/2951742.pem --> /etc/ssl/certs/2951742.pem (1708 bytes)
	I1102 14:15:27.838779  493385 start.go:296] duration metric: took 158.287565ms for postStartSetup
	I1102 14:15:27.838856  493385 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1102 14:15:27.838896  493385 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-955646
	I1102 14:15:27.861315  493385 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33451 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/embed-certs-955646/id_rsa Username:docker}
	I1102 14:15:27.964341  493385 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1102 14:15:27.969498  493385 fix.go:56] duration metric: took 4.829075479s for fixHost
	I1102 14:15:27.969527  493385 start.go:83] releasing machines lock for "embed-certs-955646", held for 4.829141744s
	I1102 14:15:27.969604  493385 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-955646
	I1102 14:15:27.986359  493385 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/295174.pem (1338 bytes)
	W1102 14:15:27.986423  493385 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-293314/.minikube/certs/295174_empty.pem, impossibly tiny 0 bytes
	I1102 14:15:27.986432  493385 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca-key.pem (1679 bytes)
	I1102 14:15:27.986455  493385 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem (1082 bytes)
	I1102 14:15:27.986486  493385 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/cert.pem (1123 bytes)
	I1102 14:15:27.986512  493385 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/key.pem (1675 bytes)
	I1102 14:15:27.986560  493385 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/files/etc/ssl/certs/2951742.pem (1708 bytes)
	I1102 14:15:27.986685  493385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/certs/295174.pem --> /usr/share/ca-certificates/295174.pem (1338 bytes)
	I1102 14:15:27.986747  493385 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-955646
	I1102 14:15:28.006001  493385 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33451 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/embed-certs-955646/id_rsa Username:docker}
	I1102 14:15:28.122140  493385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/files/etc/ssl/certs/2951742.pem --> /usr/share/ca-certificates/2951742.pem (1708 bytes)
	I1102 14:15:28.140669  493385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1102 14:15:28.158920  493385 ssh_runner.go:195] Run: openssl version
	I1102 14:15:28.165463  493385 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1102 14:15:28.174419  493385 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1102 14:15:28.178182  493385 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  2 13:13 /usr/share/ca-certificates/minikubeCA.pem
	I1102 14:15:28.178251  493385 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1102 14:15:28.219857  493385 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1102 14:15:28.227730  493385 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/295174.pem && ln -fs /usr/share/ca-certificates/295174.pem /etc/ssl/certs/295174.pem"
	I1102 14:15:28.235666  493385 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/295174.pem
	I1102 14:15:28.239521  493385 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  2 13:20 /usr/share/ca-certificates/295174.pem
	I1102 14:15:28.239585  493385 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/295174.pem
	I1102 14:15:28.280749  493385 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/295174.pem /etc/ssl/certs/51391683.0"
	I1102 14:15:28.288790  493385 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2951742.pem && ln -fs /usr/share/ca-certificates/2951742.pem /etc/ssl/certs/2951742.pem"
	I1102 14:15:28.297313  493385 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2951742.pem
	I1102 14:15:28.301302  493385 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  2 13:20 /usr/share/ca-certificates/2951742.pem
	I1102 14:15:28.301429  493385 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2951742.pem
	I1102 14:15:28.344877  493385 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2951742.pem /etc/ssl/certs/3ec20f2e.0"
	I1102 14:15:28.353012  493385 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-certificates >/dev/null 2>&1 && sudo update-ca-certificates || true"
	I1102 14:15:28.356513  493385 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-trust >/dev/null 2>&1 && sudo update-ca-trust extract || true"
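	# (How the hash-named links above are derived, as a sketch: OpenSSL looks CA certificates up
	# by subject-name hash, so each PEM gets a `<hash>.0` symlink, e.g. for minikubeCA.pem:
	#   sudo ln -fs /etc/ssl/certs/minikubeCA.pem \
	#     /etc/ssl/certs/$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem).0
	# which reproduces the b5213941.0 link created above.)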
	I1102 14:15:28.360133  493385 ssh_runner.go:195] Run: cat /version.json
	I1102 14:15:28.360181  493385 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1102 14:15:28.461615  493385 ssh_runner.go:195] Run: systemctl --version
	I1102 14:15:28.468090  493385 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1102 14:15:28.504689  493385 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1102 14:15:28.509225  493385 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1102 14:15:28.509302  493385 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1102 14:15:28.517242  493385 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
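	# (When bridge/podman CNI configs do exist, the find above renames them to *.mk_disabled so
	# the runtime stops loading them and the kindnet config recommended earlier takes precedence;
	# here there was nothing to disable.)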
	I1102 14:15:28.517271  493385 start.go:496] detecting cgroup driver to use...
	I1102 14:15:28.517302  493385 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1102 14:15:28.517347  493385 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1102 14:15:28.532759  493385 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1102 14:15:28.546192  493385 docker.go:218] disabling cri-docker service (if available) ...
	I1102 14:15:28.546296  493385 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1102 14:15:28.562232  493385 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1102 14:15:28.576174  493385 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1102 14:15:28.720333  493385 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1102 14:15:28.837817  493385 docker.go:234] disabling docker service ...
	I1102 14:15:28.837895  493385 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1102 14:15:28.854926  493385 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1102 14:15:28.869395  493385 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1102 14:15:28.992265  493385 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1102 14:15:29.113452  493385 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1102 14:15:29.127305  493385 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1102 14:15:29.142497  493385 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1102 14:15:29.142653  493385 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 14:15:29.152574  493385 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1102 14:15:29.152660  493385 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 14:15:29.162028  493385 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 14:15:29.171235  493385 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 14:15:29.180455  493385 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1102 14:15:29.188740  493385 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 14:15:29.197664  493385 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 14:15:29.206135  493385 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
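	# (Net effect of the sed edits above on /etc/crio/crio.conf.d/02-crio.conf, as a sketch with
	# values taken from the commands themselves:
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   default_sysctls = [
	#     "net.ipv4.ip_unprivileged_port_start=0",
	#   ])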
	I1102 14:15:29.214796  493385 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1102 14:15:29.222402  493385 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1102 14:15:29.229985  493385 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 14:15:29.345459  493385 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1102 14:15:29.485149  493385 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1102 14:15:29.485272  493385 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1102 14:15:29.489249  493385 start.go:564] Will wait 60s for crictl version
	I1102 14:15:29.489359  493385 ssh_runner.go:195] Run: which crictl
	I1102 14:15:29.493617  493385 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1102 14:15:29.520038  493385 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1102 14:15:29.520138  493385 ssh_runner.go:195] Run: crio --version
	I1102 14:15:29.553035  493385 ssh_runner.go:195] Run: crio --version
	I1102 14:15:29.587018  493385 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1102 14:15:25.460787  490066 node_ready.go:57] node "default-k8s-diff-port-786183" has "Ready":"False" status (will retry)
	W1102 14:15:27.961332  490066 node_ready.go:57] node "default-k8s-diff-port-786183" has "Ready":"False" status (will retry)
	I1102 14:15:29.589901  493385 cli_runner.go:164] Run: docker network inspect embed-certs-955646 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1102 14:15:29.604866  493385 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1102 14:15:29.608965  493385 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1102 14:15:29.618794  493385 kubeadm.go:884] updating cluster {Name:embed-certs-955646 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-955646 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1102 14:15:29.618909  493385 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1102 14:15:29.618979  493385 ssh_runner.go:195] Run: sudo crictl images --output json
	I1102 14:15:29.665595  493385 crio.go:514] all images are preloaded for cri-o runtime.
	I1102 14:15:29.665620  493385 crio.go:433] Images already preloaded, skipping extraction
	I1102 14:15:29.665688  493385 ssh_runner.go:195] Run: sudo crictl images --output json
	I1102 14:15:29.694205  493385 crio.go:514] all images are preloaded for cri-o runtime.
	I1102 14:15:29.694230  493385 cache_images.go:86] Images are preloaded, skipping loading
	I1102 14:15:29.694239  493385 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1102 14:15:29.694346  493385 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-955646 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-955646 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1102 14:15:29.694431  493385 ssh_runner.go:195] Run: crio config
	I1102 14:15:29.776598  493385 cni.go:84] Creating CNI manager for ""
	I1102 14:15:29.776621  493385 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1102 14:15:29.776641  493385 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1102 14:15:29.776685  493385 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-955646 NodeName:embed-certs-955646 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1102 14:15:29.776848  493385 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-955646"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
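	# (A generated config like the one above can be sanity-checked before kubeadm consumes it;
	# a sketch, assuming `kubeadm config validate` is available in the v1.34 binaries staged below:
	#   /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new)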
	I1102 14:15:29.776953  493385 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1102 14:15:29.784852  493385 binaries.go:44] Found k8s binaries, skipping transfer
	I1102 14:15:29.784971  493385 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1102 14:15:29.792686  493385 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1102 14:15:29.806034  493385 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1102 14:15:29.819368  493385 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2215 bytes)
	I1102 14:15:29.832321  493385 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1102 14:15:29.835909  493385 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
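	
	==> sketch: the idempotent /etc/hosts rewrite <==
	
The bash one-liner above drops any stale control-plane.minikube.internal line (grep -v), appends the current mapping, and replaces /etc/hosts through a temp file so the file is never left half-written. A pure-Go sketch of the same pattern, run against a scratch file since the real /etc/hosts needs root; the file name is illustrative:

// ensurehosts.go — drop the old entry, append the new one, replace via temp + rename.
package main

import (
	"log"
	"os"
	"strings"
)

func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	if len(data) > 0 {
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if strings.HasSuffix(line, "\t"+host) {
				continue // mirrors `grep -v $'\t<host>$'`
			}
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	tmp := path + ".tmp" // temp-then-rename so the file is never truncated mid-write
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		return err
	}
	return os.Rename(tmp, path)
}

func main() {
	// Scratch file; pointing this at /etc/hosts would require root.
	if err := ensureHostsEntry("hosts.test", "192.168.85.2", "control-plane.minikube.internal"); err != nil {
		log.Fatal(err)
	}
}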
	I1102 14:15:29.845688  493385 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 14:15:29.967792  493385 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1102 14:15:29.984060  493385 certs.go:69] Setting up /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/embed-certs-955646 for IP: 192.168.85.2
	I1102 14:15:29.984134  493385 certs.go:195] generating shared ca certs ...
	I1102 14:15:29.984165  493385 certs.go:227] acquiring lock for ca certs: {Name:mkead50075949a3cdc798f9c0149a2bc2638cbbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 14:15:29.984360  493385 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21808-293314/.minikube/ca.key
	I1102 14:15:29.984439  493385 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21808-293314/.minikube/proxy-client-ca.key
	I1102 14:15:29.984480  493385 certs.go:257] generating profile certs ...
	I1102 14:15:29.984617  493385 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/embed-certs-955646/client.key
	I1102 14:15:29.984743  493385 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/embed-certs-955646/apiserver.key.07905a59
	I1102 14:15:29.984837  493385 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/embed-certs-955646/proxy-client.key
	I1102 14:15:29.984995  493385 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/295174.pem (1338 bytes)
	W1102 14:15:29.985062  493385 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-293314/.minikube/certs/295174_empty.pem, impossibly tiny 0 bytes
	I1102 14:15:29.985087  493385 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca-key.pem (1679 bytes)
	I1102 14:15:29.985155  493385 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem (1082 bytes)
	I1102 14:15:29.985218  493385 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/cert.pem (1123 bytes)
	I1102 14:15:29.985286  493385 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/key.pem (1675 bytes)
	I1102 14:15:29.985388  493385 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/files/etc/ssl/certs/2951742.pem (1708 bytes)
	I1102 14:15:29.986116  493385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1102 14:15:30.007909  493385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1102 14:15:30.050425  493385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1102 14:15:30.078919  493385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1102 14:15:30.112245  493385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/embed-certs-955646/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1102 14:15:30.137597  493385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/embed-certs-955646/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1102 14:15:30.162112  493385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/embed-certs-955646/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1102 14:15:30.184971  493385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/embed-certs-955646/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1102 14:15:30.214058  493385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1102 14:15:30.237545  493385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/certs/295174.pem --> /usr/share/ca-certificates/295174.pem (1338 bytes)
	I1102 14:15:30.257795  493385 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/files/etc/ssl/certs/2951742.pem --> /usr/share/ca-certificates/2951742.pem (1708 bytes)
	I1102 14:15:30.283279  493385 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1102 14:15:30.298508  493385 ssh_runner.go:195] Run: openssl version
	I1102 14:15:30.305816  493385 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1102 14:15:30.314716  493385 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1102 14:15:30.319101  493385 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  2 13:13 /usr/share/ca-certificates/minikubeCA.pem
	I1102 14:15:30.319183  493385 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1102 14:15:30.363023  493385 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1102 14:15:30.372653  493385 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/295174.pem && ln -fs /usr/share/ca-certificates/295174.pem /etc/ssl/certs/295174.pem"
	I1102 14:15:30.383758  493385 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/295174.pem
	I1102 14:15:30.387910  493385 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  2 13:20 /usr/share/ca-certificates/295174.pem
	I1102 14:15:30.387974  493385 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/295174.pem
	I1102 14:15:30.429423  493385 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/295174.pem /etc/ssl/certs/51391683.0"
	I1102 14:15:30.437709  493385 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2951742.pem && ln -fs /usr/share/ca-certificates/2951742.pem /etc/ssl/certs/2951742.pem"
	I1102 14:15:30.446041  493385 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2951742.pem
	I1102 14:15:30.449900  493385 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  2 13:20 /usr/share/ca-certificates/2951742.pem
	I1102 14:15:30.449972  493385 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2951742.pem
	I1102 14:15:30.496541  493385 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2951742.pem /etc/ssl/certs/3ec20f2e.0"
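	
	==> sketch: OpenSSL hash-named CA links <==
	
`openssl x509 -hash -noout` prints the certificate's subject-name hash (b5213941, 51391683, 3ec20f2e in the runs above), and OpenSSL locates trusted CAs in /etc/ssl/certs by that hash plus a ".0" suffix, which is what each `ln -fs` sets up. A sketch of the same two steps, assuming openssl on PATH and root privileges for the symlink:

// hashlink.go — ask openssl for a cert's subject hash, then link <hash>.0 at it.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem" // path from the log above
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		log.Fatal(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// ln -fs semantics: remove any stale link, then recreate it.
	_ = os.Remove(link)
	if err := os.Symlink(cert, link); err != nil {
		log.Fatal(err)
	}
	fmt.Println(link, "->", cert)
}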
	I1102 14:15:30.510279  493385 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1102 14:15:30.514679  493385 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1102 14:15:30.556783  493385 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1102 14:15:30.597834  493385 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1102 14:15:30.638596  493385 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1102 14:15:30.680620  493385 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1102 14:15:30.733169  493385 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
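	
	==> sketch: certificate expiry checks without openssl <==
	
Each `-checkend 86400` call above exits nonzero if the certificate expires within the next 86400 seconds (24 hours); the run validates every existing control-plane cert this way before reusing it. The same test in pure Go via crypto/x509, with paths copied from the log:

// checkend.go — report whether a PEM certificate expires within a duration.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	for _, p := range []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	} {
		soon, err := expiresWithin(p, 24*time.Hour)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s expires within 24h: %v\n", p, soon)
	}
}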
	I1102 14:15:30.787273  493385 kubeadm.go:401] StartCluster: {Name:embed-certs-955646 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-955646 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1102 14:15:30.787431  493385 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1102 14:15:30.787528  493385 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1102 14:15:30.857389  493385 cri.go:89] found id: "b2cda95c0fa73867332aa42c9cd6aad92c60f000d6837089bac2ad63937e9752"
	I1102 14:15:30.857456  493385 cri.go:89] found id: ""
	I1102 14:15:30.857571  493385 ssh_runner.go:195] Run: sudo runc list -f json
	W1102 14:15:30.888930  493385 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T14:15:30Z" level=error msg="open /run/runc: no such file or directory"
	I1102 14:15:30.889060  493385 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1102 14:15:30.914754  493385 kubeadm.go:417] found existing configuration files, will attempt cluster restart
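	
	==> sketch: the restart-vs-init decision <==
	
The `sudo ls` above is the whole test: if kubeadm-flags.env, the kubelet config.yaml, and the etcd data directory all still exist, minikube attempts a control-plane restart instead of a fresh kubeadm init. A local sketch of that check (paths copied from the log):

// restartcheck.go — restartable only if every expected state file survives.
package main

import (
	"fmt"
	"os"
)

func main() {
	paths := []string{
		"/var/lib/kubelet/kubeadm-flags.env",
		"/var/lib/kubelet/config.yaml",
		"/var/lib/minikube/etcd",
	}
	restart := true
	for _, p := range paths {
		if _, err := os.Stat(p); err != nil {
			restart = false
			fmt.Println("missing:", p)
		}
	}
	fmt.Println("attempt cluster restart:", restart)
}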
	I1102 14:15:30.914827  493385 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1102 14:15:30.914909  493385 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1102 14:15:30.927025  493385 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1102 14:15:30.927661  493385 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-955646" does not appear in /home/jenkins/minikube-integration/21808-293314/kubeconfig
	I1102 14:15:30.927977  493385 kubeconfig.go:62] /home/jenkins/minikube-integration/21808-293314/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-955646" cluster setting kubeconfig missing "embed-certs-955646" context setting]
	I1102 14:15:30.928499  493385 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-293314/kubeconfig: {Name:mke5a65554da8fc0fd6a2ea60bed899d5b38ce09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 14:15:30.930097  493385 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1102 14:15:30.951812  493385 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1102 14:15:30.951887  493385 kubeadm.go:602] duration metric: took 37.040102ms to restartPrimaryControlPlane
	I1102 14:15:30.951911  493385 kubeadm.go:403] duration metric: took 164.648132ms to StartCluster
	I1102 14:15:30.951955  493385 settings.go:142] acquiring lock: {Name:mk95f66b3b15e63f58f8c9085c1ffe67cc396dc1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 14:15:30.952037  493385 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21808-293314/kubeconfig
	I1102 14:15:30.953332  493385 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-293314/kubeconfig: {Name:mke5a65554da8fc0fd6a2ea60bed899d5b38ce09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 14:15:30.953617  493385 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1102 14:15:30.954041  493385 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1102 14:15:30.954189  493385 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-955646"
	I1102 14:15:30.954211  493385 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-955646"
	W1102 14:15:30.954218  493385 addons.go:248] addon storage-provisioner should already be in state true
	I1102 14:15:30.954241  493385 host.go:66] Checking if "embed-certs-955646" exists ...
	I1102 14:15:30.954779  493385 cli_runner.go:164] Run: docker container inspect embed-certs-955646 --format={{.State.Status}}
	I1102 14:15:30.954930  493385 addons.go:70] Setting dashboard=true in profile "embed-certs-955646"
	I1102 14:15:30.954949  493385 addons.go:239] Setting addon dashboard=true in "embed-certs-955646"
	W1102 14:15:30.954956  493385 addons.go:248] addon dashboard should already be in state true
	I1102 14:15:30.954978  493385 host.go:66] Checking if "embed-certs-955646" exists ...
	I1102 14:15:30.954111  493385 config.go:182] Loaded profile config "embed-certs-955646": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 14:15:30.955623  493385 addons.go:70] Setting default-storageclass=true in profile "embed-certs-955646"
	I1102 14:15:30.955671  493385 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-955646"
	I1102 14:15:30.955987  493385 cli_runner.go:164] Run: docker container inspect embed-certs-955646 --format={{.State.Status}}
	I1102 14:15:30.956045  493385 cli_runner.go:164] Run: docker container inspect embed-certs-955646 --format={{.State.Status}}
	I1102 14:15:30.964546  493385 out.go:179] * Verifying Kubernetes components...
	I1102 14:15:30.968403  493385 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 14:15:31.020924  493385 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1102 14:15:31.022704  493385 addons.go:239] Setting addon default-storageclass=true in "embed-certs-955646"
	W1102 14:15:31.022734  493385 addons.go:248] addon default-storageclass should already be in state true
	I1102 14:15:31.022781  493385 host.go:66] Checking if "embed-certs-955646" exists ...
	I1102 14:15:31.023236  493385 cli_runner.go:164] Run: docker container inspect embed-certs-955646 --format={{.State.Status}}
	I1102 14:15:31.026536  493385 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1102 14:15:31.026567  493385 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1102 14:15:31.026667  493385 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-955646
	I1102 14:15:31.029375  493385 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1102 14:15:31.038887  493385 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1102 14:15:31.042894  493385 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1102 14:15:31.042951  493385 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1102 14:15:31.043029  493385 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-955646
	I1102 14:15:31.053826  493385 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1102 14:15:31.053872  493385 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1102 14:15:31.053938  493385 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-955646
	I1102 14:15:31.093312  493385 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33451 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/embed-certs-955646/id_rsa Username:docker}
	I1102 14:15:31.107798  493385 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33451 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/embed-certs-955646/id_rsa Username:docker}
	I1102 14:15:31.109972  493385 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33451 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/embed-certs-955646/id_rsa Username:docker}
	I1102 14:15:31.324488  493385 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1102 14:15:31.351276  493385 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1102 14:15:31.355531  493385 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1102 14:15:31.373470  493385 node_ready.go:35] waiting up to 6m0s for node "embed-certs-955646" to be "Ready" ...
	I1102 14:15:31.447788  493385 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1102 14:15:31.447810  493385 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1102 14:15:31.507519  493385 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1102 14:15:31.507592  493385 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1102 14:15:31.600740  493385 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1102 14:15:31.600817  493385 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1102 14:15:31.635939  493385 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1102 14:15:31.636012  493385 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1102 14:15:31.665675  493385 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1102 14:15:31.665745  493385 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1102 14:15:31.692069  493385 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1102 14:15:31.692137  493385 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1102 14:15:31.721527  493385 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1102 14:15:31.721597  493385 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1102 14:15:31.739635  493385 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1102 14:15:31.739703  493385 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1102 14:15:31.758133  493385 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1102 14:15:31.758203  493385 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1102 14:15:31.785698  493385 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
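	
	==> sketch: one kubectl apply, many -f flags <==
	
All ten dashboard manifests are staged under /etc/kubernetes/addons and then applied in a single kubectl invocation with repeated -f flags, with KUBECONFIG pinned to the in-VM config. A sketch of building that command; the binary and manifest paths come from the log, and the manifest list is abbreviated:

// applyaddons.go — one kubectl process, repeated -f flags, pinned KUBECONFIG.
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	manifests := []string{
		"/etc/kubernetes/addons/dashboard-ns.yaml",
		"/etc/kubernetes/addons/dashboard-dp.yaml",
		"/etc/kubernetes/addons/dashboard-svc.yaml",
		// ... remaining dashboard-*.yaml files as in the log
	}
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command("/var/lib/minikube/binaries/v1.34.1/kubectl", args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}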
	W1102 14:15:30.461318  490066 node_ready.go:57] node "default-k8s-diff-port-786183" has "Ready":"False" status (will retry)
	W1102 14:15:32.960005  490066 node_ready.go:57] node "default-k8s-diff-port-786183" has "Ready":"False" status (will retry)
	W1102 14:15:34.960724  490066 node_ready.go:57] node "default-k8s-diff-port-786183" has "Ready":"False" status (will retry)
	I1102 14:15:36.238244  493385 node_ready.go:49] node "embed-certs-955646" is "Ready"
	I1102 14:15:36.238271  493385 node_ready.go:38] duration metric: took 4.864746088s for node "embed-certs-955646" to be "Ready" ...
	I1102 14:15:36.238285  493385 api_server.go:52] waiting for apiserver process to appear ...
	I1102 14:15:36.238343  493385 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1102 14:15:36.574608  493385 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.223257609s)
	I1102 14:15:37.758794  493385 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.403186201s)
	I1102 14:15:37.758965  493385 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.973188026s)
	I1102 14:15:37.759194  493385 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.520839525s)
	I1102 14:15:37.759239  493385 api_server.go:72] duration metric: took 6.80556875s to wait for apiserver process to appear ...
	I1102 14:15:37.759260  493385 api_server.go:88] waiting for apiserver healthz status ...
	I1102 14:15:37.759306  493385 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1102 14:15:37.762835  493385 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-955646 addons enable metrics-server
	
	I1102 14:15:37.766376  493385 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1102 14:15:37.769410  493385 addons.go:515] duration metric: took 6.815354527s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1102 14:15:37.779534  493385 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1102 14:15:37.779563  493385 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
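	
	==> sketch: polling /healthz until the apiserver settles <==
	
The 500 above is expected during startup: every subsystem reports ok except the rbac/bootstrap-roles post-start hook, and minikube simply retries until /healthz answers 200, which it does about half a second later (below). A minimal poller sketch; InsecureSkipVerify is for illustration only, a real client should trust the cluster CA:

// healthzpoll.go — poll the apiserver's /healthz until 200 or a deadline.
package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.85.2:8443/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
			fmt.Println("healthz status:", resp.StatusCode) // e.g. 500 while bootstrap hooks run
		}
		time.Sleep(500 * time.Millisecond)
	}
	log.Fatal("apiserver never became healthy")
}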
	W1102 14:15:36.961398  490066 node_ready.go:57] node "default-k8s-diff-port-786183" has "Ready":"False" status (will retry)
	W1102 14:15:39.460440  490066 node_ready.go:57] node "default-k8s-diff-port-786183" has "Ready":"False" status (will retry)
	I1102 14:15:38.260332  493385 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1102 14:15:38.268872  493385 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1102 14:15:38.269835  493385 api_server.go:141] control plane version: v1.34.1
	I1102 14:15:38.269858  493385 api_server.go:131] duration metric: took 510.577353ms to wait for apiserver health ...
	I1102 14:15:38.269868  493385 system_pods.go:43] waiting for kube-system pods to appear ...
	I1102 14:15:38.273188  493385 system_pods.go:59] 8 kube-system pods found
	I1102 14:15:38.273228  493385 system_pods.go:61] "coredns-66bc5c9577-h7hk7" [9aa7532b-e3d5-400c-aed7-e9a650360cbb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1102 14:15:38.273237  493385 system_pods.go:61] "etcd-embed-certs-955646" [87bf4ca4-25bc-43b4-8570-b5eca3eede89] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1102 14:15:38.273245  493385 system_pods.go:61] "kindnet-fvxzq" [9738d225-9797-4e3e-abf3-6f04f63c0a9b] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1102 14:15:38.273260  493385 system_pods.go:61] "kube-apiserver-embed-certs-955646" [b391b75e-95ca-494b-b02f-7f3d76d7b971] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1102 14:15:38.273273  493385 system_pods.go:61] "kube-controller-manager-embed-certs-955646" [59f3ec73-aedb-4421-84ce-2772c47a3388] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1102 14:15:38.273283  493385 system_pods.go:61] "kube-proxy-hg44j" [b0fcfb9f-3864-406a-b0ec-c9c56864fcbd] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1102 14:15:38.273294  493385 system_pods.go:61] "kube-scheduler-embed-certs-955646" [4e908594-08ca-438d-8277-b65e9e87ef49] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1102 14:15:38.273301  493385 system_pods.go:61] "storage-provisioner" [3b29b058-9c13-4c95-9c23-b738213b2020] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1102 14:15:38.273313  493385 system_pods.go:74] duration metric: took 3.438409ms to wait for pod list to return data ...
	I1102 14:15:38.273321  493385 default_sa.go:34] waiting for default service account to be created ...
	I1102 14:15:38.275838  493385 default_sa.go:45] found service account: "default"
	I1102 14:15:38.275860  493385 default_sa.go:55] duration metric: took 2.53344ms for default service account to be created ...
	I1102 14:15:38.275870  493385 system_pods.go:116] waiting for k8s-apps to be running ...
	I1102 14:15:38.278830  493385 system_pods.go:86] 8 kube-system pods found
	I1102 14:15:38.278864  493385 system_pods.go:89] "coredns-66bc5c9577-h7hk7" [9aa7532b-e3d5-400c-aed7-e9a650360cbb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1102 14:15:38.278874  493385 system_pods.go:89] "etcd-embed-certs-955646" [87bf4ca4-25bc-43b4-8570-b5eca3eede89] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1102 14:15:38.278882  493385 system_pods.go:89] "kindnet-fvxzq" [9738d225-9797-4e3e-abf3-6f04f63c0a9b] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1102 14:15:38.278889  493385 system_pods.go:89] "kube-apiserver-embed-certs-955646" [b391b75e-95ca-494b-b02f-7f3d76d7b971] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1102 14:15:38.278903  493385 system_pods.go:89] "kube-controller-manager-embed-certs-955646" [59f3ec73-aedb-4421-84ce-2772c47a3388] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1102 14:15:38.278911  493385 system_pods.go:89] "kube-proxy-hg44j" [b0fcfb9f-3864-406a-b0ec-c9c56864fcbd] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1102 14:15:38.278926  493385 system_pods.go:89] "kube-scheduler-embed-certs-955646" [4e908594-08ca-438d-8277-b65e9e87ef49] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1102 14:15:38.278932  493385 system_pods.go:89] "storage-provisioner" [3b29b058-9c13-4c95-9c23-b738213b2020] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1102 14:15:38.278939  493385 system_pods.go:126] duration metric: took 3.063496ms to wait for k8s-apps to be running ...
	I1102 14:15:38.278953  493385 system_svc.go:44] waiting for kubelet service to be running ....
	I1102 14:15:38.279005  493385 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1102 14:15:38.328518  493385 system_svc.go:56] duration metric: took 49.555968ms WaitForService to wait for kubelet
	I1102 14:15:38.328547  493385 kubeadm.go:587] duration metric: took 7.37487715s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1102 14:15:38.328567  493385 node_conditions.go:102] verifying NodePressure condition ...
	I1102 14:15:38.335210  493385 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1102 14:15:38.335254  493385 node_conditions.go:123] node cpu capacity is 2
	I1102 14:15:38.335265  493385 node_conditions.go:105] duration metric: took 6.692118ms to run NodePressure ...
	I1102 14:15:38.335278  493385 start.go:242] waiting for startup goroutines ...
	I1102 14:15:38.335285  493385 start.go:247] waiting for cluster config update ...
	I1102 14:15:38.335296  493385 start.go:256] writing updated cluster config ...
	I1102 14:15:38.335563  493385 ssh_runner.go:195] Run: rm -f paused
	I1102 14:15:38.340855  493385 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1102 14:15:38.352788  493385 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-h7hk7" in "kube-system" namespace to be "Ready" or be gone ...
	W1102 14:15:40.377255  493385 pod_ready.go:104] pod "coredns-66bc5c9577-h7hk7" is not "Ready", error: <nil>
	W1102 14:15:42.859179  493385 pod_ready.go:104] pod "coredns-66bc5c9577-h7hk7" is not "Ready", error: <nil>
	W1102 14:15:41.461174  490066 node_ready.go:57] node "default-k8s-diff-port-786183" has "Ready":"False" status (will retry)
	W1102 14:15:43.961251  490066 node_ready.go:57] node "default-k8s-diff-port-786183" has "Ready":"False" status (will retry)
	W1102 14:15:45.366946  493385 pod_ready.go:104] pod "coredns-66bc5c9577-h7hk7" is not "Ready", error: <nil>
	W1102 14:15:47.859384  493385 pod_ready.go:104] pod "coredns-66bc5c9577-h7hk7" is not "Ready", error: <nil>
	W1102 14:15:46.461924  490066 node_ready.go:57] node "default-k8s-diff-port-786183" has "Ready":"False" status (will retry)
	W1102 14:15:48.960625  490066 node_ready.go:57] node "default-k8s-diff-port-786183" has "Ready":"False" status (will retry)
	W1102 14:15:49.862097  493385 pod_ready.go:104] pod "coredns-66bc5c9577-h7hk7" is not "Ready", error: <nil>
	W1102 14:15:52.358403  493385 pod_ready.go:104] pod "coredns-66bc5c9577-h7hk7" is not "Ready", error: <nil>
	W1102 14:15:50.962156  490066 node_ready.go:57] node "default-k8s-diff-port-786183" has "Ready":"False" status (will retry)
	W1102 14:15:53.460724  490066 node_ready.go:57] node "default-k8s-diff-port-786183" has "Ready":"False" status (will retry)
	W1102 14:15:54.358921  493385 pod_ready.go:104] pod "coredns-66bc5c9577-h7hk7" is not "Ready", error: <nil>
	W1102 14:15:56.858730  493385 pod_ready.go:104] pod "coredns-66bc5c9577-h7hk7" is not "Ready", error: <nil>
	I1102 14:15:55.462185  490066 node_ready.go:49] node "default-k8s-diff-port-786183" is "Ready"
	I1102 14:15:55.462213  490066 node_ready.go:38] duration metric: took 41.004780898s for node "default-k8s-diff-port-786183" to be "Ready" ...
	I1102 14:15:55.462227  490066 api_server.go:52] waiting for apiserver process to appear ...
	I1102 14:15:55.462282  490066 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1102 14:15:55.477807  490066 api_server.go:72] duration metric: took 42.105340458s to wait for apiserver process to appear ...
	I1102 14:15:55.477835  490066 api_server.go:88] waiting for apiserver healthz status ...
	I1102 14:15:55.477858  490066 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1102 14:15:55.486195  490066 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1102 14:15:55.487352  490066 api_server.go:141] control plane version: v1.34.1
	I1102 14:15:55.487383  490066 api_server.go:131] duration metric: took 9.540082ms to wait for apiserver health ...
	I1102 14:15:55.487392  490066 system_pods.go:43] waiting for kube-system pods to appear ...
	I1102 14:15:55.490400  490066 system_pods.go:59] 8 kube-system pods found
	I1102 14:15:55.490433  490066 system_pods.go:61] "coredns-66bc5c9577-lwp97" [cd5d24d1-8139-448c-9016-c89db9315328] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1102 14:15:55.490441  490066 system_pods.go:61] "etcd-default-k8s-diff-port-786183" [20f2055e-9a44-4af9-ac93-0de08a0929dd] Running
	I1102 14:15:55.490449  490066 system_pods.go:61] "kindnet-pd47j" [2faa4679-6556-4e51-a2a3-88275ddc1fff] Running
	I1102 14:15:55.490453  490066 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-786183" [3820bf6d-1505-48a7-b001-f8d7a0b87b6a] Running
	I1102 14:15:55.490458  490066 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-786183" [cfc3d639-0d89-4860-9f16-633cf0079a2b] Running
	I1102 14:15:55.490464  490066 system_pods.go:61] "kube-proxy-jlf8q" [ffabcc04-6bec-42eb-a759-aeea07668e18] Running
	I1102 14:15:55.490473  490066 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-786183" [a75b2a9e-b854-4624-ac22-8fb38c2173dc] Running
	I1102 14:15:55.490479  490066 system_pods.go:61] "storage-provisioner" [d79c0f13-8bac-4de0-9847-059f608dbabb] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1102 14:15:55.490491  490066 system_pods.go:74] duration metric: took 3.093397ms to wait for pod list to return data ...
	I1102 14:15:55.490501  490066 default_sa.go:34] waiting for default service account to be created ...
	I1102 14:15:55.493744  490066 default_sa.go:45] found service account: "default"
	I1102 14:15:55.493767  490066 default_sa.go:55] duration metric: took 3.257295ms for default service account to be created ...
	I1102 14:15:55.493777  490066 system_pods.go:116] waiting for k8s-apps to be running ...
	I1102 14:15:55.496683  490066 system_pods.go:86] 8 kube-system pods found
	I1102 14:15:55.496720  490066 system_pods.go:89] "coredns-66bc5c9577-lwp97" [cd5d24d1-8139-448c-9016-c89db9315328] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1102 14:15:55.496728  490066 system_pods.go:89] "etcd-default-k8s-diff-port-786183" [20f2055e-9a44-4af9-ac93-0de08a0929dd] Running
	I1102 14:15:55.496736  490066 system_pods.go:89] "kindnet-pd47j" [2faa4679-6556-4e51-a2a3-88275ddc1fff] Running
	I1102 14:15:55.496741  490066 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-786183" [3820bf6d-1505-48a7-b001-f8d7a0b87b6a] Running
	I1102 14:15:55.496746  490066 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-786183" [cfc3d639-0d89-4860-9f16-633cf0079a2b] Running
	I1102 14:15:55.496750  490066 system_pods.go:89] "kube-proxy-jlf8q" [ffabcc04-6bec-42eb-a759-aeea07668e18] Running
	I1102 14:15:55.496756  490066 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-786183" [a75b2a9e-b854-4624-ac22-8fb38c2173dc] Running
	I1102 14:15:55.496762  490066 system_pods.go:89] "storage-provisioner" [d79c0f13-8bac-4de0-9847-059f608dbabb] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1102 14:15:55.496794  490066 retry.go:31] will retry after 235.60474ms: missing components: kube-dns
	I1102 14:15:55.744476  490066 system_pods.go:86] 8 kube-system pods found
	I1102 14:15:55.744514  490066 system_pods.go:89] "coredns-66bc5c9577-lwp97" [cd5d24d1-8139-448c-9016-c89db9315328] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1102 14:15:55.744523  490066 system_pods.go:89] "etcd-default-k8s-diff-port-786183" [20f2055e-9a44-4af9-ac93-0de08a0929dd] Running
	I1102 14:15:55.744530  490066 system_pods.go:89] "kindnet-pd47j" [2faa4679-6556-4e51-a2a3-88275ddc1fff] Running
	I1102 14:15:55.744534  490066 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-786183" [3820bf6d-1505-48a7-b001-f8d7a0b87b6a] Running
	I1102 14:15:55.744539  490066 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-786183" [cfc3d639-0d89-4860-9f16-633cf0079a2b] Running
	I1102 14:15:55.744543  490066 system_pods.go:89] "kube-proxy-jlf8q" [ffabcc04-6bec-42eb-a759-aeea07668e18] Running
	I1102 14:15:55.744548  490066 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-786183" [a75b2a9e-b854-4624-ac22-8fb38c2173dc] Running
	I1102 14:15:55.744553  490066 system_pods.go:89] "storage-provisioner" [d79c0f13-8bac-4de0-9847-059f608dbabb] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1102 14:15:55.744572  490066 retry.go:31] will retry after 305.697509ms: missing components: kube-dns
	I1102 14:15:56.054980  490066 system_pods.go:86] 8 kube-system pods found
	I1102 14:15:56.055015  490066 system_pods.go:89] "coredns-66bc5c9577-lwp97" [cd5d24d1-8139-448c-9016-c89db9315328] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1102 14:15:56.055024  490066 system_pods.go:89] "etcd-default-k8s-diff-port-786183" [20f2055e-9a44-4af9-ac93-0de08a0929dd] Running
	I1102 14:15:56.055029  490066 system_pods.go:89] "kindnet-pd47j" [2faa4679-6556-4e51-a2a3-88275ddc1fff] Running
	I1102 14:15:56.055034  490066 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-786183" [3820bf6d-1505-48a7-b001-f8d7a0b87b6a] Running
	I1102 14:15:56.055039  490066 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-786183" [cfc3d639-0d89-4860-9f16-633cf0079a2b] Running
	I1102 14:15:56.055043  490066 system_pods.go:89] "kube-proxy-jlf8q" [ffabcc04-6bec-42eb-a759-aeea07668e18] Running
	I1102 14:15:56.055048  490066 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-786183" [a75b2a9e-b854-4624-ac22-8fb38c2173dc] Running
	I1102 14:15:56.055053  490066 system_pods.go:89] "storage-provisioner" [d79c0f13-8bac-4de0-9847-059f608dbabb] Running
	I1102 14:15:56.055065  490066 system_pods.go:126] duration metric: took 561.282727ms to wait for k8s-apps to be running ...
	I1102 14:15:56.055073  490066 system_svc.go:44] waiting for kubelet service to be running ....
	I1102 14:15:56.055134  490066 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1102 14:15:56.068377  490066 system_svc.go:56] duration metric: took 13.286794ms WaitForService to wait for kubelet
	I1102 14:15:56.068406  490066 kubeadm.go:587] duration metric: took 42.695943335s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1102 14:15:56.068427  490066 node_conditions.go:102] verifying NodePressure condition ...
	I1102 14:15:56.071466  490066 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1102 14:15:56.071498  490066 node_conditions.go:123] node cpu capacity is 2
	I1102 14:15:56.071513  490066 node_conditions.go:105] duration metric: took 3.080531ms to run NodePressure ...
	I1102 14:15:56.071525  490066 start.go:242] waiting for startup goroutines ...
	I1102 14:15:56.071533  490066 start.go:247] waiting for cluster config update ...
	I1102 14:15:56.071551  490066 start.go:256] writing updated cluster config ...
	I1102 14:15:56.071844  490066 ssh_runner.go:195] Run: rm -f paused
	I1102 14:15:56.075786  490066 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1102 14:15:56.080045  490066 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-lwp97" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 14:15:57.085423  490066 pod_ready.go:94] pod "coredns-66bc5c9577-lwp97" is "Ready"
	I1102 14:15:57.085511  490066 pod_ready.go:86] duration metric: took 1.005440865s for pod "coredns-66bc5c9577-lwp97" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 14:15:57.088455  490066 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-786183" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 14:15:57.093301  490066 pod_ready.go:94] pod "etcd-default-k8s-diff-port-786183" is "Ready"
	I1102 14:15:57.093330  490066 pod_ready.go:86] duration metric: took 4.847325ms for pod "etcd-default-k8s-diff-port-786183" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 14:15:57.095644  490066 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-786183" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 14:15:57.100186  490066 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-786183" is "Ready"
	I1102 14:15:57.100252  490066 pod_ready.go:86] duration metric: took 4.582478ms for pod "kube-apiserver-default-k8s-diff-port-786183" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 14:15:57.102567  490066 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-786183" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 14:15:57.283113  490066 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-786183" is "Ready"
	I1102 14:15:57.283143  490066 pod_ready.go:86] duration metric: took 180.551532ms for pod "kube-controller-manager-default-k8s-diff-port-786183" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 14:15:57.483477  490066 pod_ready.go:83] waiting for pod "kube-proxy-jlf8q" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 14:15:57.883611  490066 pod_ready.go:94] pod "kube-proxy-jlf8q" is "Ready"
	I1102 14:15:57.883642  490066 pod_ready.go:86] duration metric: took 400.135704ms for pod "kube-proxy-jlf8q" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 14:15:58.084492  490066 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-786183" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 14:15:58.483720  490066 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-786183" is "Ready"
	I1102 14:15:58.483751  490066 pod_ready.go:86] duration metric: took 399.233405ms for pod "kube-scheduler-default-k8s-diff-port-786183" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 14:15:58.483765  490066 pod_ready.go:40] duration metric: took 2.40794347s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1102 14:15:58.535123  490066 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1102 14:15:58.538400  490066 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-786183" cluster and "default" namespace by default
	W1102 14:15:59.357995  493385 pod_ready.go:104] pod "coredns-66bc5c9577-h7hk7" is not "Ready", error: <nil>
	W1102 14:16:01.359510  493385 pod_ready.go:104] pod "coredns-66bc5c9577-h7hk7" is not "Ready", error: <nil>
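	
	==> sketch: checking a pod's Ready condition <==
	
Both clusters finish in the same loop: fetch each kube-system pod and wait for its PodReady condition to be True (the coredns pod on embed-certs-955646 is still not Ready when this log section ends). A minimal client-go sketch of one such check; the kubeconfig path is illustrative and the pod name is taken from the log:

// podready.go — fetch a pod and report whether its PodReady condition is True.
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // illustrative path
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(),
		"coredns-66bc5c9577-h7hk7", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	ready := false
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
			ready = true
		}
	}
	fmt.Printf("pod %s Ready: %v\n", pod.Name, ready)
}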
	
	
	==> CRI-O <==
	Nov 02 14:15:55 default-k8s-diff-port-786183 crio[872]: time="2025-11-02T14:15:55.689961131Z" level=info msg="Created container 90ff9deb9952551e3099934935e0d096d4fa05470ba861e82f162ebd04943ca4: kube-system/coredns-66bc5c9577-lwp97/coredns" id=2f06fc34-65c3-409c-b7eb-3e887cc04746 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 14:15:55 default-k8s-diff-port-786183 crio[872]: time="2025-11-02T14:15:55.690935614Z" level=info msg="Starting container: 90ff9deb9952551e3099934935e0d096d4fa05470ba861e82f162ebd04943ca4" id=186b8a06-a86e-45a1-87a5-830328b04e3b name=/runtime.v1.RuntimeService/StartContainer
	Nov 02 14:15:55 default-k8s-diff-port-786183 crio[872]: time="2025-11-02T14:15:55.696070286Z" level=info msg="Started container" PID=1767 containerID=90ff9deb9952551e3099934935e0d096d4fa05470ba861e82f162ebd04943ca4 description=kube-system/coredns-66bc5c9577-lwp97/coredns id=186b8a06-a86e-45a1-87a5-830328b04e3b name=/runtime.v1.RuntimeService/StartContainer sandboxID=6e0926bd9f35330770e85db2b2d7fe603b56e42587da76641fd16291c434f0a3
	Nov 02 14:15:59 default-k8s-diff-port-786183 crio[872]: time="2025-11-02T14:15:59.099027334Z" level=info msg="Running pod sandbox: default/busybox/POD" id=fc332fb0-c242-4246-bb90-0007f5ed0092 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 02 14:15:59 default-k8s-diff-port-786183 crio[872]: time="2025-11-02T14:15:59.099096037Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 14:15:59 default-k8s-diff-port-786183 crio[872]: time="2025-11-02T14:15:59.104266041Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:4c08e37b2121d847137cfde208b19f9109135e5c030b3ad0636ce71d5cc6d787 UID:948bb4bd-a717-4efb-ab1a-c2f257304113 NetNS:/var/run/netns/7b051271-f8b4-48fb-9c1d-3975a8f79b2d Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012d0d0}] Aliases:map[]}"
	Nov 02 14:15:59 default-k8s-diff-port-786183 crio[872]: time="2025-11-02T14:15:59.104300338Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 02 14:15:59 default-k8s-diff-port-786183 crio[872]: time="2025-11-02T14:15:59.115837256Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:4c08e37b2121d847137cfde208b19f9109135e5c030b3ad0636ce71d5cc6d787 UID:948bb4bd-a717-4efb-ab1a-c2f257304113 NetNS:/var/run/netns/7b051271-f8b4-48fb-9c1d-3975a8f79b2d Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012d0d0}] Aliases:map[]}"
	Nov 02 14:15:59 default-k8s-diff-port-786183 crio[872]: time="2025-11-02T14:15:59.115985754Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 02 14:15:59 default-k8s-diff-port-786183 crio[872]: time="2025-11-02T14:15:59.119248958Z" level=info msg="Ran pod sandbox 4c08e37b2121d847137cfde208b19f9109135e5c030b3ad0636ce71d5cc6d787 with infra container: default/busybox/POD" id=fc332fb0-c242-4246-bb90-0007f5ed0092 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 02 14:15:59 default-k8s-diff-port-786183 crio[872]: time="2025-11-02T14:15:59.122361743Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=35667e42-7b10-4754-b5a2-a5d62d557ea5 name=/runtime.v1.ImageService/ImageStatus
	Nov 02 14:15:59 default-k8s-diff-port-786183 crio[872]: time="2025-11-02T14:15:59.122484961Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=35667e42-7b10-4754-b5a2-a5d62d557ea5 name=/runtime.v1.ImageService/ImageStatus
	Nov 02 14:15:59 default-k8s-diff-port-786183 crio[872]: time="2025-11-02T14:15:59.12252204Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=35667e42-7b10-4754-b5a2-a5d62d557ea5 name=/runtime.v1.ImageService/ImageStatus
	Nov 02 14:15:59 default-k8s-diff-port-786183 crio[872]: time="2025-11-02T14:15:59.123757999Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=2aac2507-39f1-40c7-8452-05e04c44a048 name=/runtime.v1.ImageService/PullImage
	Nov 02 14:15:59 default-k8s-diff-port-786183 crio[872]: time="2025-11-02T14:15:59.126149112Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 02 14:16:01 default-k8s-diff-port-786183 crio[872]: time="2025-11-02T14:16:01.187996372Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=2aac2507-39f1-40c7-8452-05e04c44a048 name=/runtime.v1.ImageService/PullImage
	Nov 02 14:16:01 default-k8s-diff-port-786183 crio[872]: time="2025-11-02T14:16:01.189055738Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=01e9ec67-94c8-4219-852b-8dd78a5393fe name=/runtime.v1.ImageService/ImageStatus
	Nov 02 14:16:01 default-k8s-diff-port-786183 crio[872]: time="2025-11-02T14:16:01.191845485Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=4b1fae49-4012-4cb6-9499-9c252a608e7b name=/runtime.v1.ImageService/ImageStatus
	Nov 02 14:16:01 default-k8s-diff-port-786183 crio[872]: time="2025-11-02T14:16:01.199393416Z" level=info msg="Creating container: default/busybox/busybox" id=9dffd5f7-1d27-4090-a6d0-f6d5fb6b9755 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 14:16:01 default-k8s-diff-port-786183 crio[872]: time="2025-11-02T14:16:01.199525495Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 14:16:01 default-k8s-diff-port-786183 crio[872]: time="2025-11-02T14:16:01.204700455Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 14:16:01 default-k8s-diff-port-786183 crio[872]: time="2025-11-02T14:16:01.205384129Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 14:16:01 default-k8s-diff-port-786183 crio[872]: time="2025-11-02T14:16:01.221650415Z" level=info msg="Created container 712b36cd9c3027c8057746d065733d616d5327b25484709156ef01cdd919584a: default/busybox/busybox" id=9dffd5f7-1d27-4090-a6d0-f6d5fb6b9755 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 14:16:01 default-k8s-diff-port-786183 crio[872]: time="2025-11-02T14:16:01.222834173Z" level=info msg="Starting container: 712b36cd9c3027c8057746d065733d616d5327b25484709156ef01cdd919584a" id=5d012552-b3b8-442f-91c2-c8c9f347111e name=/runtime.v1.RuntimeService/StartContainer
	Nov 02 14:16:01 default-k8s-diff-port-786183 crio[872]: time="2025-11-02T14:16:01.224510702Z" level=info msg="Started container" PID=1819 containerID=712b36cd9c3027c8057746d065733d616d5327b25484709156ef01cdd919584a description=default/busybox/busybox id=5d012552-b3b8-442f-91c2-c8c9f347111e name=/runtime.v1.RuntimeService/StartContainer sandboxID=4c08e37b2121d847137cfde208b19f9109135e5c030b3ad0636ce71d5cc6d787
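
Note: the crio entries above show the full CRI flow for default/busybox in order: RunPodSandbox, ImageStatus (miss), PullImage, CreateContainer, StartContainer. A minimal way to replay the image-status check by hand, assuming crictl is available on the node (it ships in minikube's node image):

	# open a shell on the node, then query CRI-O the way the kubelet does
	minikube -p default-k8s-diff-port-786183 ssh
	sudo crictl images | grep busybox
	sudo crictl pull gcr.io/k8s-minikube/busybox:1.28.4-glibc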
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	712b36cd9c302       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e   7 seconds ago        Running             busybox                   0                   4c08e37b2121d       busybox                                                default
	90ff9deb99525       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                      12 seconds ago       Running             coredns                   0                   6e0926bd9f353       coredns-66bc5c9577-lwp97                               kube-system
	3d2def327e5c1       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                      12 seconds ago       Running             storage-provisioner       0                   4f9fab5195a80       storage-provisioner                                    kube-system
	7112533e77a77       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                      53 seconds ago       Running             kindnet-cni               0                   81267fb9f90f3       kindnet-pd47j                                          kube-system
	e7c59be2355a5       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                      53 seconds ago       Running             kube-proxy                0                   99a1b9efb16ff       kube-proxy-jlf8q                                       kube-system
	839f3d44270da       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                      About a minute ago   Running             etcd                      0                   eea50887a0a87       etcd-default-k8s-diff-port-786183                      kube-system
	ec079c44d0e71       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                      About a minute ago   Running             kube-controller-manager   0                   f52d6e7580731       kube-controller-manager-default-k8s-diff-port-786183   kube-system
	56c7e908e09f5       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                      About a minute ago   Running             kube-apiserver            0                   85d4f47a745a7       kube-apiserver-default-k8s-diff-port-786183            kube-system
	50012152cfc1b       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                      About a minute ago   Running             kube-scheduler            0                   55bd2081b6b04       kube-scheduler-default-k8s-diff-port-786183            kube-system
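
Note: the IDs in this table are truncated, but from a node shell (minikube ssh) crictl should resolve unambiguous ID prefixes, so they can be fed back in as-is to correlate the table with live state (a sketch, using the prefixes shown above):

	sudo crictl ps --name busybox        # running containers, filtered by name
	sudo crictl inspect 712b36cd9c302    # container spec and state
	sudo crictl inspectp 4c08e37b2121d   # the pod sandbox hosting it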
	
	
	==> coredns [90ff9deb9952551e3099934935e0d096d4fa05470ba861e82f162ebd04943ca4] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:48411 - 45735 "HINFO IN 5111527077003794027.8155491235810293265. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.030886926s
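
Note: the lone NXDOMAIN above is expected: it is the loop plugin's random HINFO self-query, and NXDOMAIN is the healthy answer. An end-to-end check through this resolver, reusing the busybox pod the test already deployed:

	kubectl --context default-k8s-diff-port-786183 exec busybox -- nslookup kubernetes.default
	kubectl --context default-k8s-diff-port-786183 -n kube-system logs -l k8s-app=kube-dns --tail=5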
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-786183
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-786183
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=170a9221ec214abbddb4c7cdac340516a92b239a
	                    minikube.k8s.io/name=default-k8s-diff-port-786183
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_02T14_15_09_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 02 Nov 2025 14:15:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-786183
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 02 Nov 2025 14:15:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 02 Nov 2025 14:15:55 +0000   Sun, 02 Nov 2025 14:15:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 02 Nov 2025 14:15:55 +0000   Sun, 02 Nov 2025 14:15:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 02 Nov 2025 14:15:55 +0000   Sun, 02 Nov 2025 14:15:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 02 Nov 2025 14:15:55 +0000   Sun, 02 Nov 2025 14:15:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-786183
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                0782cb70-5112-4773-81bc-acca336842b5
	  Boot ID:                    4c303cf4-c0b6-4c0d-aa1a-d7509feb15e7
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-lwp97                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     55s
	  kube-system                 etcd-default-k8s-diff-port-786183                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         60s
	  kube-system                 kindnet-pd47j                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      55s
	  kube-system                 kube-apiserver-default-k8s-diff-port-786183             250m (12%)    0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-786183    200m (10%)    0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-proxy-jlf8q                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kube-system                 kube-scheduler-default-k8s-diff-port-786183             100m (5%)     0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 53s   kube-proxy       
	  Normal   Starting                 60s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 60s   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  60s   kubelet          Node default-k8s-diff-port-786183 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    60s   kubelet          Node default-k8s-diff-port-786183 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     60s   kubelet          Node default-k8s-diff-port-786183 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           56s   node-controller  Node default-k8s-diff-port-786183 event: Registered Node default-k8s-diff-port-786183 in Controller
	  Normal   NodeReady                13s   kubelet          Node default-k8s-diff-port-786183 status is now: NodeReady
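
Note: the event trail agrees with the conditions block above: NodeReady fired at 14:15:55, 13 seconds before this dump was taken. The Ready condition can be pulled directly with jsonpath:

	kubectl --context default-k8s-diff-port-786183 get node default-k8s-diff-port-786183 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'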
	
	
	==> dmesg <==
	[Nov 2 13:55] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:56] overlayfs: idmapped layers are currently not supported
	[  +3.515963] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:57] overlayfs: idmapped layers are currently not supported
	[ +24.836033] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:58] overlayfs: idmapped layers are currently not supported
	[ +23.362553] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:59] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:01] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:02] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:03] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:04] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:06] overlayfs: idmapped layers are currently not supported
	[ +50.469589] kauditd_printk_skb: 8 callbacks suppressed
	[Nov 2 14:07] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:08] overlayfs: idmapped layers are currently not supported
	[ +11.089512] overlayfs: idmapped layers are currently not supported
	[ +33.821233] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:09] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:10] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:11] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:13] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:14] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:15] overlayfs: idmapped layers are currently not supported
	[ +29.099512] overlayfs: idmapped layers are currently not supported
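
Note: with the docker driver the node shares the host kernel, so this dmesg is host-wide; the repeated "idmapped layers" lines appear to be overlayfs warnings emitted as containers start on this 5.15 kernel, not errors from the cluster under test. To filter them out and look for anything else:

	minikube -p default-k8s-diff-port-786183 ssh -- sudo dmesg | grep -v 'idmapped layers' | tail -n 20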
	
	
	==> etcd [839f3d44270dac6e2c0179e826c060259cc3b61764fe8d99ad2858d4c362db66] <==
	{"level":"warn","ts":"2025-11-02T14:15:04.118941Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:15:04.138224Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:15:04.158112Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38362","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:15:04.180939Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:15:04.201519Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:15:04.243556Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:15:04.244338Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:15:04.258408Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:15:04.276867Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38456","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:15:04.302961Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:15:04.324716Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:15:04.359720Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38504","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:15:04.390708Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:15:04.414674Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:15:04.434512Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:15:04.449987Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:15:04.463653Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:15:04.483288Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:15:04.503423Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:15:04.518236Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:15:04.540089Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:15:04.571443Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:15:04.587693Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:15:04.620713Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:15:04.717847Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38720","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 14:16:08 up  2:58,  0 user,  load average: 3.22, 3.36, 2.94
	Linux default-k8s-diff-port-786183 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7112533e77a778a2859c4fa57054215ddd79112db029b36aefe3ea127cd3aa03] <==
	I1102 14:15:14.614243       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1102 14:15:14.618922       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1102 14:15:14.619082       1 main.go:148] setting mtu 1500 for CNI 
	I1102 14:15:14.619101       1 main.go:178] kindnetd IP family: "ipv4"
	I1102 14:15:14.619116       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-02T14:15:14Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1102 14:15:14.814992       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1102 14:15:14.815071       1 controller.go:381] "Waiting for informer caches to sync"
	I1102 14:15:14.815106       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1102 14:15:14.815801       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1102 14:15:44.815946       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1102 14:15:44.815948       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1102 14:15:44.816070       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1102 14:15:44.817265       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1102 14:15:45.916272       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1102 14:15:45.916364       1 metrics.go:72] Registering metrics
	I1102 14:15:45.916455       1 controller.go:711] "Syncing nftables rules"
	I1102 14:15:54.821715       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1102 14:15:54.821793       1 main.go:301] handling current node
	I1102 14:16:04.814692       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1102 14:16:04.814729       1 main.go:301] handling current node
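
Note: the four "i/o timeout" errors show kindnet could not reach the apiserver ClusterIP (10.96.0.1:443) during its first 30 seconds; the later "Caches are synced" line shows the watches recovered on retry. Two quick checks if this were persisting:

	kubectl --context default-k8s-diff-port-786183 -n kube-system logs daemonset/kindnet --tail=20
	kubectl --context default-k8s-diff-port-786183 get svc,endpoints kubernetes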
	
	
	==> kube-apiserver [56c7e908e09f57c8c58af12bd21f287bccc77852e84eb555a0378b6bee974288] <==
	I1102 14:15:05.763433       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1102 14:15:05.769011       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1102 14:15:05.791190       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1102 14:15:05.807199       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1102 14:15:05.807871       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1102 14:15:05.807978       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1102 14:15:05.820495       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1102 14:15:06.426331       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1102 14:15:06.435690       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1102 14:15:06.435750       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1102 14:15:07.252039       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1102 14:15:07.303101       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1102 14:15:07.498472       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1102 14:15:07.557082       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1102 14:15:07.559800       1 controller.go:667] quota admission added evaluator for: endpoints
	I1102 14:15:07.584651       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1102 14:15:07.620506       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1102 14:15:08.301456       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1102 14:15:08.323734       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1102 14:15:08.355492       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1102 14:15:13.365040       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1102 14:15:13.551936       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1102 14:15:13.683169       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1102 14:15:13.734570       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1102 14:16:06.915691       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8444->192.168.76.1:41296: use of closed network connection
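
Note: the closing "use of closed network connection" error is a client on 192.168.76.1 (the host side of the docker network) dropping its connection to port 8444 mid-read, typical when a test process exits; it is not an apiserver fault. Aggregate readiness can be queried directly:

	kubectl --context default-k8s-diff-port-786183 get --raw '/readyz?verbose' | tail -n 5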
	
	
	==> kube-controller-manager [ec079c44d0e71f555352f4dd0682261c22a4917d35588780e2aaa03b16374639] <==
	I1102 14:15:12.613351       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1102 14:15:12.613405       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1102 14:15:12.613445       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1102 14:15:12.613478       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1102 14:15:12.613483       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1102 14:15:12.613489       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1102 14:15:12.622377       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1102 14:15:12.634591       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1102 14:15:12.635405       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-786183" podCIDRs=["10.244.0.0/24"]
	I1102 14:15:12.642693       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1102 14:15:12.645657       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1102 14:15:12.650367       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1102 14:15:12.651605       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1102 14:15:12.651894       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1102 14:15:12.653029       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1102 14:15:12.653088       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1102 14:15:12.653446       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1102 14:15:12.654678       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1102 14:15:12.654917       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1102 14:15:12.655666       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1102 14:15:12.656378       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1102 14:15:12.658218       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1102 14:15:12.670848       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1102 14:15:12.671136       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1102 14:15:57.611290       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
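
Note: everything here is routine startup: informer caches syncing, the PodCIDR assignment for the node, and the node-lifecycle controller leaving master disruption mode once the node went Ready. The active controller-manager instance can be confirmed via its leader lease:

	kubectl --context default-k8s-diff-port-786183 -n kube-system get lease kube-controller-manager \
	  -o jsonpath='{.spec.holderIdentity}{"\n"}'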
	
	
	==> kube-proxy [e7c59be2355a5756db889d379064523a1953c1913cf89b3ccea010287a739d95] <==
	I1102 14:15:14.581132       1 server_linux.go:53] "Using iptables proxy"
	I1102 14:15:14.682861       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1102 14:15:14.786967       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1102 14:15:14.787438       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1102 14:15:14.787525       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1102 14:15:14.807130       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1102 14:15:14.807260       1 server_linux.go:132] "Using iptables Proxier"
	I1102 14:15:14.811502       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1102 14:15:14.811871       1 server.go:527] "Version info" version="v1.34.1"
	I1102 14:15:14.812104       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1102 14:15:14.813431       1 config.go:200] "Starting service config controller"
	I1102 14:15:14.813538       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1102 14:15:14.813585       1 config.go:106] "Starting endpoint slice config controller"
	I1102 14:15:14.813616       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1102 14:15:14.813662       1 config.go:403] "Starting serviceCIDR config controller"
	I1102 14:15:14.813689       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1102 14:15:14.818241       1 config.go:309] "Starting node config controller"
	I1102 14:15:14.818324       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1102 14:15:14.818356       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1102 14:15:14.913663       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1102 14:15:14.913775       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1102 14:15:14.913797       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
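
Note: the only error here is the standing configuration hint: with nodePortAddresses unset, NodePort services accept connections on every local IP. Adopting the hint would mean editing the kube-proxy ConfigMap (a sketch; the field lives in config.conf under kubeadm's layout):

	kubectl --context default-k8s-diff-port-786183 -n kube-system get configmap kube-proxy -o yaml | grep -n nodePortAddresses
	# per the hint in the log, the narrower setting would be:
	#   nodePortAddresses: ["primary"]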
	
	
	==> kube-scheduler [50012152cfc1b947d9c577e75d9bdc802d3f94019052dbee71dd7fe640714403] <==
	I1102 14:15:06.615861       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1102 14:15:06.618321       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1102 14:15:06.618417       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1102 14:15:06.618449       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1102 14:15:06.618468       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1102 14:15:06.627057       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1102 14:15:06.627221       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1102 14:15:06.627322       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1102 14:15:06.627401       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1102 14:15:06.627472       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1102 14:15:06.627623       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1102 14:15:06.630766       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1102 14:15:06.633592       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1102 14:15:06.633736       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1102 14:15:06.633828       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1102 14:15:06.633902       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1102 14:15:06.634001       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1102 14:15:06.634302       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1102 14:15:06.634400       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1102 14:15:06.634495       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1102 14:15:06.634584       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1102 14:15:06.634823       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1102 14:15:06.635182       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1102 14:15:06.635237       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	I1102 14:15:07.819258       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
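
Note: the block of "Failed to watch ... is forbidden" errors is the usual startup race: the scheduler begins listing resources before the RBAC bootstrap policy has been reconciled, and the final "Caches are synced" line shows it cleared. The permissions can be spot-checked after the fact:

	kubectl --context default-k8s-diff-port-786183 auth can-i list pods --as=system:kube-scheduler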
	
	
	==> kubelet <==
	Nov 02 14:15:13 default-k8s-diff-port-786183 kubelet[1339]: I1102 14:15:13.543124    1339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2faa4679-6556-4e51-a2a3-88275ddc1fff-lib-modules\") pod \"kindnet-pd47j\" (UID: \"2faa4679-6556-4e51-a2a3-88275ddc1fff\") " pod="kube-system/kindnet-pd47j"
	Nov 02 14:15:13 default-k8s-diff-port-786183 kubelet[1339]: I1102 14:15:13.543170    1339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ffabcc04-6bec-42eb-a759-aeea07668e18-kube-proxy\") pod \"kube-proxy-jlf8q\" (UID: \"ffabcc04-6bec-42eb-a759-aeea07668e18\") " pod="kube-system/kube-proxy-jlf8q"
	Nov 02 14:15:13 default-k8s-diff-port-786183 kubelet[1339]: I1102 14:15:13.543188    1339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lbqt\" (UniqueName: \"kubernetes.io/projected/ffabcc04-6bec-42eb-a759-aeea07668e18-kube-api-access-5lbqt\") pod \"kube-proxy-jlf8q\" (UID: \"ffabcc04-6bec-42eb-a759-aeea07668e18\") " pod="kube-system/kube-proxy-jlf8q"
	Nov 02 14:15:13 default-k8s-diff-port-786183 kubelet[1339]: I1102 14:15:13.543233    1339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qz9vr\" (UniqueName: \"kubernetes.io/projected/2faa4679-6556-4e51-a2a3-88275ddc1fff-kube-api-access-qz9vr\") pod \"kindnet-pd47j\" (UID: \"2faa4679-6556-4e51-a2a3-88275ddc1fff\") " pod="kube-system/kindnet-pd47j"
	Nov 02 14:15:13 default-k8s-diff-port-786183 kubelet[1339]: I1102 14:15:13.543252    1339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ffabcc04-6bec-42eb-a759-aeea07668e18-lib-modules\") pod \"kube-proxy-jlf8q\" (UID: \"ffabcc04-6bec-42eb-a759-aeea07668e18\") " pod="kube-system/kube-proxy-jlf8q"
	Nov 02 14:15:13 default-k8s-diff-port-786183 kubelet[1339]: E1102 14:15:13.779434    1339 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 02 14:15:13 default-k8s-diff-port-786183 kubelet[1339]: E1102 14:15:13.779485    1339 projected.go:196] Error preparing data for projected volume kube-api-access-qz9vr for pod kube-system/kindnet-pd47j: configmap "kube-root-ca.crt" not found
	Nov 02 14:15:13 default-k8s-diff-port-786183 kubelet[1339]: E1102 14:15:13.779572    1339 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2faa4679-6556-4e51-a2a3-88275ddc1fff-kube-api-access-qz9vr podName:2faa4679-6556-4e51-a2a3-88275ddc1fff nodeName:}" failed. No retries permitted until 2025-11-02 14:15:14.279539753 +0000 UTC m=+6.056527906 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-qz9vr" (UniqueName: "kubernetes.io/projected/2faa4679-6556-4e51-a2a3-88275ddc1fff-kube-api-access-qz9vr") pod "kindnet-pd47j" (UID: "2faa4679-6556-4e51-a2a3-88275ddc1fff") : configmap "kube-root-ca.crt" not found
	Nov 02 14:15:13 default-k8s-diff-port-786183 kubelet[1339]: E1102 14:15:13.779657    1339 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 02 14:15:13 default-k8s-diff-port-786183 kubelet[1339]: E1102 14:15:13.779667    1339 projected.go:196] Error preparing data for projected volume kube-api-access-5lbqt for pod kube-system/kube-proxy-jlf8q: configmap "kube-root-ca.crt" not found
	Nov 02 14:15:13 default-k8s-diff-port-786183 kubelet[1339]: E1102 14:15:13.779695    1339 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ffabcc04-6bec-42eb-a759-aeea07668e18-kube-api-access-5lbqt podName:ffabcc04-6bec-42eb-a759-aeea07668e18 nodeName:}" failed. No retries permitted until 2025-11-02 14:15:14.279686011 +0000 UTC m=+6.056674164 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-5lbqt" (UniqueName: "kubernetes.io/projected/ffabcc04-6bec-42eb-a759-aeea07668e18-kube-api-access-5lbqt") pod "kube-proxy-jlf8q" (UID: "ffabcc04-6bec-42eb-a759-aeea07668e18") : configmap "kube-root-ca.crt" not found
	Nov 02 14:15:14 default-k8s-diff-port-786183 kubelet[1339]: I1102 14:15:14.353386    1339 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 02 14:15:14 default-k8s-diff-port-786183 kubelet[1339]: W1102 14:15:14.394673    1339 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/cf96e33bc393b6a3d6d55863f66411b90a67bf689beea00fe018be1f7c4b996e/crio-81267fb9f90f356707a9a354b982d28058f1bad134f84af4420c2fd2e4e9e25a WatchSource:0}: Error finding container 81267fb9f90f356707a9a354b982d28058f1bad134f84af4420c2fd2e4e9e25a: Status 404 returned error can't find the container with id 81267fb9f90f356707a9a354b982d28058f1bad134f84af4420c2fd2e4e9e25a
	Nov 02 14:15:14 default-k8s-diff-port-786183 kubelet[1339]: I1102 14:15:14.681773    1339 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jlf8q" podStartSLOduration=1.6817554019999998 podStartE2EDuration="1.681755402s" podCreationTimestamp="2025-11-02 14:15:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-02 14:15:14.662539556 +0000 UTC m=+6.439527709" watchObservedRunningTime="2025-11-02 14:15:14.681755402 +0000 UTC m=+6.458743563"
	Nov 02 14:15:14 default-k8s-diff-port-786183 kubelet[1339]: I1102 14:15:14.723767    1339 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-pd47j" podStartSLOduration=1.723746839 podStartE2EDuration="1.723746839s" podCreationTimestamp="2025-11-02 14:15:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-02 14:15:14.705161777 +0000 UTC m=+6.482149946" watchObservedRunningTime="2025-11-02 14:15:14.723746839 +0000 UTC m=+6.500734992"
	Nov 02 14:15:55 default-k8s-diff-port-786183 kubelet[1339]: I1102 14:15:55.247185    1339 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 02 14:15:55 default-k8s-diff-port-786183 kubelet[1339]: I1102 14:15:55.332117    1339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cd5d24d1-8139-448c-9016-c89db9315328-config-volume\") pod \"coredns-66bc5c9577-lwp97\" (UID: \"cd5d24d1-8139-448c-9016-c89db9315328\") " pod="kube-system/coredns-66bc5c9577-lwp97"
	Nov 02 14:15:55 default-k8s-diff-port-786183 kubelet[1339]: I1102 14:15:55.332173    1339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txntp\" (UniqueName: \"kubernetes.io/projected/cd5d24d1-8139-448c-9016-c89db9315328-kube-api-access-txntp\") pod \"coredns-66bc5c9577-lwp97\" (UID: \"cd5d24d1-8139-448c-9016-c89db9315328\") " pod="kube-system/coredns-66bc5c9577-lwp97"
	Nov 02 14:15:55 default-k8s-diff-port-786183 kubelet[1339]: I1102 14:15:55.332239    1339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9nb8\" (UniqueName: \"kubernetes.io/projected/d79c0f13-8bac-4de0-9847-059f608dbabb-kube-api-access-r9nb8\") pod \"storage-provisioner\" (UID: \"d79c0f13-8bac-4de0-9847-059f608dbabb\") " pod="kube-system/storage-provisioner"
	Nov 02 14:15:55 default-k8s-diff-port-786183 kubelet[1339]: I1102 14:15:55.332268    1339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/d79c0f13-8bac-4de0-9847-059f608dbabb-tmp\") pod \"storage-provisioner\" (UID: \"d79c0f13-8bac-4de0-9847-059f608dbabb\") " pod="kube-system/storage-provisioner"
	Nov 02 14:15:55 default-k8s-diff-port-786183 kubelet[1339]: W1102 14:15:55.605499    1339 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/cf96e33bc393b6a3d6d55863f66411b90a67bf689beea00fe018be1f7c4b996e/crio-4f9fab5195a80702ee39e9f1626f82e6bf9696bf571584ff30bd0adceafb8e49 WatchSource:0}: Error finding container 4f9fab5195a80702ee39e9f1626f82e6bf9696bf571584ff30bd0adceafb8e49: Status 404 returned error can't find the container with id 4f9fab5195a80702ee39e9f1626f82e6bf9696bf571584ff30bd0adceafb8e49
	Nov 02 14:15:55 default-k8s-diff-port-786183 kubelet[1339]: W1102 14:15:55.637944    1339 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/cf96e33bc393b6a3d6d55863f66411b90a67bf689beea00fe018be1f7c4b996e/crio-6e0926bd9f35330770e85db2b2d7fe603b56e42587da76641fd16291c434f0a3 WatchSource:0}: Error finding container 6e0926bd9f35330770e85db2b2d7fe603b56e42587da76641fd16291c434f0a3: Status 404 returned error can't find the container with id 6e0926bd9f35330770e85db2b2d7fe603b56e42587da76641fd16291c434f0a3
	Nov 02 14:15:55 default-k8s-diff-port-786183 kubelet[1339]: I1102 14:15:55.790129    1339 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=41.790111721 podStartE2EDuration="41.790111721s" podCreationTimestamp="2025-11-02 14:15:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-02 14:15:55.767801708 +0000 UTC m=+47.544789861" watchObservedRunningTime="2025-11-02 14:15:55.790111721 +0000 UTC m=+47.567099873"
	Nov 02 14:15:56 default-k8s-diff-port-786183 kubelet[1339]: I1102 14:15:56.753554    1339 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-lwp97" podStartSLOduration=43.753522302 podStartE2EDuration="43.753522302s" podCreationTimestamp="2025-11-02 14:15:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-02 14:15:55.791698296 +0000 UTC m=+47.568686465" watchObservedRunningTime="2025-11-02 14:15:56.753522302 +0000 UTC m=+48.530510455"
	Nov 02 14:15:58 default-k8s-diff-port-786183 kubelet[1339]: I1102 14:15:58.852641    1339 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkn5v\" (UniqueName: \"kubernetes.io/projected/948bb4bd-a717-4efb-ab1a-c2f257304113-kube-api-access-nkn5v\") pod \"busybox\" (UID: \"948bb4bd-a717-4efb-ab1a-c2f257304113\") " pod="default/busybox"
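
Note: the kubelet errors are all one pattern: projected service-account volumes failed once because kube-root-ca.crt had not been published yet, then mounted on the 500ms retry; the "Failed to process watch event ... 404" warnings look like a race between cadvisor's watch and CRI-O creating the container, and none of it kept pods from starting. To pull kubelet logs fresher than this dump:

	minikube -p default-k8s-diff-port-786183 ssh -- sudo journalctl -u kubelet --no-pager --since '10 min ago' | tail -n 20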
	
	
	==> storage-provisioner [3d2def327e5c1e37ec981d6fe2aa2c4f661099b0807cc0496281b539fba4dc3b] <==
	I1102 14:15:55.695187       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1102 14:15:55.719137       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1102 14:15:55.719249       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1102 14:15:55.722311       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:15:55.754790       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1102 14:15:55.755111       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1102 14:15:55.755341       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-786183_819f5fea-ffa7-4390-9add-a6f380bf6ae6!
	I1102 14:15:55.756288       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f08a39b4-71c9-422d-9d61-86036126fe6f", APIVersion:"v1", ResourceVersion:"428", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-786183_819f5fea-ffa7-4390-9add-a6f380bf6ae6 became leader
	W1102 14:15:55.783055       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:15:55.792030       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1102 14:15:55.855840       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-786183_819f5fea-ffa7-4390-9add-a6f380bf6ae6!
	W1102 14:15:57.795548       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:15:57.800445       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:15:59.803240       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:15:59.808087       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:16:01.811333       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:16:01.815660       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:16:03.818922       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:16:03.823935       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:16:05.829863       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:16:05.837341       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:16:07.841461       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:16:07.846885       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
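
Note: the repeating deprecation warnings come from the provisioner's leader election, which still uses the legacy Endpoints lock (kube-system/k8s.io-minikube-hostpath) and renews it every ~2s; each renewal round-trip triggers the server's v1 Endpoints warning. The lock can be inspected, assuming the standard endpoints-based leader-election annotation:

	kubectl --context default-k8s-diff-port-786183 -n kube-system get endpoints k8s.io-minikube-hostpath \
	  -o jsonpath='{.metadata.annotations.control-plane\.alpha\.kubernetes\.io/leader}{"\n"}'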
	
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-786183 -n default-k8s-diff-port-786183
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-786183 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.67s)
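
Note: to reproduce just this failure instead of the whole suite, Go's subtest path syntax selects it directly (a sketch, assuming the upstream minikube repo layout and a prebuilt out/minikube-linux-arm64 available to the harness):

	go test ./test/integration -run 'TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive' -timeout 60m -v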
x
+
TestStartStop/group/embed-certs/serial/Pause (8.84s)
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-955646 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p embed-certs-955646 --alsologtostderr -v=1: exit status 80 (2.323117612s)
-- stdout --
	* Pausing node embed-certs-955646 ... 
	
	
-- /stdout --
** stderr ** 
	I1102 14:16:28.273719  497436 out.go:360] Setting OutFile to fd 1 ...
	I1102 14:16:28.273939  497436 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 14:16:28.273971  497436 out.go:374] Setting ErrFile to fd 2...
	I1102 14:16:28.273991  497436 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 14:16:28.274266  497436 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-293314/.minikube/bin
	I1102 14:16:28.274573  497436 out.go:368] Setting JSON to false
	I1102 14:16:28.274646  497436 mustload.go:66] Loading cluster: embed-certs-955646
	I1102 14:16:28.275071  497436 config.go:182] Loaded profile config "embed-certs-955646": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 14:16:28.275616  497436 cli_runner.go:164] Run: docker container inspect embed-certs-955646 --format={{.State.Status}}
	I1102 14:16:28.304395  497436 host.go:66] Checking if "embed-certs-955646" exists ...
	I1102 14:16:28.304748  497436 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 14:16:28.397983  497436 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:63 SystemTime:2025-11-02 14:16:28.387912662 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1102 14:16:28.398828  497436 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1761658712-21800/minikube-v1.37.0-1761658712-21800-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1761658712-21800-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-955646 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1102 14:16:28.402451  497436 out.go:179] * Pausing node embed-certs-955646 ... 
	I1102 14:16:28.407205  497436 host.go:66] Checking if "embed-certs-955646" exists ...
	I1102 14:16:28.407548  497436 ssh_runner.go:195] Run: systemctl --version
	I1102 14:16:28.407591  497436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-955646
	I1102 14:16:28.429298  497436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33451 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/embed-certs-955646/id_rsa Username:docker}
	I1102 14:16:28.542023  497436 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1102 14:16:28.566135  497436 pause.go:52] kubelet running: true
	I1102 14:16:28.566199  497436 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1102 14:16:28.901767  497436 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1102 14:16:28.901846  497436 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1102 14:16:29.007953  497436 cri.go:89] found id: "6da0839e1d2999ea3f428a5acfe5a837f84636d51064907eb29cfe7f2701f8b4"
	I1102 14:16:29.008042  497436 cri.go:89] found id: "00f450128bd23b7b717ccc9034de35182d4ca2b29f00a757b513c4ef9dae1e76"
	I1102 14:16:29.008066  497436 cri.go:89] found id: "e62f6bc1f097c21e3482b5659534d25d050cbbe3ad2fc4ee473624d94c2098dd"
	I1102 14:16:29.008101  497436 cri.go:89] found id: "c8dd4cc06305b2c6b07d015128b0d599ee5422ea6ff0b80a3124fbb4256c3cbe"
	I1102 14:16:29.008124  497436 cri.go:89] found id: "a67c468c1c763f272b0a9c52725437da784062dfc833ddc965b3fbeb8cca238c"
	I1102 14:16:29.008144  497436 cri.go:89] found id: "f18821cbf7e95d9a372afcb877b644585ef683291b4420587127c19d9c80ed5d"
	I1102 14:16:29.008165  497436 cri.go:89] found id: "6eca59e1e67071a7fc5a83a34cccd31a907855a85d0659f4b0148c6382cc8beb"
	I1102 14:16:29.008200  497436 cri.go:89] found id: "cf67be7dd17d48724284cdf38b21f101a7c4398491b1bd72bc00ab1299492eff"
	I1102 14:16:29.008217  497436 cri.go:89] found id: "b2cda95c0fa73867332aa42c9cd6aad92c60f000d6837089bac2ad63937e9752"
	I1102 14:16:29.008244  497436 cri.go:89] found id: "f13f3bd50aa03124042acc9a5370ba88dc79123989a79b756e5c45dc349d6017"
	I1102 14:16:29.008280  497436 cri.go:89] found id: "9bc5d7391685e8a5cd00f7fa23ac16536bf7625564882416c6a722715462fbb0"
	I1102 14:16:29.008297  497436 cri.go:89] found id: ""
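	Note: the pause path shown here first disables the kubelet, then enumerates CRI containers in the kube-system, kubernetes-dashboard and istio-operator namespaces by pod-namespace label, and finally asks runc for the running-container list as a precursor to pausing them. The same sequence, reproduced from the commands in this log, can be run by hand on the node:
	
		sudo systemctl disable --now kubelet
		sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
		sudo runc list -f json
	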
	I1102 14:16:29.008380  497436 ssh_runner.go:195] Run: sudo runc list -f json
	I1102 14:16:29.022763  497436 retry.go:31] will retry after 153.739382ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T14:16:29Z" level=error msg="open /run/runc: no such file or directory"
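	Note: `runc list` reads runc's state directory, which defaults to /run/runc when running as root. The "open /run/runc: no such file or directory" failures here indicate no state was ever written there, which is consistent with cri-o keeping its runc state under a different root. A sketch of two ways around this, assuming the runtime root differs (the placeholder path must be taken from the runtime_root setting in the node's crio configuration):
	
		# sketch: point runc at the state root the runtime actually uses
		sudo runc --root <runtime_root from crio.conf> list -f json
		# or bypass runc state entirely and ask the CRI socket for running containers
		sudo crictl ps --quiet
	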
	I1102 14:16:29.177177  497436 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1102 14:16:29.192855  497436 pause.go:52] kubelet running: false
	I1102 14:16:29.192921  497436 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1102 14:16:29.418717  497436 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1102 14:16:29.418814  497436 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1102 14:16:29.511638  497436 cri.go:89] found id: "6da0839e1d2999ea3f428a5acfe5a837f84636d51064907eb29cfe7f2701f8b4"
	I1102 14:16:29.511664  497436 cri.go:89] found id: "00f450128bd23b7b717ccc9034de35182d4ca2b29f00a757b513c4ef9dae1e76"
	I1102 14:16:29.511669  497436 cri.go:89] found id: "e62f6bc1f097c21e3482b5659534d25d050cbbe3ad2fc4ee473624d94c2098dd"
	I1102 14:16:29.511673  497436 cri.go:89] found id: "c8dd4cc06305b2c6b07d015128b0d599ee5422ea6ff0b80a3124fbb4256c3cbe"
	I1102 14:16:29.511676  497436 cri.go:89] found id: "a67c468c1c763f272b0a9c52725437da784062dfc833ddc965b3fbeb8cca238c"
	I1102 14:16:29.511679  497436 cri.go:89] found id: "f18821cbf7e95d9a372afcb877b644585ef683291b4420587127c19d9c80ed5d"
	I1102 14:16:29.511683  497436 cri.go:89] found id: "6eca59e1e67071a7fc5a83a34cccd31a907855a85d0659f4b0148c6382cc8beb"
	I1102 14:16:29.511686  497436 cri.go:89] found id: "cf67be7dd17d48724284cdf38b21f101a7c4398491b1bd72bc00ab1299492eff"
	I1102 14:16:29.511689  497436 cri.go:89] found id: "b2cda95c0fa73867332aa42c9cd6aad92c60f000d6837089bac2ad63937e9752"
	I1102 14:16:29.511696  497436 cri.go:89] found id: "f13f3bd50aa03124042acc9a5370ba88dc79123989a79b756e5c45dc349d6017"
	I1102 14:16:29.511703  497436 cri.go:89] found id: "9bc5d7391685e8a5cd00f7fa23ac16536bf7625564882416c6a722715462fbb0"
	I1102 14:16:29.511706  497436 cri.go:89] found id: ""
	I1102 14:16:29.511752  497436 ssh_runner.go:195] Run: sudo runc list -f json
	I1102 14:16:29.523570  497436 retry.go:31] will retry after 465.435466ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T14:16:29Z" level=error msg="open /run/runc: no such file or directory"
	I1102 14:16:29.989186  497436 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1102 14:16:30.007768  497436 pause.go:52] kubelet running: false
	I1102 14:16:30.007839  497436 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1102 14:16:30.313083  497436 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1102 14:16:30.313176  497436 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1102 14:16:30.461744  497436 cri.go:89] found id: "6da0839e1d2999ea3f428a5acfe5a837f84636d51064907eb29cfe7f2701f8b4"
	I1102 14:16:30.461770  497436 cri.go:89] found id: "00f450128bd23b7b717ccc9034de35182d4ca2b29f00a757b513c4ef9dae1e76"
	I1102 14:16:30.461776  497436 cri.go:89] found id: "e62f6bc1f097c21e3482b5659534d25d050cbbe3ad2fc4ee473624d94c2098dd"
	I1102 14:16:30.461780  497436 cri.go:89] found id: "c8dd4cc06305b2c6b07d015128b0d599ee5422ea6ff0b80a3124fbb4256c3cbe"
	I1102 14:16:30.461797  497436 cri.go:89] found id: "a67c468c1c763f272b0a9c52725437da784062dfc833ddc965b3fbeb8cca238c"
	I1102 14:16:30.461801  497436 cri.go:89] found id: "f18821cbf7e95d9a372afcb877b644585ef683291b4420587127c19d9c80ed5d"
	I1102 14:16:30.461804  497436 cri.go:89] found id: "6eca59e1e67071a7fc5a83a34cccd31a907855a85d0659f4b0148c6382cc8beb"
	I1102 14:16:30.461808  497436 cri.go:89] found id: "cf67be7dd17d48724284cdf38b21f101a7c4398491b1bd72bc00ab1299492eff"
	I1102 14:16:30.461811  497436 cri.go:89] found id: "b2cda95c0fa73867332aa42c9cd6aad92c60f000d6837089bac2ad63937e9752"
	I1102 14:16:30.461817  497436 cri.go:89] found id: "f13f3bd50aa03124042acc9a5370ba88dc79123989a79b756e5c45dc349d6017"
	I1102 14:16:30.461823  497436 cri.go:89] found id: "9bc5d7391685e8a5cd00f7fa23ac16536bf7625564882416c6a722715462fbb0"
	I1102 14:16:30.461826  497436 cri.go:89] found id: ""
	I1102 14:16:30.461876  497436 ssh_runner.go:195] Run: sudo runc list -f json
	I1102 14:16:30.486975  497436 out.go:203] 
	W1102 14:16:30.490309  497436 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T14:16:30Z" level=error msg="open /run/runc: no such file or directory"
	
	W1102 14:16:30.490338  497436 out.go:285] * 
	W1102 14:16:30.503696  497436 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1102 14:16:30.508107  497436 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p embed-certs-955646 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-955646
helpers_test.go:243: (dbg) docker inspect embed-certs-955646:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "30c758ef671a78927180fb97bde8b1a9a094e365398db33e109ee628c96e6553",
	        "Created": "2025-11-02T14:13:39.788499711Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 493513,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-02T14:15:23.189624529Z",
	            "FinishedAt": "2025-11-02T14:15:22.382373342Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/30c758ef671a78927180fb97bde8b1a9a094e365398db33e109ee628c96e6553/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/30c758ef671a78927180fb97bde8b1a9a094e365398db33e109ee628c96e6553/hostname",
	        "HostsPath": "/var/lib/docker/containers/30c758ef671a78927180fb97bde8b1a9a094e365398db33e109ee628c96e6553/hosts",
	        "LogPath": "/var/lib/docker/containers/30c758ef671a78927180fb97bde8b1a9a094e365398db33e109ee628c96e6553/30c758ef671a78927180fb97bde8b1a9a094e365398db33e109ee628c96e6553-json.log",
	        "Name": "/embed-certs-955646",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-955646:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-955646",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "30c758ef671a78927180fb97bde8b1a9a094e365398db33e109ee628c96e6553",
	                "LowerDir": "/var/lib/docker/overlay2/8c504e43823d68c8b3c159a922e06da89536ef8a80c163fcf27d6116fa985aa4-init/diff:/var/lib/docker/overlay2/53058afe6acd7391f639f551e07f446f6a39a991b699aea18507cb21a1652e97/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8c504e43823d68c8b3c159a922e06da89536ef8a80c163fcf27d6116fa985aa4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8c504e43823d68c8b3c159a922e06da89536ef8a80c163fcf27d6116fa985aa4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8c504e43823d68c8b3c159a922e06da89536ef8a80c163fcf27d6116fa985aa4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-955646",
	                "Source": "/var/lib/docker/volumes/embed-certs-955646/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-955646",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-955646",
	                "name.minikube.sigs.k8s.io": "embed-certs-955646",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1dce9e91e08b5bd594afc61784d9232ca04e56e0854ba18d8060d020ff1d7a8d",
	            "SandboxKey": "/var/run/docker/netns/1dce9e91e08b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33451"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33452"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33455"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33453"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33454"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-955646": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ea:3d:5a:15:68:18",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d85ba2fbd0cbee1971516307c8078f5176011d8f2e54e2718a749b7827caba3c",
	                    "EndpointID": "d16786a26dad3e180e8ab0b8e2a1de12c4b4150e2c47c812126b20ed99a80c71",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-955646",
	                        "30c758ef671a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
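Note: the inspect output above confirms the port mapping the pause attempt relied on: 22/tcp is published on 127.0.0.1:33451, matching the SSH client created earlier in the stderr log. The same value can be extracted directly with the Go template the test itself uses:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' embed-certs-955646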
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-955646 -n embed-certs-955646
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-955646 -n embed-certs-955646: exit status 2 (511.34514ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-955646 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-955646 logs -n 25: (2.160830561s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p old-k8s-version-873713                                                                                                                                                │ old-k8s-version-873713       │ jenkins │ v1.37.0 │ 02 Nov 25 14:11 UTC │ 02 Nov 25 14:11 UTC │
	│ delete  │ -p old-k8s-version-873713                                                                                                                                                │ old-k8s-version-873713       │ jenkins │ v1.37.0 │ 02 Nov 25 14:11 UTC │ 02 Nov 25 14:11 UTC │
	│ start   │ -p no-preload-150469 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                  │ no-preload-150469            │ jenkins │ v1.37.0 │ 02 Nov 25 14:11 UTC │ 02 Nov 25 14:12 UTC │
	│ addons  │ enable metrics-server -p no-preload-150469 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                  │ no-preload-150469            │ jenkins │ v1.37.0 │ 02 Nov 25 14:13 UTC │                     │
	│ stop    │ -p no-preload-150469 --alsologtostderr -v=3                                                                                                                              │ no-preload-150469            │ jenkins │ v1.37.0 │ 02 Nov 25 14:13 UTC │ 02 Nov 25 14:13 UTC │
	│ addons  │ enable dashboard -p no-preload-150469 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                             │ no-preload-150469            │ jenkins │ v1.37.0 │ 02 Nov 25 14:13 UTC │ 02 Nov 25 14:13 UTC │
	│ start   │ -p no-preload-150469 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                  │ no-preload-150469            │ jenkins │ v1.37.0 │ 02 Nov 25 14:13 UTC │ 02 Nov 25 14:14 UTC │
	│ delete  │ -p cert-expiration-114321                                                                                                                                                │ cert-expiration-114321       │ jenkins │ v1.37.0 │ 02 Nov 25 14:13 UTC │ 02 Nov 25 14:13 UTC │
	│ start   │ -p embed-certs-955646 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                   │ embed-certs-955646           │ jenkins │ v1.37.0 │ 02 Nov 25 14:13 UTC │ 02 Nov 25 14:14 UTC │
	│ image   │ no-preload-150469 image list --format=json                                                                                                                               │ no-preload-150469            │ jenkins │ v1.37.0 │ 02 Nov 25 14:14 UTC │ 02 Nov 25 14:14 UTC │
	│ pause   │ -p no-preload-150469 --alsologtostderr -v=1                                                                                                                              │ no-preload-150469            │ jenkins │ v1.37.0 │ 02 Nov 25 14:14 UTC │                     │
	│ delete  │ -p no-preload-150469                                                                                                                                                     │ no-preload-150469            │ jenkins │ v1.37.0 │ 02 Nov 25 14:14 UTC │ 02 Nov 25 14:14 UTC │
	│ delete  │ -p no-preload-150469                                                                                                                                                     │ no-preload-150469            │ jenkins │ v1.37.0 │ 02 Nov 25 14:14 UTC │ 02 Nov 25 14:14 UTC │
	│ delete  │ -p disable-driver-mounts-720030                                                                                                                                          │ disable-driver-mounts-720030 │ jenkins │ v1.37.0 │ 02 Nov 25 14:14 UTC │ 02 Nov 25 14:14 UTC │
	│ start   │ -p default-k8s-diff-port-786183 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ default-k8s-diff-port-786183 │ jenkins │ v1.37.0 │ 02 Nov 25 14:14 UTC │ 02 Nov 25 14:15 UTC │
	│ addons  │ enable metrics-server -p embed-certs-955646 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                 │ embed-certs-955646           │ jenkins │ v1.37.0 │ 02 Nov 25 14:15 UTC │                     │
	│ stop    │ -p embed-certs-955646 --alsologtostderr -v=3                                                                                                                             │ embed-certs-955646           │ jenkins │ v1.37.0 │ 02 Nov 25 14:15 UTC │ 02 Nov 25 14:15 UTC │
	│ addons  │ enable dashboard -p embed-certs-955646 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                            │ embed-certs-955646           │ jenkins │ v1.37.0 │ 02 Nov 25 14:15 UTC │ 02 Nov 25 14:15 UTC │
	│ start   │ -p embed-certs-955646 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                   │ embed-certs-955646           │ jenkins │ v1.37.0 │ 02 Nov 25 14:15 UTC │ 02 Nov 25 14:16 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-786183 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                       │ default-k8s-diff-port-786183 │ jenkins │ v1.37.0 │ 02 Nov 25 14:16 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-786183 --alsologtostderr -v=3                                                                                                                   │ default-k8s-diff-port-786183 │ jenkins │ v1.37.0 │ 02 Nov 25 14:16 UTC │ 02 Nov 25 14:16 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-786183 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                  │ default-k8s-diff-port-786183 │ jenkins │ v1.37.0 │ 02 Nov 25 14:16 UTC │ 02 Nov 25 14:16 UTC │
	│ start   │ -p default-k8s-diff-port-786183 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ default-k8s-diff-port-786183 │ jenkins │ v1.37.0 │ 02 Nov 25 14:16 UTC │                     │
	│ image   │ embed-certs-955646 image list --format=json                                                                                                                              │ embed-certs-955646           │ jenkins │ v1.37.0 │ 02 Nov 25 14:16 UTC │ 02 Nov 25 14:16 UTC │
	│ pause   │ -p embed-certs-955646 --alsologtostderr -v=1                                                                                                                             │ embed-certs-955646           │ jenkins │ v1.37.0 │ 02 Nov 25 14:16 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/02 14:16:21
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1102 14:16:21.873195  496485 out.go:360] Setting OutFile to fd 1 ...
	I1102 14:16:21.873358  496485 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 14:16:21.873371  496485 out.go:374] Setting ErrFile to fd 2...
	I1102 14:16:21.873377  496485 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 14:16:21.873664  496485 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-293314/.minikube/bin
	I1102 14:16:21.874070  496485 out.go:368] Setting JSON to false
	I1102 14:16:21.875115  496485 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10734,"bootTime":1762082248,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1102 14:16:21.875185  496485 start.go:143] virtualization:  
	I1102 14:16:21.880130  496485 out.go:179] * [default-k8s-diff-port-786183] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1102 14:16:21.883242  496485 out.go:179]   - MINIKUBE_LOCATION=21808
	I1102 14:16:21.883329  496485 notify.go:221] Checking for updates...
	I1102 14:16:21.888991  496485 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1102 14:16:21.891996  496485 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-293314/kubeconfig
	I1102 14:16:21.894990  496485 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-293314/.minikube
	I1102 14:16:21.897745  496485 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1102 14:16:21.900682  496485 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1102 14:16:21.904143  496485 config.go:182] Loaded profile config "default-k8s-diff-port-786183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 14:16:21.904759  496485 driver.go:422] Setting default libvirt URI to qemu:///system
	I1102 14:16:21.935721  496485 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1102 14:16:21.935825  496485 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 14:16:21.990509  496485 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-02 14:16:21.980584668 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1102 14:16:21.990649  496485 docker.go:319] overlay module found
	I1102 14:16:21.993827  496485 out.go:179] * Using the docker driver based on existing profile
	I1102 14:16:21.996835  496485 start.go:309] selected driver: docker
	I1102 14:16:21.996929  496485 start.go:930] validating driver "docker" against &{Name:default-k8s-diff-port-786183 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-786183 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1102 14:16:21.997055  496485 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1102 14:16:21.997774  496485 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 14:16:22.061036  496485 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-02 14:16:22.05143499 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1102 14:16:22.061397  496485 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1102 14:16:22.061434  496485 cni.go:84] Creating CNI manager for ""
	I1102 14:16:22.061490  496485 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1102 14:16:22.061532  496485 start.go:353] cluster config:
	{Name:default-k8s-diff-port-786183 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-786183 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
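	Note: with the docker driver and the crio runtime, minikube recommends kindnet as the CNI (see the cni.go lines above). If a different CNI is wanted, it can be requested explicitly at start time; an illustrative invocation built from the flags in this profile:
	
		minikube start -p default-k8s-diff-port-786183 --driver=docker --container-runtime=crio --cni=bridge
	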
	I1102 14:16:22.064761  496485 out.go:179] * Starting "default-k8s-diff-port-786183" primary control-plane node in "default-k8s-diff-port-786183" cluster
	I1102 14:16:22.067748  496485 cache.go:124] Beginning downloading kic base image for docker with crio
	I1102 14:16:22.070825  496485 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1102 14:16:22.073784  496485 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1102 14:16:22.073864  496485 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21808-293314/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1102 14:16:22.073875  496485 cache.go:59] Caching tarball of preloaded images
	I1102 14:16:22.073872  496485 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1102 14:16:22.073962  496485 preload.go:233] Found /home/jenkins/minikube-integration/21808-293314/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1102 14:16:22.073972  496485 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
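	Note: startup reuses a preloaded image tarball keyed by Kubernetes version, runtime and architecture; because it is already in the local cache, the download is skipped. The cache entry from this run can be listed with:
	
		ls /home/jenkins/minikube-integration/21808-293314/.minikube/cache/preloaded-tarball/
		# expected: preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	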
	I1102 14:16:22.074087  496485 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/default-k8s-diff-port-786183/config.json ...
	I1102 14:16:22.094336  496485 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1102 14:16:22.094365  496485 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1102 14:16:22.094474  496485 cache.go:233] Successfully downloaded all kic artifacts
	I1102 14:16:22.094506  496485 start.go:360] acquireMachinesLock for default-k8s-diff-port-786183: {Name:mk74a3791f8141b365a89e0370ddc0301da720d9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1102 14:16:22.094582  496485 start.go:364] duration metric: took 46.696µs to acquireMachinesLock for "default-k8s-diff-port-786183"
	I1102 14:16:22.094671  496485 start.go:96] Skipping create...Using existing machine configuration
	I1102 14:16:22.094686  496485 fix.go:54] fixHost starting: 
	I1102 14:16:22.094978  496485 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-786183 --format={{.State.Status}}
	I1102 14:16:22.112060  496485 fix.go:112] recreateIfNeeded on default-k8s-diff-port-786183: state=Stopped err=<nil>
	W1102 14:16:22.112091  496485 fix.go:138] unexpected machine state, will restart: <nil>
	I1102 14:16:22.115387  496485 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-786183" ...
	I1102 14:16:22.115481  496485 cli_runner.go:164] Run: docker start default-k8s-diff-port-786183
	I1102 14:16:22.397170  496485 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-786183 --format={{.State.Status}}
	I1102 14:16:22.419938  496485 kic.go:430] container "default-k8s-diff-port-786183" state is running.
	I1102 14:16:22.420332  496485 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-786183
	I1102 14:16:22.445769  496485 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/default-k8s-diff-port-786183/config.json ...
	I1102 14:16:22.446008  496485 machine.go:94] provisionDockerMachine start ...
	I1102 14:16:22.446073  496485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-786183
	I1102 14:16:22.477092  496485 main.go:143] libmachine: Using SSH client type: native
	I1102 14:16:22.477414  496485 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33456 <nil> <nil>}
	I1102 14:16:22.477427  496485 main.go:143] libmachine: About to run SSH command:
	hostname
	I1102 14:16:22.481066  496485 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
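	Note: the handshake EOF on the first dial is expected immediately after `docker start`, while sshd inside the container is still coming up; libmachine simply retries until the `hostname` command succeeds about three seconds later. An illustrative readiness probe against the same forwarded port and key (values taken from this log):
	
		until ssh -o StrictHostKeyChecking=no -p 33456 \
		    -i /home/jenkins/minikube-integration/21808-293314/.minikube/machines/default-k8s-diff-port-786183/id_rsa \
		    docker@127.0.0.1 true 2>/dev/null; do sleep 1; done
	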
	I1102 14:16:25.634250  496485 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-786183
	
	I1102 14:16:25.634284  496485 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-786183"
	I1102 14:16:25.634348  496485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-786183
	I1102 14:16:25.652952  496485 main.go:143] libmachine: Using SSH client type: native
	I1102 14:16:25.653273  496485 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33456 <nil> <nil>}
	I1102 14:16:25.653291  496485 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-786183 && echo "default-k8s-diff-port-786183" | sudo tee /etc/hostname
	I1102 14:16:25.812077  496485 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-786183
	
	I1102 14:16:25.812161  496485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-786183
	I1102 14:16:25.830411  496485 main.go:143] libmachine: Using SSH client type: native
	I1102 14:16:25.830770  496485 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33456 <nil> <nil>}
	I1102 14:16:25.830796  496485 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-786183' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-786183/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-786183' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1102 14:16:25.978930  496485 main.go:143] libmachine: SSH cmd err, output: <nil>: 
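	Note: the guarded script above keeps the /etc/hosts mapping idempotent: it rewrites an existing 127.0.1.1 entry in place and only appends when none is present, so repeated provisioning runs never stack duplicate lines. The result can be checked on the node with:
	
		grep '127.0.1.1' /etc/hosts
	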
	I1102 14:16:25.979001  496485 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21808-293314/.minikube CaCertPath:/home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21808-293314/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21808-293314/.minikube}
	I1102 14:16:25.979028  496485 ubuntu.go:190] setting up certificates
	I1102 14:16:25.979042  496485 provision.go:84] configureAuth start
	I1102 14:16:25.979120  496485 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-786183
	I1102 14:16:25.996587  496485 provision.go:143] copyHostCerts
	I1102 14:16:25.996741  496485 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-293314/.minikube/ca.pem, removing ...
	I1102 14:16:25.996769  496485 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-293314/.minikube/ca.pem
	I1102 14:16:25.996851  496485 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21808-293314/.minikube/ca.pem (1082 bytes)
	I1102 14:16:25.997026  496485 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-293314/.minikube/cert.pem, removing ...
	I1102 14:16:25.997055  496485 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-293314/.minikube/cert.pem
	I1102 14:16:25.997098  496485 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21808-293314/.minikube/cert.pem (1123 bytes)
	I1102 14:16:25.997222  496485 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-293314/.minikube/key.pem, removing ...
	I1102 14:16:25.997234  496485 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-293314/.minikube/key.pem
	I1102 14:16:25.997271  496485 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21808-293314/.minikube/key.pem (1675 bytes)
	I1102 14:16:25.997384  496485 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21808-293314/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-786183 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-786183 localhost minikube]
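	Note: the server certificate is generated with SANs covering 127.0.0.1, the container IP 192.168.76.2, the profile name, localhost and minikube, so verification succeeds no matter which of those addresses a client dials. The SAN list can be confirmed with openssl against the path logged above:
	
		openssl x509 -noout -text \
		    -in /home/jenkins/minikube-integration/21808-293314/.minikube/machines/server.pem \
		    | grep -A1 'Subject Alternative Name'
	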
	I1102 14:16:26.641323  496485 provision.go:177] copyRemoteCerts
	I1102 14:16:26.641426  496485 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1102 14:16:26.641493  496485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-786183
	I1102 14:16:26.659009  496485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33456 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/default-k8s-diff-port-786183/id_rsa Username:docker}
	I1102 14:16:26.766331  496485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1102 14:16:26.783691  496485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1102 14:16:26.800779  496485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1102 14:16:26.817392  496485 provision.go:87] duration metric: took 838.326052ms to configureAuth
	I1102 14:16:26.817428  496485 ubuntu.go:206] setting minikube options for container-runtime
	I1102 14:16:26.817611  496485 config.go:182] Loaded profile config "default-k8s-diff-port-786183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 14:16:26.817711  496485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-786183
	I1102 14:16:26.834756  496485 main.go:143] libmachine: Using SSH client type: native
	I1102 14:16:26.835082  496485 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33456 <nil> <nil>}
	I1102 14:16:26.835102  496485 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1102 14:16:27.157533  496485 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1102 14:16:27.157558  496485 machine.go:97] duration metric: took 4.711532586s to provisionDockerMachine
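	The /etc/sysconfig/crio.minikube file written above only takes effect if crio.service reads it. A sketch of the drop-in wiring this implies (an assumption, not visible in this log; the kicbase image's actual unit layout may differ):

	    # hypothetical systemd drop-in; the unit layout in the kicbase image is assumed
	    sudo mkdir -p /etc/systemd/system/crio.service.d
	    printf '%s\n' '[Service]' 'EnvironmentFile=-/etc/sysconfig/crio.minikube' \
	      'ExecStart=' 'ExecStart=/usr/bin/crio $CRIO_MINIKUBE_OPTIONS' \
	      | sudo tee /etc/systemd/system/crio.service.d/10-minikube.conf
	    sudo systemctl daemon-reload && sudo systemctl restart crio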
	I1102 14:16:27.157568  496485 start.go:293] postStartSetup for "default-k8s-diff-port-786183" (driver="docker")
	I1102 14:16:27.157579  496485 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1102 14:16:27.157652  496485 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1102 14:16:27.157742  496485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-786183
	I1102 14:16:27.179264  496485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33456 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/default-k8s-diff-port-786183/id_rsa Username:docker}
	I1102 14:16:27.286322  496485 ssh_runner.go:195] Run: cat /etc/os-release
	I1102 14:16:27.289468  496485 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1102 14:16:27.289497  496485 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1102 14:16:27.289508  496485 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-293314/.minikube/addons for local assets ...
	I1102 14:16:27.289561  496485 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-293314/.minikube/files for local assets ...
	I1102 14:16:27.289651  496485 filesync.go:149] local asset: /home/jenkins/minikube-integration/21808-293314/.minikube/files/etc/ssl/certs/2951742.pem -> 2951742.pem in /etc/ssl/certs
	I1102 14:16:27.289757  496485 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1102 14:16:27.297080  496485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/files/etc/ssl/certs/2951742.pem --> /etc/ssl/certs/2951742.pem (1708 bytes)
	I1102 14:16:27.314458  496485 start.go:296] duration metric: took 156.875112ms for postStartSetup
	I1102 14:16:27.314534  496485 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1102 14:16:27.314609  496485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-786183
	I1102 14:16:27.332520  496485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33456 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/default-k8s-diff-port-786183/id_rsa Username:docker}
	I1102 14:16:27.431770  496485 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1102 14:16:27.436621  496485 fix.go:56] duration metric: took 5.341927747s for fixHost
	I1102 14:16:27.436646  496485 start.go:83] releasing machines lock for "default-k8s-diff-port-786183", held for 5.342036195s
	I1102 14:16:27.436740  496485 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-786183
	I1102 14:16:27.452995  496485 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/295174.pem (1338 bytes)
	W1102 14:16:27.453062  496485 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-293314/.minikube/certs/295174_empty.pem, impossibly tiny 0 bytes
	I1102 14:16:27.453086  496485 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca-key.pem (1679 bytes)
	I1102 14:16:27.453119  496485 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem (1082 bytes)
	I1102 14:16:27.453147  496485 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/cert.pem (1123 bytes)
	I1102 14:16:27.453174  496485 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/key.pem (1675 bytes)
	I1102 14:16:27.453222  496485 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/files/etc/ssl/certs/2951742.pem (1708 bytes)
	I1102 14:16:27.453290  496485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1102 14:16:27.453346  496485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-786183
	I1102 14:16:27.469862  496485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33456 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/default-k8s-diff-port-786183/id_rsa Username:docker}
	I1102 14:16:27.583147  496485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/certs/295174.pem --> /usr/share/ca-certificates/295174.pem (1338 bytes)
	I1102 14:16:27.604825  496485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/files/etc/ssl/certs/2951742.pem --> /usr/share/ca-certificates/2951742.pem (1708 bytes)
	I1102 14:16:27.629102  496485 ssh_runner.go:195] Run: openssl version
	I1102 14:16:27.635415  496485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2951742.pem && ln -fs /usr/share/ca-certificates/2951742.pem /etc/ssl/certs/2951742.pem"
	I1102 14:16:27.643572  496485 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2951742.pem
	I1102 14:16:27.647230  496485 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  2 13:20 /usr/share/ca-certificates/2951742.pem
	I1102 14:16:27.647293  496485 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2951742.pem
	I1102 14:16:27.688624  496485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2951742.pem /etc/ssl/certs/3ec20f2e.0"
	I1102 14:16:27.696410  496485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1102 14:16:27.704445  496485 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1102 14:16:27.708179  496485 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  2 13:13 /usr/share/ca-certificates/minikubeCA.pem
	I1102 14:16:27.708285  496485 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1102 14:16:27.749811  496485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1102 14:16:27.758985  496485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/295174.pem && ln -fs /usr/share/ca-certificates/295174.pem /etc/ssl/certs/295174.pem"
	I1102 14:16:27.770257  496485 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/295174.pem
	I1102 14:16:27.774454  496485 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  2 13:20 /usr/share/ca-certificates/295174.pem
	I1102 14:16:27.774516  496485 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/295174.pem
	I1102 14:16:27.816820  496485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/295174.pem /etc/ssl/certs/51391683.0"
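	The test -L / ln -fs pairs above follow OpenSSL's hashed-directory convention: the trust store locates a certificate via a symlink named <subject-hash>.0. Deriving one of the link names above by hand:

	    # prints the subject hash, e.g. 3ec20f2e, matching the symlink created above
	    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/2951742.pem)
	    sudo ln -fs /etc/ssl/certs/2951742.pem "/etc/ssl/certs/${hash}.0"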
	I1102 14:16:27.825561  496485 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-certificates >/dev/null 2>&1 && sudo update-ca-certificates || true"
	I1102 14:16:27.829262  496485 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-trust >/dev/null 2>&1 && sudo update-ca-trust extract || true"
	I1102 14:16:27.832845  496485 ssh_runner.go:195] Run: cat /version.json
	I1102 14:16:27.832917  496485 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1102 14:16:27.930935  496485 ssh_runner.go:195] Run: systemctl --version
	I1102 14:16:27.939595  496485 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1102 14:16:27.993963  496485 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1102 14:16:27.999290  496485 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1102 14:16:27.999373  496485 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1102 14:16:28.010226  496485 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
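	The find invocation above is logged with its shell quoting stripped; with quoting restored it renames any bridge/podman CNI configs out of the way by appending .mk_disabled (a safer -exec form shown here):

	    sudo find /etc/cni/net.d -maxdepth 1 -type f \
	      \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
	      -printf '%p, ' -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;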
	I1102 14:16:28.010254  496485 start.go:496] detecting cgroup driver to use...
	I1102 14:16:28.010291  496485 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1102 14:16:28.010341  496485 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1102 14:16:28.034864  496485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1102 14:16:28.052023  496485 docker.go:218] disabling cri-docker service (if available) ...
	I1102 14:16:28.052081  496485 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1102 14:16:28.069486  496485 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1102 14:16:28.085165  496485 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1102 14:16:28.249546  496485 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1102 14:16:28.407544  496485 docker.go:234] disabling docker service ...
	I1102 14:16:28.407594  496485 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1102 14:16:28.427617  496485 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1102 14:16:28.441655  496485 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1102 14:16:28.596283  496485 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1102 14:16:28.759340  496485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1102 14:16:28.773992  496485 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1102 14:16:28.792825  496485 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1102 14:16:28.792889  496485 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 14:16:28.802698  496485 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1102 14:16:28.802777  496485 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 14:16:28.814119  496485 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 14:16:28.830587  496485 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 14:16:28.840526  496485 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1102 14:16:28.849641  496485 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 14:16:28.859697  496485 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 14:16:28.869822  496485 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 14:16:28.879865  496485 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1102 14:16:28.887707  496485 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
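	Taken together, the sed edits above converge on a drop-in roughly like the following (a sketch assuming otherwise-default settings; the image's real 02-crio.conf carries more keys):

	    cat <<'EOF' | sudo tee /etc/crio/crio.conf.d/02-crio.conf
	    [crio.image]
	    pause_image = "registry.k8s.io/pause:3.10.1"

	    [crio.runtime]
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]
	    EOF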
	I1102 14:16:28.895173  496485 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 14:16:29.064493  496485 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1102 14:16:29.192748  496485 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1102 14:16:29.192858  496485 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1102 14:16:29.203058  496485 start.go:564] Will wait 60s for crictl version
	I1102 14:16:29.203179  496485 ssh_runner.go:195] Run: which crictl
	I1102 14:16:29.206818  496485 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1102 14:16:29.267957  496485 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1102 14:16:29.268052  496485 ssh_runner.go:195] Run: crio --version
	I1102 14:16:29.323375  496485 ssh_runner.go:195] Run: crio --version
	I1102 14:16:29.365301  496485 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1102 14:16:29.368165  496485 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-786183 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
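	The Go template above flattens the network's name, driver, subnet, gateway, MTU, and container IPs into one JSON blob. A narrower equivalent that extracts just the subnet:

	    docker network inspect default-k8s-diff-port-786183 --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'
	    # expected 192.168.76.0/24, given the 192.168.76.x addresses in this log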
	I1102 14:16:29.384352  496485 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1102 14:16:29.388847  496485 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
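	After that rewrite, the mapping can be verified from inside the node:

	    getent hosts host.minikube.internal     # expect: 192.168.76.1   host.minikube.internal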
	I1102 14:16:29.398319  496485 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-786183 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-786183 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1102 14:16:29.398443  496485 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1102 14:16:29.398501  496485 ssh_runner.go:195] Run: sudo crictl images --output json
	I1102 14:16:29.441311  496485 crio.go:514] all images are preloaded for cri-o runtime.
	I1102 14:16:29.441335  496485 crio.go:433] Images already preloaded, skipping extraction
	I1102 14:16:29.441399  496485 ssh_runner.go:195] Run: sudo crictl images --output json
	I1102 14:16:29.472929  496485 crio.go:514] all images are preloaded for cri-o runtime.
	I1102 14:16:29.472954  496485 cache_images.go:86] Images are preloaded, skipping loading
	I1102 14:16:29.472963  496485 kubeadm.go:935] updating node { 192.168.76.2 8444 v1.34.1 crio true true} ...
	I1102 14:16:29.473112  496485 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-786183 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-786183 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1102 14:16:29.473216  496485 ssh_runner.go:195] Run: crio config
	I1102 14:16:29.537085  496485 cni.go:84] Creating CNI manager for ""
	I1102 14:16:29.537108  496485 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1102 14:16:29.537119  496485 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1102 14:16:29.537143  496485 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-786183 NodeName:default-k8s-diff-port-786183 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1102 14:16:29.537277  496485 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-786183"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1102 14:16:29.537357  496485 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1102 14:16:29.545057  496485 binaries.go:44] Found k8s binaries, skipping transfer
	I1102 14:16:29.545175  496485 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1102 14:16:29.552760  496485 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1102 14:16:29.565442  496485 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1102 14:16:29.579239  496485 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
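	On a fresh cluster the staged /var/tmp/minikube/kubeadm.yaml.new would be consumed roughly like this (illustrative flags, an assumption; in this run minikube instead takes the cluster-restart branch logged further down):

	    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
	      --config /var/tmp/minikube/kubeadm.yaml.new \
	      --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests   # flag list is assumed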
	I1102 14:16:29.592230  496485 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1102 14:16:29.595971  496485 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1102 14:16:29.605911  496485 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 14:16:29.723784  496485 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1102 14:16:29.739500  496485 certs.go:69] Setting up /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/default-k8s-diff-port-786183 for IP: 192.168.76.2
	I1102 14:16:29.739560  496485 certs.go:195] generating shared ca certs ...
	I1102 14:16:29.739590  496485 certs.go:227] acquiring lock for ca certs: {Name:mkead50075949a3cdc798f9c0149a2bc2638cbbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 14:16:29.739742  496485 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21808-293314/.minikube/ca.key
	I1102 14:16:29.739825  496485 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21808-293314/.minikube/proxy-client-ca.key
	I1102 14:16:29.739850  496485 certs.go:257] generating profile certs ...
	I1102 14:16:29.739977  496485 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/default-k8s-diff-port-786183/client.key
	I1102 14:16:29.740083  496485 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/default-k8s-diff-port-786183/apiserver.key.995a17bc
	I1102 14:16:29.740161  496485 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/default-k8s-diff-port-786183/proxy-client.key
	I1102 14:16:29.740304  496485 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/295174.pem (1338 bytes)
	W1102 14:16:29.740366  496485 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-293314/.minikube/certs/295174_empty.pem, impossibly tiny 0 bytes
	I1102 14:16:29.740395  496485 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca-key.pem (1679 bytes)
	I1102 14:16:29.740450  496485 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem (1082 bytes)
	I1102 14:16:29.740506  496485 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/cert.pem (1123 bytes)
	I1102 14:16:29.740560  496485 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/key.pem (1675 bytes)
	I1102 14:16:29.740631  496485 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/files/etc/ssl/certs/2951742.pem (1708 bytes)
	I1102 14:16:29.741246  496485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1102 14:16:29.765032  496485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1102 14:16:29.785295  496485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1102 14:16:29.806529  496485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1102 14:16:29.828579  496485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/default-k8s-diff-port-786183/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1102 14:16:29.856745  496485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/default-k8s-diff-port-786183/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1102 14:16:29.879089  496485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/default-k8s-diff-port-786183/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1102 14:16:29.902079  496485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/default-k8s-diff-port-786183/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1102 14:16:29.931054  496485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/certs/295174.pem --> /usr/share/ca-certificates/295174.pem (1338 bytes)
	I1102 14:16:29.953137  496485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/files/etc/ssl/certs/2951742.pem --> /usr/share/ca-certificates/2951742.pem (1708 bytes)
	I1102 14:16:29.976356  496485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1102 14:16:30.015696  496485 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1102 14:16:30.061167  496485 ssh_runner.go:195] Run: openssl version
	I1102 14:16:30.078428  496485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/295174.pem && ln -fs /usr/share/ca-certificates/295174.pem /etc/ssl/certs/295174.pem"
	I1102 14:16:30.097607  496485 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/295174.pem
	I1102 14:16:30.102597  496485 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  2 13:20 /usr/share/ca-certificates/295174.pem
	I1102 14:16:30.102741  496485 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/295174.pem
	I1102 14:16:30.160284  496485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/295174.pem /etc/ssl/certs/51391683.0"
	I1102 14:16:30.169480  496485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2951742.pem && ln -fs /usr/share/ca-certificates/2951742.pem /etc/ssl/certs/2951742.pem"
	I1102 14:16:30.186861  496485 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2951742.pem
	I1102 14:16:30.192125  496485 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  2 13:20 /usr/share/ca-certificates/2951742.pem
	I1102 14:16:30.192245  496485 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2951742.pem
	I1102 14:16:30.236731  496485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2951742.pem /etc/ssl/certs/3ec20f2e.0"
	I1102 14:16:30.245305  496485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1102 14:16:30.254547  496485 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1102 14:16:30.259337  496485 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  2 13:13 /usr/share/ca-certificates/minikubeCA.pem
	I1102 14:16:30.259454  496485 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1102 14:16:30.304551  496485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1102 14:16:30.313559  496485 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1102 14:16:30.324060  496485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1102 14:16:30.436308  496485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1102 14:16:30.501977  496485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1102 14:16:30.584678  496485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1102 14:16:30.743288  496485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1102 14:16:30.845787  496485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
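	Each -checkend 86400 call above asks whether the certificate expires within the next 24 hours; exit status 0 means it stays valid past that window. For example:

	    openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
	      && echo "valid for >24h" || echo "expires within 24h"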
	I1102 14:16:30.959675  496485 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-786183 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-786183 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1102 14:16:30.959764  496485 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1102 14:16:30.959830  496485 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1102 14:16:31.049567  496485 cri.go:89] found id: "f6bef86f73f59250e354d0fdd9e49760329ba2e76d5a2c9140645b949b671c4d"
	I1102 14:16:31.049586  496485 cri.go:89] found id: "6d9b69e73df509198b2e29494a4484507c8a14cccb6a2b6302b756a3c2183899"
	I1102 14:16:31.049599  496485 cri.go:89] found id: "d53a1eafeb3bc7e2100e0bcf284f029edbffd71be60582127cbabe95881a86ac"
	I1102 14:16:31.049604  496485 cri.go:89] found id: "312cee2bec817cdd2e35981ea4410dfbe7dc6c1e95635e12a5f8648c6f301ff1"
	I1102 14:16:31.049608  496485 cri.go:89] found id: ""
	I1102 14:16:31.049653  496485 ssh_runner.go:195] Run: sudo runc list -f json
	W1102 14:16:31.108813  496485 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T14:16:31Z" level=error msg="open /run/runc: no such file or directory"
	I1102 14:16:31.108905  496485 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1102 14:16:31.128233  496485 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1102 14:16:31.128251  496485 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1102 14:16:31.128313  496485 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1102 14:16:31.146420  496485 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1102 14:16:31.147299  496485 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-786183" does not appear in /home/jenkins/minikube-integration/21808-293314/kubeconfig
	I1102 14:16:31.147811  496485 kubeconfig.go:62] /home/jenkins/minikube-integration/21808-293314/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-786183" cluster setting kubeconfig missing "default-k8s-diff-port-786183" context setting]
	I1102 14:16:31.148555  496485 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-293314/kubeconfig: {Name:mke5a65554da8fc0fd6a2ea60bed899d5b38ce09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 14:16:31.150915  496485 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1102 14:16:31.160833  496485 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1102 14:16:31.160866  496485 kubeadm.go:602] duration metric: took 32.608832ms to restartPrimaryControlPlane
	I1102 14:16:31.160875  496485 kubeadm.go:403] duration metric: took 201.209672ms to StartCluster
	I1102 14:16:31.160890  496485 settings.go:142] acquiring lock: {Name:mk95f66b3b15e63f58f8c9085c1ffe67cc396dc1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 14:16:31.160968  496485 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21808-293314/kubeconfig
	I1102 14:16:31.162458  496485 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-293314/kubeconfig: {Name:mke5a65554da8fc0fd6a2ea60bed899d5b38ce09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 14:16:31.163236  496485 config.go:182] Loaded profile config "default-k8s-diff-port-786183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 14:16:31.163021  496485 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1102 14:16:31.163352  496485 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1102 14:16:31.163593  496485 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-786183"
	I1102 14:16:31.163612  496485 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-786183"
	W1102 14:16:31.163618  496485 addons.go:248] addon storage-provisioner should already be in state true
	I1102 14:16:31.163647  496485 host.go:66] Checking if "default-k8s-diff-port-786183" exists ...
	I1102 14:16:31.164146  496485 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-786183 --format={{.State.Status}}
	I1102 14:16:31.164312  496485 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-786183"
	I1102 14:16:31.164330  496485 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-786183"
	W1102 14:16:31.164336  496485 addons.go:248] addon dashboard should already be in state true
	I1102 14:16:31.164385  496485 host.go:66] Checking if "default-k8s-diff-port-786183" exists ...
	I1102 14:16:31.164827  496485 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-786183 --format={{.State.Status}}
	I1102 14:16:31.165238  496485 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-786183"
	I1102 14:16:31.165293  496485 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-786183"
	I1102 14:16:31.165635  496485 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-786183 --format={{.State.Status}}
	I1102 14:16:31.167192  496485 out.go:179] * Verifying Kubernetes components...
	
	
	==> CRI-O <==
	Nov 02 14:16:18 embed-certs-955646 crio[684]: time="2025-11-02T14:16:18.125663332Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 02 14:16:18 embed-certs-955646 crio[684]: time="2025-11-02T14:16:18.128672059Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 02 14:16:18 embed-certs-955646 crio[684]: time="2025-11-02T14:16:18.12870598Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 02 14:16:18 embed-certs-955646 crio[684]: time="2025-11-02T14:16:18.128724704Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 02 14:16:18 embed-certs-955646 crio[684]: time="2025-11-02T14:16:18.131797596Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 02 14:16:18 embed-certs-955646 crio[684]: time="2025-11-02T14:16:18.131833264Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 02 14:16:18 embed-certs-955646 crio[684]: time="2025-11-02T14:16:18.131855788Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 02 14:16:18 embed-certs-955646 crio[684]: time="2025-11-02T14:16:18.134902317Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 02 14:16:18 embed-certs-955646 crio[684]: time="2025-11-02T14:16:18.13493979Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 02 14:16:18 embed-certs-955646 crio[684]: time="2025-11-02T14:16:18.134965242Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 02 14:16:18 embed-certs-955646 crio[684]: time="2025-11-02T14:16:18.138208696Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 02 14:16:18 embed-certs-955646 crio[684]: time="2025-11-02T14:16:18.138245472Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 02 14:16:22 embed-certs-955646 crio[684]: time="2025-11-02T14:16:22.273197913Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=a2ddad05-0afb-4ec1-8fd6-d70f8c4d10f4 name=/runtime.v1.ImageService/ImageStatus
	Nov 02 14:16:22 embed-certs-955646 crio[684]: time="2025-11-02T14:16:22.27439566Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=22f591f9-106b-4e95-b69e-df1c59947274 name=/runtime.v1.ImageService/ImageStatus
	Nov 02 14:16:22 embed-certs-955646 crio[684]: time="2025-11-02T14:16:22.27529395Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-d5qv6/dashboard-metrics-scraper" id=3286d5b3-a7b5-4c17-a8de-bac2b7f422e7 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 14:16:22 embed-certs-955646 crio[684]: time="2025-11-02T14:16:22.275411884Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 14:16:22 embed-certs-955646 crio[684]: time="2025-11-02T14:16:22.285226571Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 14:16:22 embed-certs-955646 crio[684]: time="2025-11-02T14:16:22.285846754Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 14:16:22 embed-certs-955646 crio[684]: time="2025-11-02T14:16:22.308631254Z" level=info msg="Created container f13f3bd50aa03124042acc9a5370ba88dc79123989a79b756e5c45dc349d6017: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-d5qv6/dashboard-metrics-scraper" id=3286d5b3-a7b5-4c17-a8de-bac2b7f422e7 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 14:16:22 embed-certs-955646 crio[684]: time="2025-11-02T14:16:22.309817966Z" level=info msg="Starting container: f13f3bd50aa03124042acc9a5370ba88dc79123989a79b756e5c45dc349d6017" id=8365893f-619d-4274-a672-3c0cece90d84 name=/runtime.v1.RuntimeService/StartContainer
	Nov 02 14:16:22 embed-certs-955646 crio[684]: time="2025-11-02T14:16:22.31208143Z" level=info msg="Started container" PID=1754 containerID=f13f3bd50aa03124042acc9a5370ba88dc79123989a79b756e5c45dc349d6017 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-d5qv6/dashboard-metrics-scraper id=8365893f-619d-4274-a672-3c0cece90d84 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ef616118b55206871ad442bf8fb420c021545ad89a89db85b2bbe5fb568ac7bf
	Nov 02 14:16:22 embed-certs-955646 conmon[1752]: conmon f13f3bd50aa03124042a <ninfo>: container 1754 exited with status 1
	Nov 02 14:16:22 embed-certs-955646 crio[684]: time="2025-11-02T14:16:22.463567207Z" level=info msg="Removing container: ef426e5a083cea762ab338335868092fbefffe917112753aa8640fc71db0b316" id=20541653-9344-43e8-bf11-3ab69149f16a name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 02 14:16:22 embed-certs-955646 crio[684]: time="2025-11-02T14:16:22.479170498Z" level=info msg="Error loading conmon cgroup of container ef426e5a083cea762ab338335868092fbefffe917112753aa8640fc71db0b316: cgroup deleted" id=20541653-9344-43e8-bf11-3ab69149f16a name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 02 14:16:22 embed-certs-955646 crio[684]: time="2025-11-02T14:16:22.484141756Z" level=info msg="Removed container ef426e5a083cea762ab338335868092fbefffe917112753aa8640fc71db0b316: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-d5qv6/dashboard-metrics-scraper" id=20541653-9344-43e8-bf11-3ab69149f16a name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	f13f3bd50aa03       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           9 seconds ago        Exited              dashboard-metrics-scraper   3                   ef616118b5520       dashboard-metrics-scraper-6ffb444bf9-d5qv6   kubernetes-dashboard
	6da0839e1d299       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           23 seconds ago       Running             storage-provisioner         2                   7033724a49887       storage-provisioner                          kube-system
	9bc5d7391685e       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   42 seconds ago       Running             kubernetes-dashboard        0                   f448eec3af598       kubernetes-dashboard-855c9754f9-hp5zz        kubernetes-dashboard
	6a9aa1a8dc805       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           54 seconds ago       Running             busybox                     1                   6a247b5449b94       busybox                                      default
	00f450128bd23       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           54 seconds ago       Running             coredns                     1                   859daaed9acbe       coredns-66bc5c9577-h7hk7                     kube-system
	e62f6bc1f097c       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           54 seconds ago       Running             kindnet-cni                 1                   0aadb994c0cff       kindnet-fvxzq                                kube-system
	c8dd4cc06305b       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           54 seconds ago       Running             kube-proxy                  1                   890fc14d0bdd6       kube-proxy-hg44j                             kube-system
	a67c468c1c763       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           54 seconds ago       Exited              storage-provisioner         1                   7033724a49887       storage-provisioner                          kube-system
	f18821cbf7e95       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   12a676cc95d10       etcd-embed-certs-955646                      kube-system
	6eca59e1e6707       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   5d204b0427660       kube-controller-manager-embed-certs-955646   kube-system
	cf67be7dd17d4       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   d3f511b74c2e9       kube-apiserver-embed-certs-955646            kube-system
	b2cda95c0fa73       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   883ff6970e384       kube-scheduler-embed-certs-955646            kube-system
	
	
	==> coredns [00f450128bd23b7b717ccc9034de35182d4ca2b29f00a757b513c4ef9dae1e76] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:43285 - 7107 "HINFO IN 590304181686656473.5239133587992657210. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.046539103s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-955646
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-955646
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=170a9221ec214abbddb4c7cdac340516a92b239a
	                    minikube.k8s.io/name=embed-certs-955646
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_02T14_14_11_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 02 Nov 2025 14:14:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-955646
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 02 Nov 2025 14:16:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 02 Nov 2025 14:16:07 +0000   Sun, 02 Nov 2025 14:14:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 02 Nov 2025 14:16:07 +0000   Sun, 02 Nov 2025 14:14:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 02 Nov 2025 14:16:07 +0000   Sun, 02 Nov 2025 14:14:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 02 Nov 2025 14:16:07 +0000   Sun, 02 Nov 2025 14:14:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-955646
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                b99c9ad5-2cef-4b58-868b-a11cc5355016
	  Boot ID:                    4c303cf4-c0b6-4c0d-aa1a-d7509feb15e7
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-66bc5c9577-h7hk7                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m18s
	  kube-system                 etcd-embed-certs-955646                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m22s
	  kube-system                 kindnet-fvxzq                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m18s
	  kube-system                 kube-apiserver-embed-certs-955646             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 kube-controller-manager-embed-certs-955646    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 kube-proxy-hg44j                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m18s
	  kube-system                 kube-scheduler-embed-certs-955646             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m16s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-d5qv6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-hp5zz         0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
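	  (Sanity check: the 850m CPU request total is the sum of the per-pod requests listed above -- 100m coredns + 100m etcd + 100m kindnet + 250m kube-apiserver + 200m kube-controller-manager + 100m kube-scheduler = 850m, which is ~42% of the node's 2000m capacity. Memory likewise: 70Mi + 100Mi + 50Mi = 220Mi.)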
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m17s                  kube-proxy       
	  Normal   Starting                 54s                    kube-proxy       
	  Normal   Starting                 2m31s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m31s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m30s (x8 over 2m30s)  kubelet          Node embed-certs-955646 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m30s (x8 over 2m30s)  kubelet          Node embed-certs-955646 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m30s (x8 over 2m30s)  kubelet          Node embed-certs-955646 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m23s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m23s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    2m22s                  kubelet          Node embed-certs-955646 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m22s                  kubelet          Node embed-certs-955646 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  2m22s                  kubelet          Node embed-certs-955646 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           2m19s                  node-controller  Node embed-certs-955646 event: Registered Node embed-certs-955646 in Controller
	  Normal   NodeReady                96s                    kubelet          Node embed-certs-955646 status is now: NodeReady
	  Normal   Starting                 62s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 62s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  62s (x8 over 62s)      kubelet          Node embed-certs-955646 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    62s (x8 over 62s)      kubelet          Node embed-certs-955646 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     62s (x8 over 62s)      kubelet          Node embed-certs-955646 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           53s                    node-controller  Node embed-certs-955646 event: Registered Node embed-certs-955646 in Controller
	
	
	==> dmesg <==
	[Nov 2 13:56] overlayfs: idmapped layers are currently not supported
	[  +3.515963] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:57] overlayfs: idmapped layers are currently not supported
	[ +24.836033] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:58] overlayfs: idmapped layers are currently not supported
	[ +23.362553] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:59] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:01] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:02] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:03] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:04] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:06] overlayfs: idmapped layers are currently not supported
	[ +50.469589] kauditd_printk_skb: 8 callbacks suppressed
	[Nov 2 14:07] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:08] overlayfs: idmapped layers are currently not supported
	[ +11.089512] overlayfs: idmapped layers are currently not supported
	[ +33.821233] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:09] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:10] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:11] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:13] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:14] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:15] overlayfs: idmapped layers are currently not supported
	[ +29.099512] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:16] overlayfs: idmapped layers are currently not supported
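	(The repeated "overlayfs: idmapped layers" messages are emitted whenever a runtime attempts an idmapped overlayfs mount on a kernel without that feature; upstream overlayfs support for idmapped layers landed around Linux 5.19, and this host runs 5.15.0-1084-aws, so the warnings are benign noise for these tests. A quick check, assuming shell access to the host:)
	
	  uname -r                                 # 5.15.x here, predates overlayfs idmapped-mount support
	  sudo dmesg | grep -c 'idmapped layers'   # number of times the (harmless) warning fired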
	
	
	==> etcd [f18821cbf7e95d9a372afcb877b644585ef683291b4420587127c19d9c80ed5d] <==
	{"level":"warn","ts":"2025-11-02T14:15:34.986062Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:15:35.005379Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:15:35.021674Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:15:35.036282Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:15:35.052552Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:15:35.067738Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:15:35.095294Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:15:35.106038Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:15:35.123733Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:15:35.138793Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44888","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:15:35.159516Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:15:35.172366Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:15:35.190025Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44936","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:15:35.207468Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:15:35.226918Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:15:35.235504Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:15:35.250519Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:15:35.268389Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:15:35.282740Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:15:35.299931Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:15:35.315349Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:15:35.349274Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:15:35.388022Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:15:35.396135Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45158","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:15:35.450821Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45176","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 14:16:32 up  2:59,  0 user,  load average: 3.84, 3.45, 2.98
	Linux embed-certs-955646 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e62f6bc1f097c21e3482b5659534d25d050cbbe3ad2fc4ee473624d94c2098dd] <==
	I1102 14:15:37.918087       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1102 14:15:37.918462       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1102 14:15:37.918660       1 main.go:148] setting mtu 1500 for CNI 
	I1102 14:15:37.918708       1 main.go:178] kindnetd IP family: "ipv4"
	I1102 14:15:37.918758       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-02T14:15:38Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1102 14:15:38.120319       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1102 14:15:38.120349       1 controller.go:381] "Waiting for informer caches to sync"
	I1102 14:15:38.120358       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1102 14:15:38.121058       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1102 14:16:08.121414       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1102 14:16:08.121528       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1102 14:16:08.121610       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1102 14:16:08.121681       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1102 14:16:09.320509       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1102 14:16:09.320543       1 metrics.go:72] Registering metrics
	I1102 14:16:09.320614       1 controller.go:711] "Syncing nftables rules"
	I1102 14:16:18.120898       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1102 14:16:18.120950       1 main.go:301] handling current node
	I1102 14:16:28.122769       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1102 14:16:28.122910       1 main.go:301] handling current node
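	(The four 14:16:08 "Failed to watch" errors are 30-second dial timeouts against 10.96.0.1:443 -- the default kubernetes Service VIP -- for list/watch calls issued right after kindnetd started at 14:15:38, while the control plane was still settling after the restart; the informers retried and synced at 14:16:09. To confirm the apiserver itself answers, bypassing the Service VIP:)
	
	  kubectl --context embed-certs-955646 get --raw /version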
	
	
	==> kube-apiserver [cf67be7dd17d48724284cdf38b21f101a7c4398491b1bd72bc00ab1299492eff] <==
	I1102 14:15:36.218889       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1102 14:15:36.222688       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1102 14:15:36.228077       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1102 14:15:36.228139       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1102 14:15:36.235201       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1102 14:15:36.235221       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1102 14:15:36.235327       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1102 14:15:36.261159       1 cache.go:39] Caches are synced for autoregister controller
	I1102 14:15:36.278902       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1102 14:15:36.291649       1 default_servicecidr_controller.go:111] Starting kubernetes-service-cidr-controller
	I1102 14:15:36.291678       1 shared_informer.go:349] "Waiting for caches to sync" controller="kubernetes-service-cidr-controller"
	I1102 14:15:36.515139       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1102 14:15:36.515220       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	E1102 14:15:36.527576       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1102 14:15:37.003333       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1102 14:15:37.158057       1 controller.go:667] quota admission added evaluator for: namespaces
	I1102 14:15:37.247495       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1102 14:15:37.318289       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1102 14:15:37.386024       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1102 14:15:37.422081       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1102 14:15:37.594073       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.140.138"}
	I1102 14:15:37.641682       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.151.240"}
	I1102 14:15:40.028308       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1102 14:15:40.111504       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1102 14:15:40.211232       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [6eca59e1e67071a7fc5a83a34cccd31a907855a85d0659f4b0148c6382cc8beb] <==
	I1102 14:15:39.608145       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1102 14:15:39.608193       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1102 14:15:39.613335       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1102 14:15:39.613338       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1102 14:15:39.619498       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1102 14:15:39.622828       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1102 14:15:39.624012       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1102 14:15:39.636091       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1102 14:15:39.636164       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1102 14:15:39.636199       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1102 14:15:39.636213       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1102 14:15:39.636219       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1102 14:15:39.639667       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1102 14:15:39.645905       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1102 14:15:39.649065       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1102 14:15:39.653968       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1102 14:15:39.654740       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1102 14:15:39.654845       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1102 14:15:39.654873       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1102 14:15:39.655105       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1102 14:15:39.718562       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1102 14:15:39.804139       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1102 14:15:39.804264       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1102 14:15:39.804296       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1102 14:15:39.819368       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [c8dd4cc06305b2c6b07d015128b0d599ee5422ea6ff0b80a3124fbb4256c3cbe] <==
	I1102 14:15:37.906145       1 server_linux.go:53] "Using iptables proxy"
	I1102 14:15:37.996034       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1102 14:15:38.096500       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1102 14:15:38.096534       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1102 14:15:38.096628       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1102 14:15:38.114589       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1102 14:15:38.114742       1 server_linux.go:132] "Using iptables Proxier"
	I1102 14:15:38.118549       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1102 14:15:38.119081       1 server.go:527] "Version info" version="v1.34.1"
	I1102 14:15:38.119335       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1102 14:15:38.120908       1 config.go:200] "Starting service config controller"
	I1102 14:15:38.120973       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1102 14:15:38.121017       1 config.go:106] "Starting endpoint slice config controller"
	I1102 14:15:38.121061       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1102 14:15:38.121152       1 config.go:403] "Starting serviceCIDR config controller"
	I1102 14:15:38.121186       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1102 14:15:38.122870       1 config.go:309] "Starting node config controller"
	I1102 14:15:38.123185       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1102 14:15:38.123232       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1102 14:15:38.221155       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1102 14:15:38.221155       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1102 14:15:38.221402       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [b2cda95c0fa73867332aa42c9cd6aad92c60f000d6837089bac2ad63937e9752] <==
	I1102 14:15:33.150931       1 serving.go:386] Generated self-signed cert in-memory
	W1102 14:15:36.023111       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1102 14:15:36.023149       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1102 14:15:36.023161       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1102 14:15:36.023169       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1102 14:15:36.283379       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1102 14:15:36.283416       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1102 14:15:36.295067       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1102 14:15:36.300289       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1102 14:15:36.300328       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1102 14:15:36.300355       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1102 14:15:36.400820       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 02 14:15:40 embed-certs-955646 kubelet[809]: W1102 14:15:40.676443     809 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/30c758ef671a78927180fb97bde8b1a9a094e365398db33e109ee628c96e6553/crio-f448eec3af59809dce2ed4caa01733c011535b4c550762572e14499c48ccd01a WatchSource:0}: Error finding container f448eec3af59809dce2ed4caa01733c011535b4c550762572e14499c48ccd01a: Status 404 returned error can't find the container with id f448eec3af59809dce2ed4caa01733c011535b4c550762572e14499c48ccd01a
	Nov 02 14:15:45 embed-certs-955646 kubelet[809]: I1102 14:15:45.350995     809 scope.go:117] "RemoveContainer" containerID="77ab14028e37bbbf25fad57eabba4438c2f6cec2e5f1f8786c187a5a69487ea6"
	Nov 02 14:15:45 embed-certs-955646 kubelet[809]: I1102 14:15:45.823854     809 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 02 14:15:46 embed-certs-955646 kubelet[809]: I1102 14:15:46.363099     809 scope.go:117] "RemoveContainer" containerID="77ab14028e37bbbf25fad57eabba4438c2f6cec2e5f1f8786c187a5a69487ea6"
	Nov 02 14:15:46 embed-certs-955646 kubelet[809]: I1102 14:15:46.363379     809 scope.go:117] "RemoveContainer" containerID="20cc841cc1483bf8a564bf8cb939fd7cbd42c4f558af3adbc66ab2b60ce82029"
	Nov 02 14:15:46 embed-certs-955646 kubelet[809]: E1102 14:15:46.363514     809 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-d5qv6_kubernetes-dashboard(b9ae9591-80b7-4e44-8568-9b84dcc46139)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-d5qv6" podUID="b9ae9591-80b7-4e44-8568-9b84dcc46139"
	Nov 02 14:15:47 embed-certs-955646 kubelet[809]: I1102 14:15:47.369120     809 scope.go:117] "RemoveContainer" containerID="20cc841cc1483bf8a564bf8cb939fd7cbd42c4f558af3adbc66ab2b60ce82029"
	Nov 02 14:15:47 embed-certs-955646 kubelet[809]: E1102 14:15:47.370051     809 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-d5qv6_kubernetes-dashboard(b9ae9591-80b7-4e44-8568-9b84dcc46139)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-d5qv6" podUID="b9ae9591-80b7-4e44-8568-9b84dcc46139"
	Nov 02 14:15:50 embed-certs-955646 kubelet[809]: I1102 14:15:50.617583     809 scope.go:117] "RemoveContainer" containerID="20cc841cc1483bf8a564bf8cb939fd7cbd42c4f558af3adbc66ab2b60ce82029"
	Nov 02 14:15:50 embed-certs-955646 kubelet[809]: E1102 14:15:50.617806     809 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-d5qv6_kubernetes-dashboard(b9ae9591-80b7-4e44-8568-9b84dcc46139)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-d5qv6" podUID="b9ae9591-80b7-4e44-8568-9b84dcc46139"
	Nov 02 14:16:01 embed-certs-955646 kubelet[809]: I1102 14:16:01.272710     809 scope.go:117] "RemoveContainer" containerID="20cc841cc1483bf8a564bf8cb939fd7cbd42c4f558af3adbc66ab2b60ce82029"
	Nov 02 14:16:01 embed-certs-955646 kubelet[809]: I1102 14:16:01.407707     809 scope.go:117] "RemoveContainer" containerID="20cc841cc1483bf8a564bf8cb939fd7cbd42c4f558af3adbc66ab2b60ce82029"
	Nov 02 14:16:01 embed-certs-955646 kubelet[809]: I1102 14:16:01.407997     809 scope.go:117] "RemoveContainer" containerID="ef426e5a083cea762ab338335868092fbefffe917112753aa8640fc71db0b316"
	Nov 02 14:16:01 embed-certs-955646 kubelet[809]: E1102 14:16:01.408150     809 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-d5qv6_kubernetes-dashboard(b9ae9591-80b7-4e44-8568-9b84dcc46139)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-d5qv6" podUID="b9ae9591-80b7-4e44-8568-9b84dcc46139"
	Nov 02 14:16:01 embed-certs-955646 kubelet[809]: I1102 14:16:01.429689     809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-hp5zz" podStartSLOduration=12.452817982 podStartE2EDuration="21.429648133s" podCreationTimestamp="2025-11-02 14:15:40 +0000 UTC" firstStartedPulling="2025-11-02 14:15:40.679052967 +0000 UTC m=+10.692216333" lastFinishedPulling="2025-11-02 14:15:49.655883118 +0000 UTC m=+19.669046484" observedRunningTime="2025-11-02 14:15:50.393787955 +0000 UTC m=+20.406951345" watchObservedRunningTime="2025-11-02 14:16:01.429648133 +0000 UTC m=+31.442811589"
	Nov 02 14:16:08 embed-certs-955646 kubelet[809]: I1102 14:16:08.426563     809 scope.go:117] "RemoveContainer" containerID="a67c468c1c763f272b0a9c52725437da784062dfc833ddc965b3fbeb8cca238c"
	Nov 02 14:16:10 embed-certs-955646 kubelet[809]: I1102 14:16:10.618660     809 scope.go:117] "RemoveContainer" containerID="ef426e5a083cea762ab338335868092fbefffe917112753aa8640fc71db0b316"
	Nov 02 14:16:10 embed-certs-955646 kubelet[809]: E1102 14:16:10.618834     809 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-d5qv6_kubernetes-dashboard(b9ae9591-80b7-4e44-8568-9b84dcc46139)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-d5qv6" podUID="b9ae9591-80b7-4e44-8568-9b84dcc46139"
	Nov 02 14:16:22 embed-certs-955646 kubelet[809]: I1102 14:16:22.272349     809 scope.go:117] "RemoveContainer" containerID="ef426e5a083cea762ab338335868092fbefffe917112753aa8640fc71db0b316"
	Nov 02 14:16:22 embed-certs-955646 kubelet[809]: I1102 14:16:22.462299     809 scope.go:117] "RemoveContainer" containerID="ef426e5a083cea762ab338335868092fbefffe917112753aa8640fc71db0b316"
	Nov 02 14:16:23 embed-certs-955646 kubelet[809]: I1102 14:16:23.466078     809 scope.go:117] "RemoveContainer" containerID="f13f3bd50aa03124042acc9a5370ba88dc79123989a79b756e5c45dc349d6017"
	Nov 02 14:16:23 embed-certs-955646 kubelet[809]: E1102 14:16:23.466237     809 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-d5qv6_kubernetes-dashboard(b9ae9591-80b7-4e44-8568-9b84dcc46139)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-d5qv6" podUID="b9ae9591-80b7-4e44-8568-9b84dcc46139"
	Nov 02 14:16:28 embed-certs-955646 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 02 14:16:28 embed-certs-955646 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 02 14:16:28 embed-certs-955646 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
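	(The dashboard-metrics-scraper back-off in the errors above doubles on every failed restart -- 10s, 20s, 40s -- matching the kubelet's exponential crash-loop back-off, which starts at 10s and is capped at 5m. To watch the restart counts tick up while the test runs:)
	
	  kubectl --context embed-certs-955646 -n kubernetes-dashboard get pods -w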
	
	
	==> kubernetes-dashboard [9bc5d7391685e8a5cd00f7fa23ac16536bf7625564882416c6a722715462fbb0] <==
	2025/11/02 14:15:49 Starting overwatch
	2025/11/02 14:15:49 Using namespace: kubernetes-dashboard
	2025/11/02 14:15:49 Using in-cluster config to connect to apiserver
	2025/11/02 14:15:49 Using secret token for csrf signing
	2025/11/02 14:15:49 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/02 14:15:49 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/02 14:15:49 Successful initial request to the apiserver, version: v1.34.1
	2025/11/02 14:15:49 Generating JWE encryption key
	2025/11/02 14:15:49 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/02 14:15:49 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/02 14:15:50 Initializing JWE encryption key from synchronized object
	2025/11/02 14:15:50 Creating in-cluster Sidecar client
	2025/11/02 14:15:50 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/02 14:15:50 Serving insecurely on HTTP port: 9090
	2025/11/02 14:16:20 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
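	(Both "Metric client health check failed" entries line up with the dashboard-metrics-scraper crash loop in the kubelet log above: while the scraper pod is not ready its Service has no endpoints, so the dashboard's sidecar client gets a 503 from the apiserver's service proxy. To confirm while the cluster is up:)
	
	  kubectl --context embed-certs-955646 -n kubernetes-dashboard get endpoints dashboard-metrics-scraper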
	
	
	==> storage-provisioner [6da0839e1d2999ea3f428a5acfe5a837f84636d51064907eb29cfe7f2701f8b4] <==
	I1102 14:16:08.486571       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1102 14:16:08.530110       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1102 14:16:08.533851       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1102 14:16:08.546115       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:16:12.011907       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:16:16.272515       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:16:19.870743       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:16:22.925089       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:16:25.947573       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:16:25.955318       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1102 14:16:25.955500       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1102 14:16:25.955702       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-955646_d83474a9-2881-43da-a511-7c1321de2e68!
	I1102 14:16:25.956463       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"192779df-44b5-4e02-8171-660f368cbc29", APIVersion:"v1", ResourceVersion:"690", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-955646_d83474a9-2881-43da-a511-7c1321de2e68 became leader
	W1102 14:16:25.959068       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:16:25.965631       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1102 14:16:26.056562       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-955646_d83474a9-2881-43da-a511-7c1321de2e68!
	W1102 14:16:27.972801       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:16:27.982365       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:16:29.986520       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:16:29.996064       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:16:32.000055       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:16:32.008954       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
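	(The repeated Endpoints deprecation warnings come from this provisioner's leader-election lock, which is still an Endpoints object -- kube-system/k8s.io-minikube-hostpath -- polled until the lease is acquired at 14:16:25 and renewed thereafter. Assuming the standard endpoints-lock layout, the current holder is recorded in the lock's leader annotation:)
	
	  kubectl --context embed-certs-955646 -n kube-system get endpoints k8s.io-minikube-hostpath \
	    -o jsonpath='{.metadata.annotations.control-plane\.alpha\.kubernetes\.io/leader}'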
	
	
	==> storage-provisioner [a67c468c1c763f272b0a9c52725437da784062dfc833ddc965b3fbeb8cca238c] <==
	I1102 14:15:37.714097       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1102 14:16:07.726868       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
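	(This is the previous provisioner instance: its startup /version probe, sent around 14:15:37, hit the same dial timeout to the Service VIP seen in the kindnet log and exited fatally at 14:16:07. The kubelet log above shows the dead container (a67c468c...) being removed at 14:16:08, after which the replacement instance acquired the lease at 14:16:25. To see both containers side by side on the node:)
	
	  out/minikube-linux-arm64 -p embed-certs-955646 ssh -- sudo crictl ps -a --name storage-provisioner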
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-955646 -n embed-certs-955646
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-955646 -n embed-certs-955646: exit status 2 (529.086559ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-955646 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-955646
helpers_test.go:243: (dbg) docker inspect embed-certs-955646:

-- stdout --
	[
	    {
	        "Id": "30c758ef671a78927180fb97bde8b1a9a094e365398db33e109ee628c96e6553",
	        "Created": "2025-11-02T14:13:39.788499711Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 493513,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-02T14:15:23.189624529Z",
	            "FinishedAt": "2025-11-02T14:15:22.382373342Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/30c758ef671a78927180fb97bde8b1a9a094e365398db33e109ee628c96e6553/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/30c758ef671a78927180fb97bde8b1a9a094e365398db33e109ee628c96e6553/hostname",
	        "HostsPath": "/var/lib/docker/containers/30c758ef671a78927180fb97bde8b1a9a094e365398db33e109ee628c96e6553/hosts",
	        "LogPath": "/var/lib/docker/containers/30c758ef671a78927180fb97bde8b1a9a094e365398db33e109ee628c96e6553/30c758ef671a78927180fb97bde8b1a9a094e365398db33e109ee628c96e6553-json.log",
	        "Name": "/embed-certs-955646",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-955646:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-955646",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "30c758ef671a78927180fb97bde8b1a9a094e365398db33e109ee628c96e6553",
	                "LowerDir": "/var/lib/docker/overlay2/8c504e43823d68c8b3c159a922e06da89536ef8a80c163fcf27d6116fa985aa4-init/diff:/var/lib/docker/overlay2/53058afe6acd7391f639f551e07f446f6a39a991b699aea18507cb21a1652e97/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8c504e43823d68c8b3c159a922e06da89536ef8a80c163fcf27d6116fa985aa4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8c504e43823d68c8b3c159a922e06da89536ef8a80c163fcf27d6116fa985aa4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8c504e43823d68c8b3c159a922e06da89536ef8a80c163fcf27d6116fa985aa4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-955646",
	                "Source": "/var/lib/docker/volumes/embed-certs-955646/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-955646",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-955646",
	                "name.minikube.sigs.k8s.io": "embed-certs-955646",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1dce9e91e08b5bd594afc61784d9232ca04e56e0854ba18d8060d020ff1d7a8d",
	            "SandboxKey": "/var/run/docker/netns/1dce9e91e08b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33451"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33452"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33455"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33453"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33454"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-955646": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ea:3d:5a:15:68:18",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d85ba2fbd0cbee1971516307c8078f5176011d8f2e54e2718a749b7827caba3c",
	                    "EndpointID": "d16786a26dad3e180e8ab0b8e2a1de12c4b4150e2c47c812126b20ed99a80c71",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-955646",
	                        "30c758ef671a"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
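(When only one field from the inspect dump is needed -- for example the host port mapped to the API server's 8443/tcp -- a Go template keeps it to a single line:)

    docker inspect -f '{{ (index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort }}' embed-certs-955646
    # prints 33454 for this run, matching the NetworkSettings.Ports block above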
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-955646 -n embed-certs-955646
E1102 14:16:34.050726  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/old-k8s-version-873713/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-955646 -n embed-certs-955646: exit status 2 (507.003047ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-955646 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-955646 logs -n 25: (1.860159011s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p old-k8s-version-873713                                                                                                                                                │ old-k8s-version-873713       │ jenkins │ v1.37.0 │ 02 Nov 25 14:11 UTC │ 02 Nov 25 14:11 UTC │
	│ delete  │ -p old-k8s-version-873713                                                                                                                                                │ old-k8s-version-873713       │ jenkins │ v1.37.0 │ 02 Nov 25 14:11 UTC │ 02 Nov 25 14:11 UTC │
	│ start   │ -p no-preload-150469 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                  │ no-preload-150469            │ jenkins │ v1.37.0 │ 02 Nov 25 14:11 UTC │ 02 Nov 25 14:12 UTC │
	│ addons  │ enable metrics-server -p no-preload-150469 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                  │ no-preload-150469            │ jenkins │ v1.37.0 │ 02 Nov 25 14:13 UTC │                     │
	│ stop    │ -p no-preload-150469 --alsologtostderr -v=3                                                                                                                              │ no-preload-150469            │ jenkins │ v1.37.0 │ 02 Nov 25 14:13 UTC │ 02 Nov 25 14:13 UTC │
	│ addons  │ enable dashboard -p no-preload-150469 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                             │ no-preload-150469            │ jenkins │ v1.37.0 │ 02 Nov 25 14:13 UTC │ 02 Nov 25 14:13 UTC │
	│ start   │ -p no-preload-150469 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                  │ no-preload-150469            │ jenkins │ v1.37.0 │ 02 Nov 25 14:13 UTC │ 02 Nov 25 14:14 UTC │
	│ delete  │ -p cert-expiration-114321                                                                                                                                                │ cert-expiration-114321       │ jenkins │ v1.37.0 │ 02 Nov 25 14:13 UTC │ 02 Nov 25 14:13 UTC │
	│ start   │ -p embed-certs-955646 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                   │ embed-certs-955646           │ jenkins │ v1.37.0 │ 02 Nov 25 14:13 UTC │ 02 Nov 25 14:14 UTC │
	│ image   │ no-preload-150469 image list --format=json                                                                                                                               │ no-preload-150469            │ jenkins │ v1.37.0 │ 02 Nov 25 14:14 UTC │ 02 Nov 25 14:14 UTC │
	│ pause   │ -p no-preload-150469 --alsologtostderr -v=1                                                                                                                              │ no-preload-150469            │ jenkins │ v1.37.0 │ 02 Nov 25 14:14 UTC │                     │
	│ delete  │ -p no-preload-150469                                                                                                                                                     │ no-preload-150469            │ jenkins │ v1.37.0 │ 02 Nov 25 14:14 UTC │ 02 Nov 25 14:14 UTC │
	│ delete  │ -p no-preload-150469                                                                                                                                                     │ no-preload-150469            │ jenkins │ v1.37.0 │ 02 Nov 25 14:14 UTC │ 02 Nov 25 14:14 UTC │
	│ delete  │ -p disable-driver-mounts-720030                                                                                                                                          │ disable-driver-mounts-720030 │ jenkins │ v1.37.0 │ 02 Nov 25 14:14 UTC │ 02 Nov 25 14:14 UTC │
	│ start   │ -p default-k8s-diff-port-786183 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ default-k8s-diff-port-786183 │ jenkins │ v1.37.0 │ 02 Nov 25 14:14 UTC │ 02 Nov 25 14:15 UTC │
	│ addons  │ enable metrics-server -p embed-certs-955646 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                 │ embed-certs-955646           │ jenkins │ v1.37.0 │ 02 Nov 25 14:15 UTC │                     │
	│ stop    │ -p embed-certs-955646 --alsologtostderr -v=3                                                                                                                             │ embed-certs-955646           │ jenkins │ v1.37.0 │ 02 Nov 25 14:15 UTC │ 02 Nov 25 14:15 UTC │
	│ addons  │ enable dashboard -p embed-certs-955646 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                            │ embed-certs-955646           │ jenkins │ v1.37.0 │ 02 Nov 25 14:15 UTC │ 02 Nov 25 14:15 UTC │
	│ start   │ -p embed-certs-955646 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                   │ embed-certs-955646           │ jenkins │ v1.37.0 │ 02 Nov 25 14:15 UTC │ 02 Nov 25 14:16 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-786183 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                       │ default-k8s-diff-port-786183 │ jenkins │ v1.37.0 │ 02 Nov 25 14:16 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-786183 --alsologtostderr -v=3                                                                                                                   │ default-k8s-diff-port-786183 │ jenkins │ v1.37.0 │ 02 Nov 25 14:16 UTC │ 02 Nov 25 14:16 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-786183 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                  │ default-k8s-diff-port-786183 │ jenkins │ v1.37.0 │ 02 Nov 25 14:16 UTC │ 02 Nov 25 14:16 UTC │
	│ start   │ -p default-k8s-diff-port-786183 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ default-k8s-diff-port-786183 │ jenkins │ v1.37.0 │ 02 Nov 25 14:16 UTC │                     │
	│ image   │ embed-certs-955646 image list --format=json                                                                                                                              │ embed-certs-955646           │ jenkins │ v1.37.0 │ 02 Nov 25 14:16 UTC │ 02 Nov 25 14:16 UTC │
	│ pause   │ -p embed-certs-955646 --alsologtostderr -v=1                                                                                                                             │ embed-certs-955646           │ jenkins │ v1.37.0 │ 02 Nov 25 14:16 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/02 14:16:21
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
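
	The [IWEF] prefix described above is the klog/glog severity letter (Info, Warning, Error, Fatal) followed by the month and day, so non-Info events can be pulled out of a capture of this log with a one-line filter. A minimal sketch; the file name last-start.log is hypothetical:

	    grep -E '^[WEF][0-9]{4} [0-9:.]+' last-start.log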
	I1102 14:16:21.873195  496485 out.go:360] Setting OutFile to fd 1 ...
	I1102 14:16:21.873358  496485 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 14:16:21.873371  496485 out.go:374] Setting ErrFile to fd 2...
	I1102 14:16:21.873377  496485 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 14:16:21.873664  496485 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-293314/.minikube/bin
	I1102 14:16:21.874070  496485 out.go:368] Setting JSON to false
	I1102 14:16:21.875115  496485 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10734,"bootTime":1762082248,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1102 14:16:21.875185  496485 start.go:143] virtualization:  
	I1102 14:16:21.880130  496485 out.go:179] * [default-k8s-diff-port-786183] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1102 14:16:21.883242  496485 out.go:179]   - MINIKUBE_LOCATION=21808
	I1102 14:16:21.883329  496485 notify.go:221] Checking for updates...
	I1102 14:16:21.888991  496485 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1102 14:16:21.891996  496485 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-293314/kubeconfig
	I1102 14:16:21.894990  496485 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-293314/.minikube
	I1102 14:16:21.897745  496485 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1102 14:16:21.900682  496485 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1102 14:16:21.904143  496485 config.go:182] Loaded profile config "default-k8s-diff-port-786183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 14:16:21.904759  496485 driver.go:422] Setting default libvirt URI to qemu:///system
	I1102 14:16:21.935721  496485 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1102 14:16:21.935825  496485 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 14:16:21.990509  496485 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-02 14:16:21.980584668 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1102 14:16:21.990649  496485 docker.go:319] overlay module found
	I1102 14:16:21.993827  496485 out.go:179] * Using the docker driver based on existing profile
	I1102 14:16:21.996835  496485 start.go:309] selected driver: docker
	I1102 14:16:21.996929  496485 start.go:930] validating driver "docker" against &{Name:default-k8s-diff-port-786183 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-786183 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1102 14:16:21.997055  496485 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1102 14:16:21.997774  496485 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 14:16:22.061036  496485 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-02 14:16:22.05143499 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1102 14:16:22.061397  496485 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1102 14:16:22.061434  496485 cni.go:84] Creating CNI manager for ""
	I1102 14:16:22.061490  496485 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1102 14:16:22.061532  496485 start.go:353] cluster config:
	{Name:default-k8s-diff-port-786183 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-786183 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1102 14:16:22.064761  496485 out.go:179] * Starting "default-k8s-diff-port-786183" primary control-plane node in "default-k8s-diff-port-786183" cluster
	I1102 14:16:22.067748  496485 cache.go:124] Beginning downloading kic base image for docker with crio
	I1102 14:16:22.070825  496485 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1102 14:16:22.073784  496485 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1102 14:16:22.073864  496485 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21808-293314/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1102 14:16:22.073875  496485 cache.go:59] Caching tarball of preloaded images
	I1102 14:16:22.073872  496485 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1102 14:16:22.073962  496485 preload.go:233] Found /home/jenkins/minikube-integration/21808-293314/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1102 14:16:22.073972  496485 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1102 14:16:22.074087  496485 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/default-k8s-diff-port-786183/config.json ...
	I1102 14:16:22.094336  496485 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1102 14:16:22.094365  496485 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1102 14:16:22.094474  496485 cache.go:233] Successfully downloaded all kic artifacts
	I1102 14:16:22.094506  496485 start.go:360] acquireMachinesLock for default-k8s-diff-port-786183: {Name:mk74a3791f8141b365a89e0370ddc0301da720d9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1102 14:16:22.094582  496485 start.go:364] duration metric: took 46.696µs to acquireMachinesLock for "default-k8s-diff-port-786183"
	I1102 14:16:22.094671  496485 start.go:96] Skipping create...Using existing machine configuration
	I1102 14:16:22.094686  496485 fix.go:54] fixHost starting: 
	I1102 14:16:22.094978  496485 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-786183 --format={{.State.Status}}
	I1102 14:16:22.112060  496485 fix.go:112] recreateIfNeeded on default-k8s-diff-port-786183: state=Stopped err=<nil>
	W1102 14:16:22.112091  496485 fix.go:138] unexpected machine state, will restart: <nil>
	I1102 14:16:22.115387  496485 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-786183" ...
	I1102 14:16:22.115481  496485 cli_runner.go:164] Run: docker start default-k8s-diff-port-786183
	I1102 14:16:22.397170  496485 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-786183 --format={{.State.Status}}
	I1102 14:16:22.419938  496485 kic.go:430] container "default-k8s-diff-port-786183" state is running.
	I1102 14:16:22.420332  496485 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-786183
	I1102 14:16:22.445769  496485 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/default-k8s-diff-port-786183/config.json ...
	I1102 14:16:22.446008  496485 machine.go:94] provisionDockerMachine start ...
	I1102 14:16:22.446073  496485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-786183
	I1102 14:16:22.477092  496485 main.go:143] libmachine: Using SSH client type: native
	I1102 14:16:22.477414  496485 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33456 <nil> <nil>}
	I1102 14:16:22.477427  496485 main.go:143] libmachine: About to run SSH command:
	hostname
	I1102 14:16:22.481066  496485 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1102 14:16:25.634250  496485 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-786183
	
	I1102 14:16:25.634284  496485 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-786183"
	I1102 14:16:25.634348  496485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-786183
	I1102 14:16:25.652952  496485 main.go:143] libmachine: Using SSH client type: native
	I1102 14:16:25.653273  496485 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33456 <nil> <nil>}
	I1102 14:16:25.653291  496485 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-786183 && echo "default-k8s-diff-port-786183" | sudo tee /etc/hostname
	I1102 14:16:25.812077  496485 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-786183
	
	I1102 14:16:25.812161  496485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-786183
	I1102 14:16:25.830411  496485 main.go:143] libmachine: Using SSH client type: native
	I1102 14:16:25.830770  496485 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33456 <nil> <nil>}
	I1102 14:16:25.830796  496485 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-786183' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-786183/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-786183' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1102 14:16:25.978930  496485 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1102 14:16:25.979001  496485 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21808-293314/.minikube CaCertPath:/home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21808-293314/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21808-293314/.minikube}
	I1102 14:16:25.979028  496485 ubuntu.go:190] setting up certificates
	I1102 14:16:25.979042  496485 provision.go:84] configureAuth start
	I1102 14:16:25.979120  496485 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-786183
	I1102 14:16:25.996587  496485 provision.go:143] copyHostCerts
	I1102 14:16:25.996741  496485 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-293314/.minikube/ca.pem, removing ...
	I1102 14:16:25.996769  496485 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-293314/.minikube/ca.pem
	I1102 14:16:25.996851  496485 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21808-293314/.minikube/ca.pem (1082 bytes)
	I1102 14:16:25.997026  496485 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-293314/.minikube/cert.pem, removing ...
	I1102 14:16:25.997055  496485 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-293314/.minikube/cert.pem
	I1102 14:16:25.997098  496485 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21808-293314/.minikube/cert.pem (1123 bytes)
	I1102 14:16:25.997222  496485 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-293314/.minikube/key.pem, removing ...
	I1102 14:16:25.997234  496485 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-293314/.minikube/key.pem
	I1102 14:16:25.997271  496485 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21808-293314/.minikube/key.pem (1675 bytes)
	I1102 14:16:25.997384  496485 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21808-293314/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-786183 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-786183 localhost minikube]
	I1102 14:16:26.641323  496485 provision.go:177] copyRemoteCerts
	I1102 14:16:26.641426  496485 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1102 14:16:26.641493  496485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-786183
	I1102 14:16:26.659009  496485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33456 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/default-k8s-diff-port-786183/id_rsa Username:docker}
	I1102 14:16:26.766331  496485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1102 14:16:26.783691  496485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1102 14:16:26.800779  496485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1102 14:16:26.817392  496485 provision.go:87] duration metric: took 838.326052ms to configureAuth
	I1102 14:16:26.817428  496485 ubuntu.go:206] setting minikube options for container-runtime
	I1102 14:16:26.817611  496485 config.go:182] Loaded profile config "default-k8s-diff-port-786183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 14:16:26.817711  496485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-786183
	I1102 14:16:26.834756  496485 main.go:143] libmachine: Using SSH client type: native
	I1102 14:16:26.835082  496485 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33456 <nil> <nil>}
	I1102 14:16:26.835102  496485 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1102 14:16:27.157533  496485 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1102 14:16:27.157558  496485 machine.go:97] duration metric: took 4.711532586s to provisionDockerMachine
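
	The printf/tee a few lines above drops CRIO_MINIKUBE_OPTIONS into /etc/sysconfig/crio.minikube before restarting crio, which only has an effect if the crio systemd unit in the base image reads that file. A hedged sketch of the expected wiring; the actual unit shipped in the kicbase image may differ:

	    [Service]
	    EnvironmentFile=-/etc/sysconfig/crio.minikube
	    ExecStart=
	    ExecStart=/usr/bin/crio $CRIO_MINIKUBE_OPTIONS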
	I1102 14:16:27.157568  496485 start.go:293] postStartSetup for "default-k8s-diff-port-786183" (driver="docker")
	I1102 14:16:27.157579  496485 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1102 14:16:27.157652  496485 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1102 14:16:27.157742  496485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-786183
	I1102 14:16:27.179264  496485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33456 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/default-k8s-diff-port-786183/id_rsa Username:docker}
	I1102 14:16:27.286322  496485 ssh_runner.go:195] Run: cat /etc/os-release
	I1102 14:16:27.289468  496485 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1102 14:16:27.289497  496485 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1102 14:16:27.289508  496485 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-293314/.minikube/addons for local assets ...
	I1102 14:16:27.289561  496485 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-293314/.minikube/files for local assets ...
	I1102 14:16:27.289651  496485 filesync.go:149] local asset: /home/jenkins/minikube-integration/21808-293314/.minikube/files/etc/ssl/certs/2951742.pem -> 2951742.pem in /etc/ssl/certs
	I1102 14:16:27.289757  496485 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1102 14:16:27.297080  496485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/files/etc/ssl/certs/2951742.pem --> /etc/ssl/certs/2951742.pem (1708 bytes)
	I1102 14:16:27.314458  496485 start.go:296] duration metric: took 156.875112ms for postStartSetup
	I1102 14:16:27.314534  496485 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1102 14:16:27.314609  496485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-786183
	I1102 14:16:27.332520  496485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33456 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/default-k8s-diff-port-786183/id_rsa Username:docker}
	I1102 14:16:27.431770  496485 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1102 14:16:27.436621  496485 fix.go:56] duration metric: took 5.341927747s for fixHost
	I1102 14:16:27.436646  496485 start.go:83] releasing machines lock for "default-k8s-diff-port-786183", held for 5.342036195s
	I1102 14:16:27.436740  496485 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-786183
	I1102 14:16:27.452995  496485 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/295174.pem (1338 bytes)
	W1102 14:16:27.453062  496485 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-293314/.minikube/certs/295174_empty.pem, impossibly tiny 0 bytes
	I1102 14:16:27.453086  496485 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca-key.pem (1679 bytes)
	I1102 14:16:27.453119  496485 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem (1082 bytes)
	I1102 14:16:27.453147  496485 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/cert.pem (1123 bytes)
	I1102 14:16:27.453174  496485 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/key.pem (1675 bytes)
	I1102 14:16:27.453222  496485 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/files/etc/ssl/certs/2951742.pem (1708 bytes)
	I1102 14:16:27.453290  496485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1102 14:16:27.453346  496485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-786183
	I1102 14:16:27.469862  496485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33456 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/default-k8s-diff-port-786183/id_rsa Username:docker}
	I1102 14:16:27.583147  496485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/certs/295174.pem --> /usr/share/ca-certificates/295174.pem (1338 bytes)
	I1102 14:16:27.604825  496485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/files/etc/ssl/certs/2951742.pem --> /usr/share/ca-certificates/2951742.pem (1708 bytes)
	I1102 14:16:27.629102  496485 ssh_runner.go:195] Run: openssl version
	I1102 14:16:27.635415  496485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2951742.pem && ln -fs /usr/share/ca-certificates/2951742.pem /etc/ssl/certs/2951742.pem"
	I1102 14:16:27.643572  496485 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2951742.pem
	I1102 14:16:27.647230  496485 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  2 13:20 /usr/share/ca-certificates/2951742.pem
	I1102 14:16:27.647293  496485 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2951742.pem
	I1102 14:16:27.688624  496485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2951742.pem /etc/ssl/certs/3ec20f2e.0"
	I1102 14:16:27.696410  496485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1102 14:16:27.704445  496485 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1102 14:16:27.708179  496485 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  2 13:13 /usr/share/ca-certificates/minikubeCA.pem
	I1102 14:16:27.708285  496485 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1102 14:16:27.749811  496485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1102 14:16:27.758985  496485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/295174.pem && ln -fs /usr/share/ca-certificates/295174.pem /etc/ssl/certs/295174.pem"
	I1102 14:16:27.770257  496485 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/295174.pem
	I1102 14:16:27.774454  496485 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  2 13:20 /usr/share/ca-certificates/295174.pem
	I1102 14:16:27.774516  496485 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/295174.pem
	I1102 14:16:27.816820  496485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/295174.pem /etc/ssl/certs/51391683.0"
	I1102 14:16:27.825561  496485 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-certificates >/dev/null 2>&1 && sudo update-ca-certificates || true"
	I1102 14:16:27.829262  496485 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-trust >/dev/null 2>&1 && sudo update-ca-trust extract || true"
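
	The openssl/ln pairs above implement OpenSSL's subject-hash trust-store layout: openssl x509 -hash -noout prints the certificate's 8-hex-digit subject hash (b5213941 for minikubeCA, 3ec20f2e and 51391683 for the two test certs), and trust is established by a symlink named <hash>.0 pointing at the PEM. The same two steps, written out directly:

	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"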
	I1102 14:16:27.832845  496485 ssh_runner.go:195] Run: cat /version.json
	I1102 14:16:27.832917  496485 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1102 14:16:27.930935  496485 ssh_runner.go:195] Run: systemctl --version
	I1102 14:16:27.939595  496485 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1102 14:16:27.993963  496485 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1102 14:16:27.999290  496485 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1102 14:16:27.999373  496485 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1102 14:16:28.010226  496485 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
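
	The find/-exec above renames any pre-existing bridge or podman CNI config to *.mk_disabled so that the CNI minikube deploys (kindnet, per the cni.go:143 lines in this log) is the only active configuration; here nothing matched. Had one been present, the effect would have been equivalent to the following, with a hypothetical file name:

	    sudo mv /etc/cni/net.d/87-podman-bridge.conflist /etc/cni/net.d/87-podman-bridge.conflist.mk_disabled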
	I1102 14:16:28.010254  496485 start.go:496] detecting cgroup driver to use...
	I1102 14:16:28.010291  496485 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1102 14:16:28.010341  496485 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1102 14:16:28.034864  496485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1102 14:16:28.052023  496485 docker.go:218] disabling cri-docker service (if available) ...
	I1102 14:16:28.052081  496485 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1102 14:16:28.069486  496485 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1102 14:16:28.085165  496485 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1102 14:16:28.249546  496485 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1102 14:16:28.407544  496485 docker.go:234] disabling docker service ...
	I1102 14:16:28.407594  496485 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1102 14:16:28.427617  496485 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1102 14:16:28.441655  496485 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1102 14:16:28.596283  496485 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1102 14:16:28.759340  496485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1102 14:16:28.773992  496485 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1102 14:16:28.792825  496485 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1102 14:16:28.792889  496485 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 14:16:28.802698  496485 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1102 14:16:28.802777  496485 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 14:16:28.814119  496485 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 14:16:28.830587  496485 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 14:16:28.840526  496485 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1102 14:16:28.849641  496485 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 14:16:28.859697  496485 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 14:16:28.869822  496485 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
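
	Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly these settings (a reconstruction from the commands, not a dump of the file; untouched keys and section headers omitted):

	    pause_image = "registry.k8s.io/pause:3.10.1"
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]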
	I1102 14:16:28.879865  496485 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1102 14:16:28.887707  496485 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1102 14:16:28.895173  496485 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 14:16:29.064493  496485 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1102 14:16:29.192748  496485 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1102 14:16:29.192858  496485 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1102 14:16:29.203058  496485 start.go:564] Will wait 60s for crictl version
	I1102 14:16:29.203179  496485 ssh_runner.go:195] Run: which crictl
	I1102 14:16:29.206818  496485 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1102 14:16:29.267957  496485 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1102 14:16:29.268052  496485 ssh_runner.go:195] Run: crio --version
	I1102 14:16:29.323375  496485 ssh_runner.go:195] Run: crio --version
	I1102 14:16:29.365301  496485 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1102 14:16:29.368165  496485 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-786183 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1102 14:16:29.384352  496485 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1102 14:16:29.388847  496485 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
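
	The /etc/hosts rewrite above strips any old host.minikube.internal entry, appends the fresh one to a temp file, and then copies (not renames) the result over /etc/hosts: inside a Docker container /etc/hosts is a bind mount, so cp rewrites the mounted file in place while a rename over the mount point would fail. The same idiom for an arbitrary entry, with placeholder name and IP:

	    { grep -v $'\tmy.host.name$' /etc/hosts; echo "10.0.0.1	my.host.name"; } > /tmp/h.$$
	    sudo cp /tmp/h.$$ /etc/hosts   # cp, not mv: keep the bind-mounted inode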
	I1102 14:16:29.398319  496485 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-786183 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-786183 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1102 14:16:29.398443  496485 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1102 14:16:29.398501  496485 ssh_runner.go:195] Run: sudo crictl images --output json
	I1102 14:16:29.441311  496485 crio.go:514] all images are preloaded for cri-o runtime.
	I1102 14:16:29.441335  496485 crio.go:433] Images already preloaded, skipping extraction
	I1102 14:16:29.441399  496485 ssh_runner.go:195] Run: sudo crictl images --output json
	I1102 14:16:29.472929  496485 crio.go:514] all images are preloaded for cri-o runtime.
	I1102 14:16:29.472954  496485 cache_images.go:86] Images are preloaded, skipping loading
	I1102 14:16:29.472963  496485 kubeadm.go:935] updating node { 192.168.76.2 8444 v1.34.1 crio true true} ...
	I1102 14:16:29.473112  496485 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-786183 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-786183 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
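
	The bare ExecStart= followed by a full ExecStart in the kubelet drop-in above is the standard systemd override pattern: an empty assignment in a drop-in clears the command list inherited from the base unit before the drop-in redefines it, so only the minikube-managed command runs. Schematically:

	    [Service]
	    ExecStart=
	    ExecStart=<the full kubelet command line shown above>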
	I1102 14:16:29.473216  496485 ssh_runner.go:195] Run: crio config
	I1102 14:16:29.537085  496485 cni.go:84] Creating CNI manager for ""
	I1102 14:16:29.537108  496485 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1102 14:16:29.537119  496485 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1102 14:16:29.537143  496485 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-786183 NodeName:default-k8s-diff-port-786183 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1102 14:16:29.537277  496485 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-786183"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
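
	A generated config like the one above can be sanity-checked before kubeadm consumes it, assuming a kubeadm recent enough to ship the config validate subcommand; the path below mirrors where minikube copies the file (see the kubeadm.yaml.new scp a few lines down):

	    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new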
	
	I1102 14:16:29.537357  496485 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1102 14:16:29.545057  496485 binaries.go:44] Found k8s binaries, skipping transfer
	I1102 14:16:29.545175  496485 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1102 14:16:29.552760  496485 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1102 14:16:29.565442  496485 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1102 14:16:29.579239  496485 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1102 14:16:29.592230  496485 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1102 14:16:29.595971  496485 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1102 14:16:29.605911  496485 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 14:16:29.723784  496485 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1102 14:16:29.739500  496485 certs.go:69] Setting up /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/default-k8s-diff-port-786183 for IP: 192.168.76.2
	I1102 14:16:29.739560  496485 certs.go:195] generating shared ca certs ...
	I1102 14:16:29.739590  496485 certs.go:227] acquiring lock for ca certs: {Name:mkead50075949a3cdc798f9c0149a2bc2638cbbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 14:16:29.739742  496485 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21808-293314/.minikube/ca.key
	I1102 14:16:29.739825  496485 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21808-293314/.minikube/proxy-client-ca.key
	I1102 14:16:29.739850  496485 certs.go:257] generating profile certs ...
	I1102 14:16:29.739977  496485 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/default-k8s-diff-port-786183/client.key
	I1102 14:16:29.740083  496485 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/default-k8s-diff-port-786183/apiserver.key.995a17bc
	I1102 14:16:29.740161  496485 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/default-k8s-diff-port-786183/proxy-client.key
	I1102 14:16:29.740304  496485 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/295174.pem (1338 bytes)
	W1102 14:16:29.740366  496485 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-293314/.minikube/certs/295174_empty.pem, impossibly tiny 0 bytes
	I1102 14:16:29.740395  496485 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca-key.pem (1679 bytes)
	I1102 14:16:29.740450  496485 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem (1082 bytes)
	I1102 14:16:29.740506  496485 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/cert.pem (1123 bytes)
	I1102 14:16:29.740560  496485 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/key.pem (1675 bytes)
	I1102 14:16:29.740631  496485 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/files/etc/ssl/certs/2951742.pem (1708 bytes)
	I1102 14:16:29.741246  496485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1102 14:16:29.765032  496485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1102 14:16:29.785295  496485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1102 14:16:29.806529  496485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1102 14:16:29.828579  496485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/default-k8s-diff-port-786183/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1102 14:16:29.856745  496485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/default-k8s-diff-port-786183/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1102 14:16:29.879089  496485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/default-k8s-diff-port-786183/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1102 14:16:29.902079  496485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/default-k8s-diff-port-786183/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1102 14:16:29.931054  496485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/certs/295174.pem --> /usr/share/ca-certificates/295174.pem (1338 bytes)
	I1102 14:16:29.953137  496485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/files/etc/ssl/certs/2951742.pem --> /usr/share/ca-certificates/2951742.pem (1708 bytes)
	I1102 14:16:29.976356  496485 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1102 14:16:30.015696  496485 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1102 14:16:30.061167  496485 ssh_runner.go:195] Run: openssl version
	I1102 14:16:30.078428  496485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/295174.pem && ln -fs /usr/share/ca-certificates/295174.pem /etc/ssl/certs/295174.pem"
	I1102 14:16:30.097607  496485 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/295174.pem
	I1102 14:16:30.102597  496485 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  2 13:20 /usr/share/ca-certificates/295174.pem
	I1102 14:16:30.102741  496485 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/295174.pem
	I1102 14:16:30.160284  496485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/295174.pem /etc/ssl/certs/51391683.0"
	I1102 14:16:30.169480  496485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2951742.pem && ln -fs /usr/share/ca-certificates/2951742.pem /etc/ssl/certs/2951742.pem"
	I1102 14:16:30.186861  496485 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2951742.pem
	I1102 14:16:30.192125  496485 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  2 13:20 /usr/share/ca-certificates/2951742.pem
	I1102 14:16:30.192245  496485 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2951742.pem
	I1102 14:16:30.236731  496485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2951742.pem /etc/ssl/certs/3ec20f2e.0"
	I1102 14:16:30.245305  496485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1102 14:16:30.254547  496485 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1102 14:16:30.259337  496485 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  2 13:13 /usr/share/ca-certificates/minikubeCA.pem
	I1102 14:16:30.259454  496485 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1102 14:16:30.304551  496485 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1102 14:16:30.313559  496485 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1102 14:16:30.324060  496485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1102 14:16:30.436308  496485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1102 14:16:30.501977  496485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1102 14:16:30.584678  496485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1102 14:16:30.743288  496485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1102 14:16:30.845787  496485 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1102 14:16:30.959675  496485 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-786183 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-786183 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1102 14:16:30.959764  496485 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1102 14:16:30.959830  496485 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1102 14:16:31.049567  496485 cri.go:89] found id: "f6bef86f73f59250e354d0fdd9e49760329ba2e76d5a2c9140645b949b671c4d"
	I1102 14:16:31.049586  496485 cri.go:89] found id: "6d9b69e73df509198b2e29494a4484507c8a14cccb6a2b6302b756a3c2183899"
	I1102 14:16:31.049599  496485 cri.go:89] found id: "d53a1eafeb3bc7e2100e0bcf284f029edbffd71be60582127cbabe95881a86ac"
	I1102 14:16:31.049604  496485 cri.go:89] found id: "312cee2bec817cdd2e35981ea4410dfbe7dc6c1e95635e12a5f8648c6f301ff1"
	I1102 14:16:31.049608  496485 cri.go:89] found id: ""
	I1102 14:16:31.049653  496485 ssh_runner.go:195] Run: sudo runc list -f json
	W1102 14:16:31.108813  496485 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T14:16:31Z" level=error msg="open /run/runc: no such file or directory"
	I1102 14:16:31.108905  496485 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1102 14:16:31.128233  496485 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1102 14:16:31.128251  496485 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1102 14:16:31.128313  496485 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1102 14:16:31.146420  496485 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1102 14:16:31.147299  496485 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-786183" does not appear in /home/jenkins/minikube-integration/21808-293314/kubeconfig
	I1102 14:16:31.147811  496485 kubeconfig.go:62] /home/jenkins/minikube-integration/21808-293314/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-786183" cluster setting kubeconfig missing "default-k8s-diff-port-786183" context setting]
	I1102 14:16:31.148555  496485 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-293314/kubeconfig: {Name:mke5a65554da8fc0fd6a2ea60bed899d5b38ce09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 14:16:31.150915  496485 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1102 14:16:31.160833  496485 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1102 14:16:31.160866  496485 kubeadm.go:602] duration metric: took 32.608832ms to restartPrimaryControlPlane
	I1102 14:16:31.160875  496485 kubeadm.go:403] duration metric: took 201.209672ms to StartCluster
	I1102 14:16:31.160890  496485 settings.go:142] acquiring lock: {Name:mk95f66b3b15e63f58f8c9085c1ffe67cc396dc1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 14:16:31.160968  496485 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21808-293314/kubeconfig
	I1102 14:16:31.162458  496485 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-293314/kubeconfig: {Name:mke5a65554da8fc0fd6a2ea60bed899d5b38ce09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 14:16:31.163236  496485 config.go:182] Loaded profile config "default-k8s-diff-port-786183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 14:16:31.163021  496485 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1102 14:16:31.163352  496485 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1102 14:16:31.163593  496485 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-786183"
	I1102 14:16:31.163612  496485 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-786183"
	W1102 14:16:31.163618  496485 addons.go:248] addon storage-provisioner should already be in state true
	I1102 14:16:31.163647  496485 host.go:66] Checking if "default-k8s-diff-port-786183" exists ...
	I1102 14:16:31.164146  496485 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-786183 --format={{.State.Status}}
	I1102 14:16:31.164312  496485 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-786183"
	I1102 14:16:31.164330  496485 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-786183"
	W1102 14:16:31.164336  496485 addons.go:248] addon dashboard should already be in state true
	I1102 14:16:31.164385  496485 host.go:66] Checking if "default-k8s-diff-port-786183" exists ...
	I1102 14:16:31.164827  496485 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-786183 --format={{.State.Status}}
	I1102 14:16:31.165238  496485 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-786183"
	I1102 14:16:31.165293  496485 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-786183"
	I1102 14:16:31.165635  496485 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-786183 --format={{.State.Status}}
	I1102 14:16:31.167192  496485 out.go:179] * Verifying Kubernetes components...
	I1102 14:16:31.172935  496485 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 14:16:31.215159  496485 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-786183"
	W1102 14:16:31.215181  496485 addons.go:248] addon default-storageclass should already be in state true
	I1102 14:16:31.215205  496485 host.go:66] Checking if "default-k8s-diff-port-786183" exists ...
	I1102 14:16:31.215616  496485 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-786183 --format={{.State.Status}}
	I1102 14:16:31.248138  496485 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1102 14:16:31.252258  496485 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1102 14:16:31.252278  496485 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1102 14:16:31.252338  496485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-786183
	I1102 14:16:31.254665  496485 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1102 14:16:31.262763  496485 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1102 14:16:31.265637  496485 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1102 14:16:31.265663  496485 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1102 14:16:31.265736  496485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-786183
	I1102 14:16:31.300128  496485 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1102 14:16:31.300153  496485 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1102 14:16:31.300215  496485 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-786183
	I1102 14:16:31.326813  496485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33456 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/default-k8s-diff-port-786183/id_rsa Username:docker}
	I1102 14:16:31.351928  496485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33456 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/default-k8s-diff-port-786183/id_rsa Username:docker}
	I1102 14:16:31.375448  496485 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33456 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/default-k8s-diff-port-786183/id_rsa Username:docker}
	I1102 14:16:31.602741  496485 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1102 14:16:31.628307  496485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1102 14:16:31.644242  496485 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-786183" to be "Ready" ...
	I1102 14:16:31.681575  496485 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1102 14:16:31.681602  496485 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1102 14:16:31.685402  496485 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1102 14:16:31.766334  496485 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1102 14:16:31.766361  496485 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1102 14:16:31.861770  496485 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1102 14:16:31.861794  496485 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
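Each addon manifest is scp'd into /etc/kubernetes/addons and then applied with the cluster's own kubectl binary against the in-node kubeconfig, as the storage-provisioner and storageclass lines above show. Applying one of the staged dashboard manifests by hand follows the same pattern (paths from the log):

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
        /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml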
	
	
	==> CRI-O <==
	Nov 02 14:16:18 embed-certs-955646 crio[684]: time="2025-11-02T14:16:18.125663332Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 02 14:16:18 embed-certs-955646 crio[684]: time="2025-11-02T14:16:18.128672059Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 02 14:16:18 embed-certs-955646 crio[684]: time="2025-11-02T14:16:18.12870598Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 02 14:16:18 embed-certs-955646 crio[684]: time="2025-11-02T14:16:18.128724704Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 02 14:16:18 embed-certs-955646 crio[684]: time="2025-11-02T14:16:18.131797596Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 02 14:16:18 embed-certs-955646 crio[684]: time="2025-11-02T14:16:18.131833264Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 02 14:16:18 embed-certs-955646 crio[684]: time="2025-11-02T14:16:18.131855788Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 02 14:16:18 embed-certs-955646 crio[684]: time="2025-11-02T14:16:18.134902317Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 02 14:16:18 embed-certs-955646 crio[684]: time="2025-11-02T14:16:18.13493979Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 02 14:16:18 embed-certs-955646 crio[684]: time="2025-11-02T14:16:18.134965242Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 02 14:16:18 embed-certs-955646 crio[684]: time="2025-11-02T14:16:18.138208696Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 02 14:16:18 embed-certs-955646 crio[684]: time="2025-11-02T14:16:18.138245472Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 02 14:16:22 embed-certs-955646 crio[684]: time="2025-11-02T14:16:22.273197913Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=a2ddad05-0afb-4ec1-8fd6-d70f8c4d10f4 name=/runtime.v1.ImageService/ImageStatus
	Nov 02 14:16:22 embed-certs-955646 crio[684]: time="2025-11-02T14:16:22.27439566Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=22f591f9-106b-4e95-b69e-df1c59947274 name=/runtime.v1.ImageService/ImageStatus
	Nov 02 14:16:22 embed-certs-955646 crio[684]: time="2025-11-02T14:16:22.27529395Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-d5qv6/dashboard-metrics-scraper" id=3286d5b3-a7b5-4c17-a8de-bac2b7f422e7 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 14:16:22 embed-certs-955646 crio[684]: time="2025-11-02T14:16:22.275411884Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 14:16:22 embed-certs-955646 crio[684]: time="2025-11-02T14:16:22.285226571Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 14:16:22 embed-certs-955646 crio[684]: time="2025-11-02T14:16:22.285846754Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 14:16:22 embed-certs-955646 crio[684]: time="2025-11-02T14:16:22.308631254Z" level=info msg="Created container f13f3bd50aa03124042acc9a5370ba88dc79123989a79b756e5c45dc349d6017: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-d5qv6/dashboard-metrics-scraper" id=3286d5b3-a7b5-4c17-a8de-bac2b7f422e7 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 14:16:22 embed-certs-955646 crio[684]: time="2025-11-02T14:16:22.309817966Z" level=info msg="Starting container: f13f3bd50aa03124042acc9a5370ba88dc79123989a79b756e5c45dc349d6017" id=8365893f-619d-4274-a672-3c0cece90d84 name=/runtime.v1.RuntimeService/StartContainer
	Nov 02 14:16:22 embed-certs-955646 crio[684]: time="2025-11-02T14:16:22.31208143Z" level=info msg="Started container" PID=1754 containerID=f13f3bd50aa03124042acc9a5370ba88dc79123989a79b756e5c45dc349d6017 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-d5qv6/dashboard-metrics-scraper id=8365893f-619d-4274-a672-3c0cece90d84 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ef616118b55206871ad442bf8fb420c021545ad89a89db85b2bbe5fb568ac7bf
	Nov 02 14:16:22 embed-certs-955646 conmon[1752]: conmon f13f3bd50aa03124042a <ninfo>: container 1754 exited with status 1
	Nov 02 14:16:22 embed-certs-955646 crio[684]: time="2025-11-02T14:16:22.463567207Z" level=info msg="Removing container: ef426e5a083cea762ab338335868092fbefffe917112753aa8640fc71db0b316" id=20541653-9344-43e8-bf11-3ab69149f16a name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 02 14:16:22 embed-certs-955646 crio[684]: time="2025-11-02T14:16:22.479170498Z" level=info msg="Error loading conmon cgroup of container ef426e5a083cea762ab338335868092fbefffe917112753aa8640fc71db0b316: cgroup deleted" id=20541653-9344-43e8-bf11-3ab69149f16a name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 02 14:16:22 embed-certs-955646 crio[684]: time="2025-11-02T14:16:22.484141756Z" level=info msg="Removed container ef426e5a083cea762ab338335868092fbefffe917112753aa8640fc71db0b316: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-d5qv6/dashboard-metrics-scraper" id=20541653-9344-43e8-bf11-3ab69149f16a name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	f13f3bd50aa03       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           13 seconds ago       Exited              dashboard-metrics-scraper   3                   ef616118b5520       dashboard-metrics-scraper-6ffb444bf9-d5qv6   kubernetes-dashboard
	6da0839e1d299       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           27 seconds ago       Running             storage-provisioner         2                   7033724a49887       storage-provisioner                          kube-system
	9bc5d7391685e       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   45 seconds ago       Running             kubernetes-dashboard        0                   f448eec3af598       kubernetes-dashboard-855c9754f9-hp5zz        kubernetes-dashboard
	6a9aa1a8dc805       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           57 seconds ago       Running             busybox                     1                   6a247b5449b94       busybox                                      default
	00f450128bd23       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           57 seconds ago       Running             coredns                     1                   859daaed9acbe       coredns-66bc5c9577-h7hk7                     kube-system
	e62f6bc1f097c       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           57 seconds ago       Running             kindnet-cni                 1                   0aadb994c0cff       kindnet-fvxzq                                kube-system
	c8dd4cc06305b       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           57 seconds ago       Running             kube-proxy                  1                   890fc14d0bdd6       kube-proxy-hg44j                             kube-system
	a67c468c1c763       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           57 seconds ago       Exited              storage-provisioner         1                   7033724a49887       storage-provisioner                          kube-system
	f18821cbf7e95       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   12a676cc95d10       etcd-embed-certs-955646                      kube-system
	6eca59e1e6707       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   5d204b0427660       kube-controller-manager-embed-certs-955646   kube-system
	cf67be7dd17d4       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   d3f511b74c2e9       kube-apiserver-embed-certs-955646            kube-system
	b2cda95c0fa73       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   883ff6970e384       kube-scheduler-embed-certs-955646            kube-system
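The table shows dashboard-metrics-scraper on its third failed attempt (Exited, matching the create/start/exit-status-1/remove cycle in the CRI-O log above) while every other container runs. Whatever the container printed before exiting can be retrieved by its ID, copied from the table; the output may be empty if it died before logging:

    sudo crictl logs f13f3bd50aa03124042acc9a5370ba88dc79123989a79b756e5c45dc349d6017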
	
	
	==> coredns [00f450128bd23b7b717ccc9034de35182d4ca2b29f00a757b513c4ef9dae1e76] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:43285 - 7107 "HINFO IN 590304181686656473.5239133587992657210. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.046539103s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
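The reflector errors show CoreDNS timing out against the kubernetes service VIP (10.96.0.1:443) while the apiserver was restarting; once the apiserver came back, the plugin synced and the earlier "waiting for Kubernetes API" loop ended. VIP reachability can be probed from inside the cluster with a throwaway pod (the image choice is illustrative):

    # Any HTTP status back, even 401/403, proves the VIP and TLS path are up.
    kubectl run viptest --rm -it --restart=Never --image=curlimages/curl -- \
        curl -sk -o /dev/null -w '%{http_code}\n' https://10.96.0.1:443/healthz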
	
	
	==> describe nodes <==
	Name:               embed-certs-955646
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-955646
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=170a9221ec214abbddb4c7cdac340516a92b239a
	                    minikube.k8s.io/name=embed-certs-955646
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_02T14_14_11_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 02 Nov 2025 14:14:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-955646
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 02 Nov 2025 14:16:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 02 Nov 2025 14:16:07 +0000   Sun, 02 Nov 2025 14:14:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 02 Nov 2025 14:16:07 +0000   Sun, 02 Nov 2025 14:14:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 02 Nov 2025 14:16:07 +0000   Sun, 02 Nov 2025 14:14:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 02 Nov 2025 14:16:07 +0000   Sun, 02 Nov 2025 14:14:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-955646
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                b99c9ad5-2cef-4b58-868b-a11cc5355016
	  Boot ID:                    4c303cf4-c0b6-4c0d-aa1a-d7509feb15e7
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 coredns-66bc5c9577-h7hk7                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m21s
	  kube-system                 etcd-embed-certs-955646                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m25s
	  kube-system                 kindnet-fvxzq                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m21s
	  kube-system                 kube-apiserver-embed-certs-955646             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 kube-controller-manager-embed-certs-955646    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 kube-proxy-hg44j                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 kube-scheduler-embed-certs-955646             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m19s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-d5qv6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-hp5zz         0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m20s                  kube-proxy       
	  Normal   Starting                 57s                    kube-proxy       
	  Normal   Starting                 2m34s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m34s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m33s (x8 over 2m33s)  kubelet          Node embed-certs-955646 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m33s (x8 over 2m33s)  kubelet          Node embed-certs-955646 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m33s (x8 over 2m33s)  kubelet          Node embed-certs-955646 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m26s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m26s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    2m25s                  kubelet          Node embed-certs-955646 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m25s                  kubelet          Node embed-certs-955646 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  2m25s                  kubelet          Node embed-certs-955646 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           2m22s                  node-controller  Node embed-certs-955646 event: Registered Node embed-certs-955646 in Controller
	  Normal   NodeReady                99s                    kubelet          Node embed-certs-955646 status is now: NodeReady
	  Normal   Starting                 65s                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 65s                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  65s (x8 over 65s)      kubelet          Node embed-certs-955646 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    65s (x8 over 65s)      kubelet          Node embed-certs-955646 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     65s (x8 over 65s)      kubelet          Node embed-certs-955646 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           56s                    node-controller  Node embed-certs-955646 event: Registered Node embed-certs-955646 in Controller
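Three "Starting kubelet." entries (2m34s, 2m26s and 65s ago) plus two RegisteredNode events indicate the kubelet came up twice more after the initial boot, consistent with the restart cycle this test exercises. The same history can be listed chronologically for just this node:

    kubectl get events --field-selector involvedObject.name=embed-certs-955646 \
        --sort-by=.lastTimestamp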
	
	
	==> dmesg <==
	[Nov 2 13:56] overlayfs: idmapped layers are currently not supported
	[  +3.515963] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:57] overlayfs: idmapped layers are currently not supported
	[ +24.836033] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:58] overlayfs: idmapped layers are currently not supported
	[ +23.362553] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:59] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:01] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:02] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:03] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:04] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:06] overlayfs: idmapped layers are currently not supported
	[ +50.469589] kauditd_printk_skb: 8 callbacks suppressed
	[Nov 2 14:07] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:08] overlayfs: idmapped layers are currently not supported
	[ +11.089512] overlayfs: idmapped layers are currently not supported
	[ +33.821233] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:09] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:10] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:11] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:13] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:14] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:15] overlayfs: idmapped layers are currently not supported
	[ +29.099512] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:16] overlayfs: idmapped layers are currently not supported
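The repeated "overlayfs: idmapped layers are currently not supported" lines are emitted once per nested container start: this 5.15 kernel predates overlayfs support for id-mapped mounts, so the warning is expected and harmless here. A quick sanity check on the node:

    uname -r                          # 5.15.0-1084-aws, per the kernel section below
    grep overlay /proc/filesystems    # overlayfs itself is registered; only idmapped layers are missing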
	
	
	==> etcd [f18821cbf7e95d9a372afcb877b644585ef683291b4420587127c19d9c80ed5d] <==
	{"level":"warn","ts":"2025-11-02T14:15:34.986062Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:15:35.005379Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:15:35.021674Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:15:35.036282Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:15:35.052552Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:15:35.067738Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:15:35.095294Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:15:35.106038Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:15:35.123733Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:15:35.138793Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44888","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:15:35.159516Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:15:35.172366Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:15:35.190025Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44936","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:15:35.207468Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:15:35.226918Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:15:35.235504Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:15:35.250519Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:15:35.268389Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:15:35.282740Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:15:35.299931Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:15:35.315349Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:15:35.349274Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:15:35.388022Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:15:35.396135Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45158","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:15:35.450821Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45176","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 14:16:35 up  2:59,  0 user,  load average: 3.84, 3.45, 2.98
	Linux embed-certs-955646 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e62f6bc1f097c21e3482b5659534d25d050cbbe3ad2fc4ee473624d94c2098dd] <==
	I1102 14:15:37.918087       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1102 14:15:37.918462       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1102 14:15:37.918660       1 main.go:148] setting mtu 1500 for CNI 
	I1102 14:15:37.918708       1 main.go:178] kindnetd IP family: "ipv4"
	I1102 14:15:37.918758       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-02T14:15:38Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1102 14:15:38.120319       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1102 14:15:38.120349       1 controller.go:381] "Waiting for informer caches to sync"
	I1102 14:15:38.120358       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1102 14:15:38.121058       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1102 14:16:08.121414       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1102 14:16:08.121528       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1102 14:16:08.121610       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1102 14:16:08.121681       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1102 14:16:09.320509       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1102 14:16:09.320543       1 metrics.go:72] Registering metrics
	I1102 14:16:09.320614       1 controller.go:711] "Syncing nftables rules"
	I1102 14:16:18.120898       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1102 14:16:18.120950       1 main.go:301] handling current node
	I1102 14:16:28.122769       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1102 14:16:28.122910       1 main.go:301] handling current node
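kindnet's "nri plugin exited" line is informational: the plugin gives up when the runtime exposes no NRI socket, and network-policy syncing continues without it, as the "Caches are synced" and nftables lines that follow show. Whether the socket exists can be checked on the node (path from the log):

    test -S /var/run/nri/nri.sock && echo "NRI socket present" || echo "NRI socket absent"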
	
	
	==> kube-apiserver [cf67be7dd17d48724284cdf38b21f101a7c4398491b1bd72bc00ab1299492eff] <==
	I1102 14:15:36.218889       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1102 14:15:36.222688       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1102 14:15:36.228077       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1102 14:15:36.228139       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1102 14:15:36.235201       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1102 14:15:36.235221       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1102 14:15:36.235327       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1102 14:15:36.261159       1 cache.go:39] Caches are synced for autoregister controller
	I1102 14:15:36.278902       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1102 14:15:36.291649       1 default_servicecidr_controller.go:111] Starting kubernetes-service-cidr-controller
	I1102 14:15:36.291678       1 shared_informer.go:349] "Waiting for caches to sync" controller="kubernetes-service-cidr-controller"
	I1102 14:15:36.515139       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1102 14:15:36.515220       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	E1102 14:15:36.527576       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1102 14:15:37.003333       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1102 14:15:37.158057       1 controller.go:667] quota admission added evaluator for: namespaces
	I1102 14:15:37.247495       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1102 14:15:37.318289       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1102 14:15:37.386024       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1102 14:15:37.422081       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1102 14:15:37.594073       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.140.138"}
	I1102 14:15:37.641682       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.151.240"}
	I1102 14:15:40.028308       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1102 14:15:40.111504       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1102 14:15:40.211232       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [6eca59e1e67071a7fc5a83a34cccd31a907855a85d0659f4b0148c6382cc8beb] <==
	I1102 14:15:39.608145       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1102 14:15:39.608193       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1102 14:15:39.613335       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1102 14:15:39.613338       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1102 14:15:39.619498       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1102 14:15:39.622828       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1102 14:15:39.624012       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1102 14:15:39.636091       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1102 14:15:39.636164       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1102 14:15:39.636199       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1102 14:15:39.636213       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1102 14:15:39.636219       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1102 14:15:39.639667       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1102 14:15:39.645905       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1102 14:15:39.649065       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1102 14:15:39.653968       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1102 14:15:39.654740       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1102 14:15:39.654845       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1102 14:15:39.654873       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1102 14:15:39.655105       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1102 14:15:39.718562       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1102 14:15:39.804139       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1102 14:15:39.804264       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1102 14:15:39.804296       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1102 14:15:39.819368       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [c8dd4cc06305b2c6b07d015128b0d599ee5422ea6ff0b80a3124fbb4256c3cbe] <==
	I1102 14:15:37.906145       1 server_linux.go:53] "Using iptables proxy"
	I1102 14:15:37.996034       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1102 14:15:38.096500       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1102 14:15:38.096534       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1102 14:15:38.096628       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1102 14:15:38.114589       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1102 14:15:38.114742       1 server_linux.go:132] "Using iptables Proxier"
	I1102 14:15:38.118549       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1102 14:15:38.119081       1 server.go:527] "Version info" version="v1.34.1"
	I1102 14:15:38.119335       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1102 14:15:38.120908       1 config.go:200] "Starting service config controller"
	I1102 14:15:38.120973       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1102 14:15:38.121017       1 config.go:106] "Starting endpoint slice config controller"
	I1102 14:15:38.121061       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1102 14:15:38.121152       1 config.go:403] "Starting serviceCIDR config controller"
	I1102 14:15:38.121186       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1102 14:15:38.122870       1 config.go:309] "Starting node config controller"
	I1102 14:15:38.123185       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1102 14:15:38.123232       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1102 14:15:38.221155       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1102 14:15:38.221155       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1102 14:15:38.221402       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
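kube-proxy's only complaint is the unset nodePortAddresses, meaning NodePort connections are accepted on every local IP; the message itself names the remedy, quoted here in flag form:

    kube-proxy --nodeport-addresses primary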
	
	
	==> kube-scheduler [b2cda95c0fa73867332aa42c9cd6aad92c60f000d6837089bac2ad63937e9752] <==
	I1102 14:15:33.150931       1 serving.go:386] Generated self-signed cert in-memory
	W1102 14:15:36.023111       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1102 14:15:36.023149       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1102 14:15:36.023161       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1102 14:15:36.023169       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1102 14:15:36.283379       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1102 14:15:36.283416       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1102 14:15:36.295067       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1102 14:15:36.300289       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1102 14:15:36.300328       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1102 14:15:36.300355       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1102 14:15:36.400820       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
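The requestheader warning is the usual kubeadm-cluster case: system:kube-scheduler authenticates with a client certificate and lacks read access to the extension-apiserver-authentication ConfigMap, so it proceeds without that lookup, which is harmless here. The log's own suggested fix, adapted to bind the role to the denied user rather than a service account (the binding name is illustrative):

    kubectl create rolebinding scheduler-auth-reader -n kube-system \
        --role=extension-apiserver-authentication-reader \
        --user=system:kube-scheduler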
	
	
	==> kubelet <==
	Nov 02 14:15:40 embed-certs-955646 kubelet[809]: W1102 14:15:40.676443     809 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/30c758ef671a78927180fb97bde8b1a9a094e365398db33e109ee628c96e6553/crio-f448eec3af59809dce2ed4caa01733c011535b4c550762572e14499c48ccd01a WatchSource:0}: Error finding container f448eec3af59809dce2ed4caa01733c011535b4c550762572e14499c48ccd01a: Status 404 returned error can't find the container with id f448eec3af59809dce2ed4caa01733c011535b4c550762572e14499c48ccd01a
	Nov 02 14:15:45 embed-certs-955646 kubelet[809]: I1102 14:15:45.350995     809 scope.go:117] "RemoveContainer" containerID="77ab14028e37bbbf25fad57eabba4438c2f6cec2e5f1f8786c187a5a69487ea6"
	Nov 02 14:15:45 embed-certs-955646 kubelet[809]: I1102 14:15:45.823854     809 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 02 14:15:46 embed-certs-955646 kubelet[809]: I1102 14:15:46.363099     809 scope.go:117] "RemoveContainer" containerID="77ab14028e37bbbf25fad57eabba4438c2f6cec2e5f1f8786c187a5a69487ea6"
	Nov 02 14:15:46 embed-certs-955646 kubelet[809]: I1102 14:15:46.363379     809 scope.go:117] "RemoveContainer" containerID="20cc841cc1483bf8a564bf8cb939fd7cbd42c4f558af3adbc66ab2b60ce82029"
	Nov 02 14:15:46 embed-certs-955646 kubelet[809]: E1102 14:15:46.363514     809 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-d5qv6_kubernetes-dashboard(b9ae9591-80b7-4e44-8568-9b84dcc46139)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-d5qv6" podUID="b9ae9591-80b7-4e44-8568-9b84dcc46139"
	Nov 02 14:15:47 embed-certs-955646 kubelet[809]: I1102 14:15:47.369120     809 scope.go:117] "RemoveContainer" containerID="20cc841cc1483bf8a564bf8cb939fd7cbd42c4f558af3adbc66ab2b60ce82029"
	Nov 02 14:15:47 embed-certs-955646 kubelet[809]: E1102 14:15:47.370051     809 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-d5qv6_kubernetes-dashboard(b9ae9591-80b7-4e44-8568-9b84dcc46139)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-d5qv6" podUID="b9ae9591-80b7-4e44-8568-9b84dcc46139"
	Nov 02 14:15:50 embed-certs-955646 kubelet[809]: I1102 14:15:50.617583     809 scope.go:117] "RemoveContainer" containerID="20cc841cc1483bf8a564bf8cb939fd7cbd42c4f558af3adbc66ab2b60ce82029"
	Nov 02 14:15:50 embed-certs-955646 kubelet[809]: E1102 14:15:50.617806     809 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-d5qv6_kubernetes-dashboard(b9ae9591-80b7-4e44-8568-9b84dcc46139)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-d5qv6" podUID="b9ae9591-80b7-4e44-8568-9b84dcc46139"
	Nov 02 14:16:01 embed-certs-955646 kubelet[809]: I1102 14:16:01.272710     809 scope.go:117] "RemoveContainer" containerID="20cc841cc1483bf8a564bf8cb939fd7cbd42c4f558af3adbc66ab2b60ce82029"
	Nov 02 14:16:01 embed-certs-955646 kubelet[809]: I1102 14:16:01.407707     809 scope.go:117] "RemoveContainer" containerID="20cc841cc1483bf8a564bf8cb939fd7cbd42c4f558af3adbc66ab2b60ce82029"
	Nov 02 14:16:01 embed-certs-955646 kubelet[809]: I1102 14:16:01.407997     809 scope.go:117] "RemoveContainer" containerID="ef426e5a083cea762ab338335868092fbefffe917112753aa8640fc71db0b316"
	Nov 02 14:16:01 embed-certs-955646 kubelet[809]: E1102 14:16:01.408150     809 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-d5qv6_kubernetes-dashboard(b9ae9591-80b7-4e44-8568-9b84dcc46139)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-d5qv6" podUID="b9ae9591-80b7-4e44-8568-9b84dcc46139"
	Nov 02 14:16:01 embed-certs-955646 kubelet[809]: I1102 14:16:01.429689     809 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-hp5zz" podStartSLOduration=12.452817982 podStartE2EDuration="21.429648133s" podCreationTimestamp="2025-11-02 14:15:40 +0000 UTC" firstStartedPulling="2025-11-02 14:15:40.679052967 +0000 UTC m=+10.692216333" lastFinishedPulling="2025-11-02 14:15:49.655883118 +0000 UTC m=+19.669046484" observedRunningTime="2025-11-02 14:15:50.393787955 +0000 UTC m=+20.406951345" watchObservedRunningTime="2025-11-02 14:16:01.429648133 +0000 UTC m=+31.442811589"
	Nov 02 14:16:08 embed-certs-955646 kubelet[809]: I1102 14:16:08.426563     809 scope.go:117] "RemoveContainer" containerID="a67c468c1c763f272b0a9c52725437da784062dfc833ddc965b3fbeb8cca238c"
	Nov 02 14:16:10 embed-certs-955646 kubelet[809]: I1102 14:16:10.618660     809 scope.go:117] "RemoveContainer" containerID="ef426e5a083cea762ab338335868092fbefffe917112753aa8640fc71db0b316"
	Nov 02 14:16:10 embed-certs-955646 kubelet[809]: E1102 14:16:10.618834     809 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-d5qv6_kubernetes-dashboard(b9ae9591-80b7-4e44-8568-9b84dcc46139)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-d5qv6" podUID="b9ae9591-80b7-4e44-8568-9b84dcc46139"
	Nov 02 14:16:22 embed-certs-955646 kubelet[809]: I1102 14:16:22.272349     809 scope.go:117] "RemoveContainer" containerID="ef426e5a083cea762ab338335868092fbefffe917112753aa8640fc71db0b316"
	Nov 02 14:16:22 embed-certs-955646 kubelet[809]: I1102 14:16:22.462299     809 scope.go:117] "RemoveContainer" containerID="ef426e5a083cea762ab338335868092fbefffe917112753aa8640fc71db0b316"
	Nov 02 14:16:23 embed-certs-955646 kubelet[809]: I1102 14:16:23.466078     809 scope.go:117] "RemoveContainer" containerID="f13f3bd50aa03124042acc9a5370ba88dc79123989a79b756e5c45dc349d6017"
	Nov 02 14:16:23 embed-certs-955646 kubelet[809]: E1102 14:16:23.466237     809 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-d5qv6_kubernetes-dashboard(b9ae9591-80b7-4e44-8568-9b84dcc46139)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-d5qv6" podUID="b9ae9591-80b7-4e44-8568-9b84dcc46139"
	Nov 02 14:16:28 embed-certs-955646 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 02 14:16:28 embed-certs-955646 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 02 14:16:28 embed-certs-955646 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [9bc5d7391685e8a5cd00f7fa23ac16536bf7625564882416c6a722715462fbb0] <==
	2025/11/02 14:15:49 Starting overwatch
	2025/11/02 14:15:49 Using namespace: kubernetes-dashboard
	2025/11/02 14:15:49 Using in-cluster config to connect to apiserver
	2025/11/02 14:15:49 Using secret token for csrf signing
	2025/11/02 14:15:49 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/02 14:15:49 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/02 14:15:49 Successful initial request to the apiserver, version: v1.34.1
	2025/11/02 14:15:49 Generating JWE encryption key
	2025/11/02 14:15:49 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/02 14:15:49 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/02 14:15:50 Initializing JWE encryption key from synchronized object
	2025/11/02 14:15:50 Creating in-cluster Sidecar client
	2025/11/02 14:15:50 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/02 14:15:50 Serving insecurely on HTTP port: 9090
	2025/11/02 14:16:20 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [6da0839e1d2999ea3f428a5acfe5a837f84636d51064907eb29cfe7f2701f8b4] <==
	I1102 14:16:08.530110       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1102 14:16:08.533851       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1102 14:16:08.546115       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:16:12.011907       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:16:16.272515       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:16:19.870743       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:16:22.925089       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:16:25.947573       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:16:25.955318       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1102 14:16:25.955500       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1102 14:16:25.955702       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-955646_d83474a9-2881-43da-a511-7c1321de2e68!
	I1102 14:16:25.956463       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"192779df-44b5-4e02-8171-660f368cbc29", APIVersion:"v1", ResourceVersion:"690", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-955646_d83474a9-2881-43da-a511-7c1321de2e68 became leader
	W1102 14:16:25.959068       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:16:25.965631       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1102 14:16:26.056562       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-955646_d83474a9-2881-43da-a511-7c1321de2e68!
	W1102 14:16:27.972801       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:16:27.982365       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:16:29.986520       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:16:29.996064       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:16:32.000055       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:16:32.008954       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:16:34.019764       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:16:34.028779       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:16:36.033056       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:16:36.043715       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [a67c468c1c763f272b0a9c52725437da784062dfc833ddc965b3fbeb8cca238c] <==
	I1102 14:15:37.714097       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1102 14:16:07.726868       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
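Aside on the storage-provisioner failure captured above: the crashed container (a67c468c...) dies at main.go:39 on its start-up probe, a GET to https://10.96.0.1:443/version (the in-cluster apiserver service VIP) that times out after 32s. For reference, a minimal client-go sketch of that same probe is below; it assumes a standard in-cluster service account, and the file and variable names are illustrative rather than taken from the provisioner source.

	// versionprobe.go - a sketch of the in-cluster "get server version" call
	// that the provisioner log above fails on. rest.InClusterConfig reads the
	// mounted service-account token and the KUBERNETES_SERVICE_* env vars,
	// which is why the request targets the 10.96.0.1:443 service VIP.
	package main

	import (
		"fmt"
		"log"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			log.Fatalf("not running in a cluster: %v", err)
		}
		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatalf("building clientset: %v", err)
		}
		v, err := clientset.Discovery().ServerVersion() // GET /version
		if err != nil {
			log.Fatalf("error getting server version: %v", err) // mirrors main.go:39 above
		}
		fmt.Printf("apiserver version: %s\n", v.GitVersion)
	}

An i/o timeout at this step usually means the service VIP was not yet reachable from the pod (kube-proxy or the CNI still settling after the node restart) rather than the apiserver being down, which is consistent with the replacement container (6da0839e...) initializing cleanly at 14:16:08.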
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-955646 -n embed-certs-955646
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-955646 -n embed-certs-955646: exit status 2 (533.169559ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-955646 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (8.84s)
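One detail worth reading out of the kubelet capture in the logs above: the dashboard-metrics-scraper restarts walk the kubelet's CrashLoopBackOff schedule, with the delay doubling 10s, then 20s, then 40s. A toy Go model of that policy follows; the 10s start and the doubling are visible in this report, while the 5-minute cap is the upstream kubelet default and is assumed here rather than observed.

	// A toy model of the CrashLoopBackOff delays in the kubelet log above:
	// start at 10s, double on each restart, never exceed maxDelay.
	// maxDelay (5m) is the upstream kubelet default, not shown in this log.
	package main

	import (
		"fmt"
		"time"
	)

	func crashLoopDelays(restarts int) []time.Duration {
		const (
			initial  = 10 * time.Second
			maxDelay = 5 * time.Minute
		)
		delays := make([]time.Duration, 0, restarts)
		d := initial
		for i := 0; i < restarts; i++ {
			delays = append(delays, d)
			d *= 2
			if d > maxDelay {
				d = maxDelay
			}
		}
		return delays
	}

	func main() {
		fmt.Println(crashLoopDelays(7)) // [10s 20s 40s 1m20s 2m40s 5m0s 5m0s]
	}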

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.46s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-352233 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-352233 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (319.550871ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T14:17:25Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-352233 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
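The MK_ADDON_ENABLE_PAUSED exit above is minikube's pre-flight paused-container check: per the stderr, it shells out to `sudo runc list -f json`, which aborts on this CRI-O node because runc's default state root, /run/runc, does not exist. Below is a rough sketch of a more tolerant variant of that check, assuming an absent state root simply means no runc-managed containers yet; the optional --root override is an assumption about alternate runtime layouts, not something shown in this log.

	// Sketch of the paused-container check that fails above: run
	// `sudo runc list -f json` and treat "no such file or directory"
	// on the state root as an empty container list rather than an error.
	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	// runcContainer holds the `runc list -f json` fields used here.
	type runcContainer struct {
		ID     string `json:"id"`
		Status string `json:"status"` // "paused", "running", ...
	}

	func listPaused(root string) ([]string, error) {
		args := []string{"runc"}
		if root != "" {
			args = append(args, "--root", root) // hypothetical override for non-default layouts
		}
		args = append(args, "list", "-f", "json")
		out, err := exec.Command("sudo", args...).CombinedOutput()
		if err != nil {
			// An absent state root (the /run/runc error above) means runc
			// has created no containers; nothing can be paused.
			if strings.Contains(string(out), "no such file or directory") {
				return nil, nil
			}
			return nil, fmt.Errorf("runc list: %v: %s", err, out)
		}
		var cs []runcContainer
		if err := json.Unmarshal(out, &cs); err != nil {
			return nil, err
		}
		var paused []string
		for _, c := range cs {
			if c.Status == "paused" {
				paused = append(paused, c.ID)
			}
		}
		return paused, nil
	}

	func main() {
		paused, err := listPaused("") // "" = runc's default /run/runc root
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("paused containers:", paused)
	}

With this shape of check, the `open /run/runc` condition seen in the stderr above degrades to "nothing paused" instead of failing the whole addon enable.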
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-352233
helpers_test.go:243: (dbg) docker inspect newest-cni-352233:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3dedeeb54f374fa01ce72bc2b41c87fc231415be9be044642fbcda8c122ba0ff",
	        "Created": "2025-11-02T14:16:47.051560266Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 500131,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-02T14:16:47.127024204Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/3dedeeb54f374fa01ce72bc2b41c87fc231415be9be044642fbcda8c122ba0ff/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3dedeeb54f374fa01ce72bc2b41c87fc231415be9be044642fbcda8c122ba0ff/hostname",
	        "HostsPath": "/var/lib/docker/containers/3dedeeb54f374fa01ce72bc2b41c87fc231415be9be044642fbcda8c122ba0ff/hosts",
	        "LogPath": "/var/lib/docker/containers/3dedeeb54f374fa01ce72bc2b41c87fc231415be9be044642fbcda8c122ba0ff/3dedeeb54f374fa01ce72bc2b41c87fc231415be9be044642fbcda8c122ba0ff-json.log",
	        "Name": "/newest-cni-352233",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "newest-cni-352233:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-352233",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3dedeeb54f374fa01ce72bc2b41c87fc231415be9be044642fbcda8c122ba0ff",
	                "LowerDir": "/var/lib/docker/overlay2/6a465501cbe8e86cbcc859ba8574cb7d3d77365eeff8339b92edb281a1936040-init/diff:/var/lib/docker/overlay2/53058afe6acd7391f639f551e07f446f6a39a991b699aea18507cb21a1652e97/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6a465501cbe8e86cbcc859ba8574cb7d3d77365eeff8339b92edb281a1936040/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6a465501cbe8e86cbcc859ba8574cb7d3d77365eeff8339b92edb281a1936040/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6a465501cbe8e86cbcc859ba8574cb7d3d77365eeff8339b92edb281a1936040/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-352233",
	                "Source": "/var/lib/docker/volumes/newest-cni-352233/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-352233",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-352233",
	                "name.minikube.sigs.k8s.io": "newest-cni-352233",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "28f5f33e3b808f18976ba1e3f8d540a8d88483222ca5f420a013acabe904f5db",
	            "SandboxKey": "/var/run/docker/netns/28f5f33e3b80",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33461"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33462"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33465"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33463"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33464"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-352233": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "0e:fe:6c:d6:0f:69",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f61df99b10f05d6b77aff7bd79b1aba98b765bd7b0b260056e12ed71f894662d",
	                    "EndpointID": "b4e774c49189553e7f85519dc41f53cdb29b8a89cc4dfcee693727b423fc676c",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-352233",
	                        "3dedeeb54f37"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-352233 -n newest-cni-352233
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-352233 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-352233 logs -n 25: (1.097577988s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p no-preload-150469 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-150469            │ jenkins │ v1.37.0 │ 02 Nov 25 14:13 UTC │ 02 Nov 25 14:13 UTC │
	│ addons  │ enable dashboard -p no-preload-150469 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-150469            │ jenkins │ v1.37.0 │ 02 Nov 25 14:13 UTC │ 02 Nov 25 14:13 UTC │
	│ start   │ -p no-preload-150469 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-150469            │ jenkins │ v1.37.0 │ 02 Nov 25 14:13 UTC │ 02 Nov 25 14:14 UTC │
	│ delete  │ -p cert-expiration-114321                                                                                                                                                                                                                     │ cert-expiration-114321       │ jenkins │ v1.37.0 │ 02 Nov 25 14:13 UTC │ 02 Nov 25 14:13 UTC │
	│ start   │ -p embed-certs-955646 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-955646           │ jenkins │ v1.37.0 │ 02 Nov 25 14:13 UTC │ 02 Nov 25 14:14 UTC │
	│ image   │ no-preload-150469 image list --format=json                                                                                                                                                                                                    │ no-preload-150469            │ jenkins │ v1.37.0 │ 02 Nov 25 14:14 UTC │ 02 Nov 25 14:14 UTC │
	│ pause   │ -p no-preload-150469 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-150469            │ jenkins │ v1.37.0 │ 02 Nov 25 14:14 UTC │                     │
	│ delete  │ -p no-preload-150469                                                                                                                                                                                                                          │ no-preload-150469            │ jenkins │ v1.37.0 │ 02 Nov 25 14:14 UTC │ 02 Nov 25 14:14 UTC │
	│ delete  │ -p no-preload-150469                                                                                                                                                                                                                          │ no-preload-150469            │ jenkins │ v1.37.0 │ 02 Nov 25 14:14 UTC │ 02 Nov 25 14:14 UTC │
	│ delete  │ -p disable-driver-mounts-720030                                                                                                                                                                                                               │ disable-driver-mounts-720030 │ jenkins │ v1.37.0 │ 02 Nov 25 14:14 UTC │ 02 Nov 25 14:14 UTC │
	│ start   │ -p default-k8s-diff-port-786183 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-786183 │ jenkins │ v1.37.0 │ 02 Nov 25 14:14 UTC │ 02 Nov 25 14:15 UTC │
	│ addons  │ enable metrics-server -p embed-certs-955646 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-955646           │ jenkins │ v1.37.0 │ 02 Nov 25 14:15 UTC │                     │
	│ stop    │ -p embed-certs-955646 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-955646           │ jenkins │ v1.37.0 │ 02 Nov 25 14:15 UTC │ 02 Nov 25 14:15 UTC │
	│ addons  │ enable dashboard -p embed-certs-955646 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-955646           │ jenkins │ v1.37.0 │ 02 Nov 25 14:15 UTC │ 02 Nov 25 14:15 UTC │
	│ start   │ -p embed-certs-955646 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-955646           │ jenkins │ v1.37.0 │ 02 Nov 25 14:15 UTC │ 02 Nov 25 14:16 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-786183 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-786183 │ jenkins │ v1.37.0 │ 02 Nov 25 14:16 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-786183 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-786183 │ jenkins │ v1.37.0 │ 02 Nov 25 14:16 UTC │ 02 Nov 25 14:16 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-786183 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-786183 │ jenkins │ v1.37.0 │ 02 Nov 25 14:16 UTC │ 02 Nov 25 14:16 UTC │
	│ start   │ -p default-k8s-diff-port-786183 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-786183 │ jenkins │ v1.37.0 │ 02 Nov 25 14:16 UTC │ 02 Nov 25 14:17 UTC │
	│ image   │ embed-certs-955646 image list --format=json                                                                                                                                                                                                   │ embed-certs-955646           │ jenkins │ v1.37.0 │ 02 Nov 25 14:16 UTC │ 02 Nov 25 14:16 UTC │
	│ pause   │ -p embed-certs-955646 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-955646           │ jenkins │ v1.37.0 │ 02 Nov 25 14:16 UTC │                     │
	│ delete  │ -p embed-certs-955646                                                                                                                                                                                                                         │ embed-certs-955646           │ jenkins │ v1.37.0 │ 02 Nov 25 14:16 UTC │ 02 Nov 25 14:16 UTC │
	│ delete  │ -p embed-certs-955646                                                                                                                                                                                                                         │ embed-certs-955646           │ jenkins │ v1.37.0 │ 02 Nov 25 14:16 UTC │ 02 Nov 25 14:16 UTC │
	│ start   │ -p newest-cni-352233 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-352233            │ jenkins │ v1.37.0 │ 02 Nov 25 14:16 UTC │ 02 Nov 25 14:17 UTC │
	│ addons  │ enable metrics-server -p newest-cni-352233 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-352233            │ jenkins │ v1.37.0 │ 02 Nov 25 14:17 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/02 14:16:40
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1102 14:16:40.974265  499526 out.go:360] Setting OutFile to fd 1 ...
	I1102 14:16:40.977404  499526 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 14:16:40.977456  499526 out.go:374] Setting ErrFile to fd 2...
	I1102 14:16:40.977476  499526 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 14:16:40.977772  499526 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-293314/.minikube/bin
	I1102 14:16:40.978245  499526 out.go:368] Setting JSON to false
	I1102 14:16:40.979325  499526 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10753,"bootTime":1762082248,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1102 14:16:40.979428  499526 start.go:143] virtualization:  
	I1102 14:16:40.984513  499526 out.go:179] * [newest-cni-352233] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1102 14:16:40.987692  499526 out.go:179]   - MINIKUBE_LOCATION=21808
	I1102 14:16:40.987762  499526 notify.go:221] Checking for updates...
	I1102 14:16:40.994171  499526 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1102 14:16:40.997152  499526 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-293314/kubeconfig
	I1102 14:16:41.000072  499526 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-293314/.minikube
	I1102 14:16:41.003612  499526 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1102 14:16:41.006551  499526 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1102 14:16:41.009986  499526 config.go:182] Loaded profile config "default-k8s-diff-port-786183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 14:16:41.010112  499526 driver.go:422] Setting default libvirt URI to qemu:///system
	I1102 14:16:41.062957  499526 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1102 14:16:41.063088  499526 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 14:16:41.179373  499526 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-02 14:16:41.163752565 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1102 14:16:41.179474  499526 docker.go:319] overlay module found
	I1102 14:16:41.182761  499526 out.go:179] * Using the docker driver based on user configuration
	I1102 14:16:41.185509  499526 start.go:309] selected driver: docker
	I1102 14:16:41.185527  499526 start.go:930] validating driver "docker" against <nil>
	I1102 14:16:41.185546  499526 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1102 14:16:41.186219  499526 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 14:16:41.281012  499526 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-02 14:16:41.271354912 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1102 14:16:41.281161  499526 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1102 14:16:41.281186  499526 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1102 14:16:41.281413  499526 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1102 14:16:41.284434  499526 out.go:179] * Using Docker driver with root privileges
	I1102 14:16:41.287250  499526 cni.go:84] Creating CNI manager for ""
	I1102 14:16:41.287324  499526 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1102 14:16:41.287334  499526 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1102 14:16:41.287415  499526 start.go:353] cluster config:
	{Name:newest-cni-352233 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-352233 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1102 14:16:41.290470  499526 out.go:179] * Starting "newest-cni-352233" primary control-plane node in "newest-cni-352233" cluster
	I1102 14:16:41.293229  499526 cache.go:124] Beginning downloading kic base image for docker with crio
	I1102 14:16:41.296135  499526 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1102 14:16:41.298940  499526 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1102 14:16:41.299006  499526 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21808-293314/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1102 14:16:41.299016  499526 cache.go:59] Caching tarball of preloaded images
	I1102 14:16:41.299112  499526 preload.go:233] Found /home/jenkins/minikube-integration/21808-293314/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1102 14:16:41.299122  499526 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1102 14:16:41.299227  499526 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/newest-cni-352233/config.json ...
	I1102 14:16:41.299248  499526 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/newest-cni-352233/config.json: {Name:mk2b9e1d0f54e52d912466ab24a12003ffd3729f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 14:16:41.299407  499526 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1102 14:16:41.321189  499526 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1102 14:16:41.321208  499526 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1102 14:16:41.321225  499526 cache.go:233] Successfully downloaded all kic artifacts
	I1102 14:16:41.321246  499526 start.go:360] acquireMachinesLock for newest-cni-352233: {Name:mk656133c677274089939931d0ae5b5b59bd0afb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1102 14:16:41.321339  499526 start.go:364] duration metric: took 77.72µs to acquireMachinesLock for "newest-cni-352233"
	I1102 14:16:41.321364  499526 start.go:93] Provisioning new machine with config: &{Name:newest-cni-352233 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-352233 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1102 14:16:41.321431  499526 start.go:125] createHost starting for "" (driver="docker")
	I1102 14:16:38.411615  496485 node_ready.go:49] node "default-k8s-diff-port-786183" is "Ready"
	I1102 14:16:38.411647  496485 node_ready.go:38] duration metric: took 6.76736527s for node "default-k8s-diff-port-786183" to be "Ready" ...
	I1102 14:16:38.411661  496485 api_server.go:52] waiting for apiserver process to appear ...
	I1102 14:16:38.411723  496485 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1102 14:16:41.270734  496485 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.642391245s)
	I1102 14:16:41.270793  496485 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.585367561s)
	I1102 14:16:41.869301  496485 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (9.633957483s)
	I1102 14:16:41.869520  496485 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.457780638s)
	I1102 14:16:41.869542  496485 api_server.go:72] duration metric: took 10.706243393s to wait for apiserver process to appear ...
	I1102 14:16:41.869555  496485 api_server.go:88] waiting for apiserver healthz status ...
	I1102 14:16:41.869573  496485 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1102 14:16:41.874710  496485 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-786183 addons enable metrics-server
	
	I1102 14:16:41.877672  496485 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1102 14:16:41.324785  499526 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1102 14:16:41.325019  499526 start.go:159] libmachine.API.Create for "newest-cni-352233" (driver="docker")
	I1102 14:16:41.325050  499526 client.go:173] LocalClient.Create starting
	I1102 14:16:41.325135  499526 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem
	I1102 14:16:41.325168  499526 main.go:143] libmachine: Decoding PEM data...
	I1102 14:16:41.325182  499526 main.go:143] libmachine: Parsing certificate...
	I1102 14:16:41.325232  499526 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21808-293314/.minikube/certs/cert.pem
	I1102 14:16:41.325250  499526 main.go:143] libmachine: Decoding PEM data...
	I1102 14:16:41.325259  499526 main.go:143] libmachine: Parsing certificate...
	I1102 14:16:41.325613  499526 cli_runner.go:164] Run: docker network inspect newest-cni-352233 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1102 14:16:41.348558  499526 cli_runner.go:211] docker network inspect newest-cni-352233 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1102 14:16:41.348640  499526 network_create.go:284] running [docker network inspect newest-cni-352233] to gather additional debugging logs...
	I1102 14:16:41.348656  499526 cli_runner.go:164] Run: docker network inspect newest-cni-352233
	W1102 14:16:41.373734  499526 cli_runner.go:211] docker network inspect newest-cni-352233 returned with exit code 1
	I1102 14:16:41.373761  499526 network_create.go:287] error running [docker network inspect newest-cni-352233]: docker network inspect newest-cni-352233: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-352233 not found
	I1102 14:16:41.373773  499526 network_create.go:289] output of [docker network inspect newest-cni-352233]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-352233 not found
	
	** /stderr **
	I1102 14:16:41.373866  499526 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1102 14:16:41.401459  499526 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-ddf319108ac9 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:66:f7:2d:49:67:ff} reservation:<nil>}
	I1102 14:16:41.401834  499526 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-30b945568040 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:b6:b2:b0:cb:49:d7} reservation:<nil>}
	I1102 14:16:41.402088  499526 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-d23a3a2e266d IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:fa:42:95:8e:ae:52} reservation:<nil>}
	I1102 14:16:41.402386  499526 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-eb820b490718 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:26:e9:01:37:44:3a} reservation:<nil>}
	I1102 14:16:41.402886  499526 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a38ab0}
	I1102 14:16:41.402907  499526 network_create.go:124] attempt to create docker network newest-cni-352233 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1102 14:16:41.402982  499526 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-352233 newest-cni-352233
	I1102 14:16:41.466239  499526 network_create.go:108] docker network newest-cni-352233 192.168.85.0/24 created
	I1102 14:16:41.466272  499526 kic.go:121] calculated static IP "192.168.85.2" for the "newest-cni-352233" container
	I1102 14:16:41.466373  499526 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1102 14:16:41.516104  499526 cli_runner.go:164] Run: docker volume create newest-cni-352233 --label name.minikube.sigs.k8s.io=newest-cni-352233 --label created_by.minikube.sigs.k8s.io=true
	I1102 14:16:41.540089  499526 oci.go:103] Successfully created a docker volume newest-cni-352233
	I1102 14:16:41.540171  499526 cli_runner.go:164] Run: docker run --rm --name newest-cni-352233-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-352233 --entrypoint /usr/bin/test -v newest-cni-352233:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1102 14:16:42.283716  499526 oci.go:107] Successfully prepared a docker volume newest-cni-352233
	I1102 14:16:42.283765  499526 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1102 14:16:42.283786  499526 kic.go:194] Starting extracting preloaded images to volume ...
	I1102 14:16:42.283871  499526 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21808-293314/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-352233:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
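
The docker run above is the preload step: a throwaway container bind-mounts the lz4 tarball read-only and untars it into the named volume that later becomes the node's /var, so CRI-O starts with all images already on disk. A self-contained sketch of the same invocation (image digest trimmed for brevity; paths are the ones from the log):

    // Extract the preloaded-images tarball into the node volume via a
    // disposable container, mirroring the `docker run --rm --entrypoint
    // /usr/bin/tar ...` line above.
    package main

    import (
    	"log"
    	"os/exec"
    )

    func main() {
    	tarball := "/home/jenkins/minikube-integration/21808-293314/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4"
    	cmd := exec.Command("docker", "run", "--rm",
    		"--entrypoint", "/usr/bin/tar",
    		"-v", tarball+":/preloaded.tar:ro",
    		"-v", "newest-cni-352233:/extractDir",
    		"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773",
    		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
    	if out, err := cmd.CombinedOutput(); err != nil {
    		log.Fatalf("extract failed: %v\n%s", err, out)
    	}
    }
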
	I1102 14:16:41.880698  496485 addons.go:515] duration metric: took 10.717327847s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1102 14:16:41.882684  496485 api_server.go:279] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1102 14:16:41.882705  496485 api_server.go:103] status: https://192.168.76.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1102 14:16:42.370362  496485 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1102 14:16:42.383548  496485 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1102 14:16:42.385329  496485 api_server.go:141] control plane version: v1.34.1
	I1102 14:16:42.385354  496485 api_server.go:131] duration metric: took 515.792503ms to wait for apiserver health ...
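
The two 500 dumps followed by "returned 200: ok" are one poll loop: GET /healthz until the [-] checks (here poststarthook/rbac/bootstrap-roles, which clears on its own once bootstrap roles are written) disappear. A minimal sketch of such a loop, assuming a self-signed apiserver cert, hence InsecureSkipVerify; minikube itself trusts the cluster CA instead:

    // waitHealthy polls the apiserver /healthz endpoint until it answers 200
    // or the deadline passes, mirroring the retry visible in the log above.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func waitHealthy(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout:   2 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil // "healthz returned 200: ok"
    			}
    			// 500 bodies enumerate [+]/[-] checks, as dumped above.
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver not healthy after %s", timeout)
    }

    func main() {
    	fmt.Println(waitHealthy("https://192.168.76.2:8444/healthz", time.Minute))
    }
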
	I1102 14:16:42.385364  496485 system_pods.go:43] waiting for kube-system pods to appear ...
	I1102 14:16:42.389641  496485 system_pods.go:59] 8 kube-system pods found
	I1102 14:16:42.390772  496485 system_pods.go:61] "coredns-66bc5c9577-lwp97" [cd5d24d1-8139-448c-9016-c89db9315328] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1102 14:16:42.390807  496485 system_pods.go:61] "etcd-default-k8s-diff-port-786183" [20f2055e-9a44-4af9-ac93-0de08a0929dd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1102 14:16:42.390836  496485 system_pods.go:61] "kindnet-pd47j" [2faa4679-6556-4e51-a2a3-88275ddc1fff] Running
	I1102 14:16:42.390871  496485 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-786183" [3820bf6d-1505-48a7-b001-f8d7a0b87b6a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1102 14:16:42.390898  496485 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-786183" [cfc3d639-0d89-4860-9f16-633cf0079a2b] Running
	I1102 14:16:42.390932  496485 system_pods.go:61] "kube-proxy-jlf8q" [ffabcc04-6bec-42eb-a759-aeea07668e18] Running
	I1102 14:16:42.390957  496485 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-786183" [a75b2a9e-b854-4624-ac22-8fb38c2173dc] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1102 14:16:42.390979  496485 system_pods.go:61] "storage-provisioner" [d79c0f13-8bac-4de0-9847-059f608dbabb] Running
	I1102 14:16:42.391007  496485 system_pods.go:74] duration metric: took 5.636422ms to wait for pod list to return data ...
	I1102 14:16:42.391031  496485 default_sa.go:34] waiting for default service account to be created ...
	I1102 14:16:42.395134  496485 default_sa.go:45] found service account: "default"
	I1102 14:16:42.395157  496485 default_sa.go:55] duration metric: took 4.104608ms for default service account to be created ...
	I1102 14:16:42.395167  496485 system_pods.go:116] waiting for k8s-apps to be running ...
	I1102 14:16:42.403149  496485 system_pods.go:86] 8 kube-system pods found
	I1102 14:16:42.403186  496485 system_pods.go:89] "coredns-66bc5c9577-lwp97" [cd5d24d1-8139-448c-9016-c89db9315328] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1102 14:16:42.403196  496485 system_pods.go:89] "etcd-default-k8s-diff-port-786183" [20f2055e-9a44-4af9-ac93-0de08a0929dd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1102 14:16:42.403230  496485 system_pods.go:89] "kindnet-pd47j" [2faa4679-6556-4e51-a2a3-88275ddc1fff] Running
	I1102 14:16:42.403250  496485 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-786183" [3820bf6d-1505-48a7-b001-f8d7a0b87b6a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1102 14:16:42.403257  496485 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-786183" [cfc3d639-0d89-4860-9f16-633cf0079a2b] Running
	I1102 14:16:42.403267  496485 system_pods.go:89] "kube-proxy-jlf8q" [ffabcc04-6bec-42eb-a759-aeea07668e18] Running
	I1102 14:16:42.403273  496485 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-786183" [a75b2a9e-b854-4624-ac22-8fb38c2173dc] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1102 14:16:42.403284  496485 system_pods.go:89] "storage-provisioner" [d79c0f13-8bac-4de0-9847-059f608dbabb] Running
	I1102 14:16:42.403317  496485 system_pods.go:126] duration metric: took 8.143386ms to wait for k8s-apps to be running ...
	I1102 14:16:42.403338  496485 system_svc.go:44] waiting for kubelet service to be running ....
	I1102 14:16:42.403419  496485 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1102 14:16:42.418987  496485 system_svc.go:56] duration metric: took 15.638314ms WaitForService to wait for kubelet
	I1102 14:16:42.419019  496485 kubeadm.go:587] duration metric: took 11.255718497s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1102 14:16:42.419038  496485 node_conditions.go:102] verifying NodePressure condition ...
	I1102 14:16:42.423091  496485 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1102 14:16:42.423127  496485 node_conditions.go:123] node cpu capacity is 2
	I1102 14:16:42.423142  496485 node_conditions.go:105] duration metric: took 4.062532ms to run NodePressure ...
	I1102 14:16:42.423175  496485 start.go:242] waiting for startup goroutines ...
	I1102 14:16:42.423193  496485 start.go:247] waiting for cluster config update ...
	I1102 14:16:42.423206  496485 start.go:256] writing updated cluster config ...
	I1102 14:16:42.423528  496485 ssh_runner.go:195] Run: rm -f paused
	I1102 14:16:42.435270  496485 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1102 14:16:42.439164  496485 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-lwp97" in "kube-system" namespace to be "Ready" or be gone ...
	W1102 14:16:44.447960  496485 pod_ready.go:104] pod "coredns-66bc5c9577-lwp97" is not "Ready", error: <nil>
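
pod_ready.go's "Ready or be gone" wait treats a deleted pod as success, since a pod that no longer exists cannot block readiness; the W lines above are the poll reporting coredns as not yet Ready. A hedged client-go sketch of that check (kubeconfig path and poll interval are assumptions for the demo):

    // podReadyOrGone reports done when the pod has condition Ready=True or
    // has been deleted, matching the wait semantics logged above.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	"k8s.io/apimachinery/pkg/api/errors"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func podReadyOrGone(cs *kubernetes.Clientset, ns, name string) (bool, error) {
    	pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
    	if errors.IsNotFound(err) {
    		return true, nil // gone counts as done
    	}
    	if err != nil {
    		return false, err
    	}
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
    			return true, nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	for {
    		done, err := podReadyOrGone(cs, "kube-system", "coredns-66bc5c9577-lwp97")
    		fmt.Println(done, err)
    		if done || err != nil {
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    }
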
	I1102 14:16:46.912703  499526 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21808-293314/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v newest-cni-352233:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.62879636s)
	I1102 14:16:46.912732  499526 kic.go:203] duration metric: took 4.628943158s to extract preloaded images to volume ...
	W1102 14:16:46.912878  499526 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1102 14:16:46.912985  499526 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1102 14:16:47.024738  499526 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-352233 --name newest-cni-352233 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-352233 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-352233 --network newest-cni-352233 --ip 192.168.85.2 --volume newest-cni-352233:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1102 14:16:47.448990  499526 cli_runner.go:164] Run: docker container inspect newest-cni-352233 --format={{.State.Running}}
	I1102 14:16:47.468696  499526 cli_runner.go:164] Run: docker container inspect newest-cni-352233 --format={{.State.Status}}
	I1102 14:16:47.496072  499526 cli_runner.go:164] Run: docker exec newest-cni-352233 stat /var/lib/dpkg/alternatives/iptables
	I1102 14:16:47.552645  499526 oci.go:144] the created container "newest-cni-352233" has a running status.
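
Between `docker run -d` and "has a running status", the container's state is confirmed via `docker container inspect --format={{.State.Running}}`, as the two inspect lines above show. Roughly, as a polling helper:

    // waitRunning polls `docker container inspect` until the container
    // reports State.Running=true or the timeout elapses.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    func waitRunning(name string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		out, err := exec.Command("docker", "container", "inspect",
    			"--format", "{{.State.Running}}", name).Output()
    		if err == nil && strings.TrimSpace(string(out)) == "true" {
    			return nil
    		}
    		time.Sleep(200 * time.Millisecond)
    	}
    	return fmt.Errorf("container %s not running after %s", name, timeout)
    }

    func main() {
    	fmt.Println(waitRunning("newest-cni-352233", 30*time.Second))
    }
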
	I1102 14:16:47.552679  499526 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21808-293314/.minikube/machines/newest-cni-352233/id_rsa...
	I1102 14:16:48.318818  499526 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21808-293314/.minikube/machines/newest-cni-352233/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1102 14:16:48.342906  499526 cli_runner.go:164] Run: docker container inspect newest-cni-352233 --format={{.State.Status}}
	I1102 14:16:48.364593  499526 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1102 14:16:48.364612  499526 kic_runner.go:114] Args: [docker exec --privileged newest-cni-352233 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1102 14:16:48.422150  499526 cli_runner.go:164] Run: docker container inspect newest-cni-352233 --format={{.State.Status}}
	I1102 14:16:48.448804  499526 machine.go:94] provisionDockerMachine start ...
	I1102 14:16:48.448893  499526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-352233
	I1102 14:16:48.471970  499526 main.go:143] libmachine: Using SSH client type: native
	I1102 14:16:48.472304  499526 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33461 <nil> <nil>}
	I1102 14:16:48.472314  499526 main.go:143] libmachine: About to run SSH command:
	hostname
	I1102 14:16:48.473062  499526 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34788->127.0.0.1:33461: read: connection reset by peer
	W1102 14:16:46.966029  496485 pod_ready.go:104] pod "coredns-66bc5c9577-lwp97" is not "Ready", error: <nil>
	W1102 14:16:49.446912  496485 pod_ready.go:104] pod "coredns-66bc5c9577-lwp97" is not "Ready", error: <nil>
	I1102 14:16:51.642482  499526 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-352233
	
	I1102 14:16:51.642509  499526 ubuntu.go:182] provisioning hostname "newest-cni-352233"
	I1102 14:16:51.642575  499526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-352233
	I1102 14:16:51.672710  499526 main.go:143] libmachine: Using SSH client type: native
	I1102 14:16:51.673027  499526 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33461 <nil> <nil>}
	I1102 14:16:51.673045  499526 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-352233 && echo "newest-cni-352233" | sudo tee /etc/hostname
	I1102 14:16:51.854232  499526 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-352233
	
	I1102 14:16:51.854312  499526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-352233
	I1102 14:16:51.873910  499526 main.go:143] libmachine: Using SSH client type: native
	I1102 14:16:51.874220  499526 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33461 <nil> <nil>}
	I1102 14:16:51.874237  499526 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-352233' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-352233/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-352233' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1102 14:16:52.032110  499526 main.go:143] libmachine: SSH cmd err, output: <nil>: 
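
The three SSH commands above (hostname, the sudo hostname/tee pair, and the /etc/hosts heredoc) run over the host port docker published for the container's port 22, here 127.0.0.1:33461; note the first dial was retried after a connection reset while sshd came up. A stripped-down sketch with golang.org/x/crypto/ssh, using the key and port from this log (host-key checking is skipped, which only makes sense in a throwaway test rig):

    // Run one command over the forwarded SSH port, the way the
    // provisionDockerMachine steps above do.
    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/21808-293314/.minikube/machines/newest-cni-352233/id_rsa")
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(keyBytes)
    	if err != nil {
    		panic(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test rig only
    	}
    	client, err := ssh.Dial("tcp", "127.0.0.1:33461", cfg)
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()
    	sess, err := client.NewSession()
    	if err != nil {
    		panic(err)
    	}
    	defer sess.Close()
    	out, err := sess.CombinedOutput("hostname")
    	fmt.Println(string(out), err)
    }
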
	I1102 14:16:52.032135  499526 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21808-293314/.minikube CaCertPath:/home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21808-293314/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21808-293314/.minikube}
	I1102 14:16:52.032154  499526 ubuntu.go:190] setting up certificates
	I1102 14:16:52.032164  499526 provision.go:84] configureAuth start
	I1102 14:16:52.032239  499526 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-352233
	I1102 14:16:52.058486  499526 provision.go:143] copyHostCerts
	I1102 14:16:52.058556  499526 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-293314/.minikube/ca.pem, removing ...
	I1102 14:16:52.058575  499526 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-293314/.minikube/ca.pem
	I1102 14:16:52.058670  499526 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21808-293314/.minikube/ca.pem (1082 bytes)
	I1102 14:16:52.058771  499526 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-293314/.minikube/cert.pem, removing ...
	I1102 14:16:52.058782  499526 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-293314/.minikube/cert.pem
	I1102 14:16:52.058811  499526 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21808-293314/.minikube/cert.pem (1123 bytes)
	I1102 14:16:52.058871  499526 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-293314/.minikube/key.pem, removing ...
	I1102 14:16:52.058878  499526 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-293314/.minikube/key.pem
	I1102 14:16:52.058902  499526 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21808-293314/.minikube/key.pem (1675 bytes)
	I1102 14:16:52.058954  499526 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21808-293314/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca-key.pem org=jenkins.newest-cni-352233 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-352233]
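
provision.go generates a server certificate whose SANs cover every name the machine may be reached by: 127.0.0.1, the static container IP, localhost, minikube, and the profile name. A self-contained sketch of building such a SAN list with crypto/x509; it self-signs rather than signing with the minikube CA, purely to stay runnable:

    // Emit a server cert carrying the SAN set from the provision line above.
    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-352233"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration in the log
    		DNSNames:     []string{"localhost", "minikube", "newest-cni-352233"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
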
	I1102 14:16:52.959467  499526 provision.go:177] copyRemoteCerts
	I1102 14:16:52.959544  499526 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1102 14:16:52.959592  499526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-352233
	I1102 14:16:52.977134  499526 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33461 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/newest-cni-352233/id_rsa Username:docker}
	I1102 14:16:53.083876  499526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1102 14:16:53.116849  499526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1102 14:16:53.150095  499526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1102 14:16:53.183455  499526 provision.go:87] duration metric: took 1.151278345s to configureAuth
	I1102 14:16:53.183479  499526 ubuntu.go:206] setting minikube options for container-runtime
	I1102 14:16:53.183665  499526 config.go:182] Loaded profile config "newest-cni-352233": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 14:16:53.183765  499526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-352233
	I1102 14:16:53.205534  499526 main.go:143] libmachine: Using SSH client type: native
	I1102 14:16:53.206066  499526 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33461 <nil> <nil>}
	I1102 14:16:53.206088  499526 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1102 14:16:53.543325  499526 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1102 14:16:53.543351  499526 machine.go:97] duration metric: took 5.094528046s to provisionDockerMachine
	I1102 14:16:53.543362  499526 client.go:176] duration metric: took 12.218306373s to LocalClient.Create
	I1102 14:16:53.543376  499526 start.go:167] duration metric: took 12.218359593s to libmachine.API.Create "newest-cni-352233"
	I1102 14:16:53.543383  499526 start.go:293] postStartSetup for "newest-cni-352233" (driver="docker")
	I1102 14:16:53.543394  499526 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1102 14:16:53.543460  499526 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1102 14:16:53.543505  499526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-352233
	I1102 14:16:53.579475  499526 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33461 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/newest-cni-352233/id_rsa Username:docker}
	I1102 14:16:53.691661  499526 ssh_runner.go:195] Run: cat /etc/os-release
	I1102 14:16:53.695541  499526 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1102 14:16:53.695625  499526 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1102 14:16:53.695654  499526 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-293314/.minikube/addons for local assets ...
	I1102 14:16:53.695743  499526 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-293314/.minikube/files for local assets ...
	I1102 14:16:53.695870  499526 filesync.go:149] local asset: /home/jenkins/minikube-integration/21808-293314/.minikube/files/etc/ssl/certs/2951742.pem -> 2951742.pem in /etc/ssl/certs
	I1102 14:16:53.696020  499526 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1102 14:16:53.704784  499526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/files/etc/ssl/certs/2951742.pem --> /etc/ssl/certs/2951742.pem (1708 bytes)
	I1102 14:16:53.724103  499526 start.go:296] duration metric: took 180.688002ms for postStartSetup
	I1102 14:16:53.724527  499526 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-352233
	I1102 14:16:53.750968  499526 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/newest-cni-352233/config.json ...
	I1102 14:16:53.751256  499526 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1102 14:16:53.751300  499526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-352233
	I1102 14:16:53.772150  499526 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33461 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/newest-cni-352233/id_rsa Username:docker}
	I1102 14:16:53.875943  499526 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1102 14:16:53.881909  499526 start.go:128] duration metric: took 12.560462241s to createHost
	I1102 14:16:53.881938  499526 start.go:83] releasing machines lock for "newest-cni-352233", held for 12.560591046s
	I1102 14:16:53.882019  499526 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-352233
	I1102 14:16:53.912265  499526 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/295174.pem (1338 bytes)
	W1102 14:16:53.912320  499526 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-293314/.minikube/certs/295174_empty.pem, impossibly tiny 0 bytes
	I1102 14:16:53.912329  499526 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca-key.pem (1679 bytes)
	I1102 14:16:53.912352  499526 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem (1082 bytes)
	I1102 14:16:53.912374  499526 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/cert.pem (1123 bytes)
	I1102 14:16:53.912401  499526 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/key.pem (1675 bytes)
	I1102 14:16:53.912446  499526 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/files/etc/ssl/certs/2951742.pem (1708 bytes)
	I1102 14:16:53.912506  499526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/files/etc/ssl/certs/2951742.pem --> /usr/share/ca-certificates/2951742.pem (1708 bytes)
	I1102 14:16:53.912574  499526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-352233
	I1102 14:16:53.931335  499526 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33461 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/newest-cni-352233/id_rsa Username:docker}
	I1102 14:16:54.055767  499526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1102 14:16:54.076612  499526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/certs/295174.pem --> /usr/share/ca-certificates/295174.pem (1338 bytes)
	I1102 14:16:54.097274  499526 ssh_runner.go:195] Run: openssl version
	I1102 14:16:54.104469  499526 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2951742.pem && ln -fs /usr/share/ca-certificates/2951742.pem /etc/ssl/certs/2951742.pem"
	I1102 14:16:54.114511  499526 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2951742.pem
	I1102 14:16:54.119123  499526 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  2 13:20 /usr/share/ca-certificates/2951742.pem
	I1102 14:16:54.119188  499526 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2951742.pem
	I1102 14:16:54.162848  499526 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2951742.pem /etc/ssl/certs/3ec20f2e.0"
	I1102 14:16:54.173026  499526 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1102 14:16:54.183418  499526 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1102 14:16:54.188102  499526 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  2 13:13 /usr/share/ca-certificates/minikubeCA.pem
	I1102 14:16:54.188171  499526 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1102 14:16:54.231862  499526 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1102 14:16:54.241458  499526 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/295174.pem && ln -fs /usr/share/ca-certificates/295174.pem /etc/ssl/certs/295174.pem"
	I1102 14:16:54.250641  499526 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/295174.pem
	I1102 14:16:54.255428  499526 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  2 13:20 /usr/share/ca-certificates/295174.pem
	I1102 14:16:54.255502  499526 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/295174.pem
	I1102 14:16:54.299759  499526 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/295174.pem /etc/ssl/certs/51391683.0"
	I1102 14:16:54.314673  499526 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-certificates >/dev/null 2>&1 && sudo update-ca-certificates || true"
	I1102 14:16:54.322965  499526 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-trust >/dev/null 2>&1 && sudo update-ca-trust extract || true"
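
The certificate install pattern above repeats three times: copy the PEM into /usr/share/ca-certificates, compute its OpenSSL subject hash, and link it as /etc/ssl/certs/<hash>.0 so CApath lookups resolve (b5213941.0 for minikubeCA.pem here). A sketch of the hash-and-link step, shelling out to openssl exactly as the log does:

    // linkCert hashes a PEM with `openssl x509 -hash -noout` and symlinks it
    // into /etc/ssl/certs as <hash>.0, the `ln -fs` step above.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func linkCert(pemPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return err
    	}
    	link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
    	os.Remove(link) // replace any stale link, mirroring `ln -fs`
    	return os.Symlink(pemPath, link)
    }

    func main() {
    	fmt.Println(linkCert("/usr/share/ca-certificates/minikubeCA.pem"))
    }
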
	I1102 14:16:54.327284  499526 ssh_runner.go:195] Run: cat /version.json
	I1102 14:16:54.327357  499526 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1102 14:16:54.428656  499526 ssh_runner.go:195] Run: systemctl --version
	I1102 14:16:54.436508  499526 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1102 14:16:54.488583  499526 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1102 14:16:54.495413  499526 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1102 14:16:54.495490  499526 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1102 14:16:54.541436  499526 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1102 14:16:54.541464  499526 start.go:496] detecting cgroup driver to use...
	I1102 14:16:54.541513  499526 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1102 14:16:54.541577  499526 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1102 14:16:54.560202  499526 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1102 14:16:54.576006  499526 docker.go:218] disabling cri-docker service (if available) ...
	I1102 14:16:54.576079  499526 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1102 14:16:54.594771  499526 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1102 14:16:54.614230  499526 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1102 14:16:54.779835  499526 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1102 14:16:54.975076  499526 docker.go:234] disabling docker service ...
	I1102 14:16:54.975145  499526 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1102 14:16:55.013160  499526 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1102 14:16:55.038443  499526 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1102 14:16:55.217408  499526 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1102 14:16:55.377445  499526 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1102 14:16:55.393059  499526 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1102 14:16:55.408672  499526 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1102 14:16:55.408797  499526 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 14:16:55.420624  499526 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1102 14:16:55.420750  499526 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 14:16:55.429875  499526 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 14:16:55.438933  499526 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 14:16:55.451344  499526 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1102 14:16:55.460857  499526 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 14:16:55.470057  499526 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 14:16:55.483987  499526 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 14:16:55.494413  499526 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1102 14:16:55.503935  499526 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1102 14:16:55.513519  499526 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 14:16:55.663968  499526 ssh_runner.go:195] Run: sudo systemctl restart crio
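
The crio.go sed edits above rewrite two keys in /etc/crio/crio.conf.d/02-crio.conf, pause_image and cgroup_manager, before crio is restarted over the new config. An in-process stand-in for those substitutions, operating on a string rather than the file; like the sed patterns, it also rewrites a commented-out key:

    // Apply the pause_image and cgroup_manager rewrites from the log above.
    package main

    import (
    	"fmt"
    	"regexp"
    )

    func main() {
    	conf := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"systemd\"\n"
    	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
    	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
    	fmt.Print(conf) // would be written back to /etc/crio/crio.conf.d/02-crio.conf
    }
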
	I1102 14:16:56.235030  499526 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1102 14:16:56.235191  499526 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1102 14:16:56.243327  499526 start.go:564] Will wait 60s for crictl version
	I1102 14:16:56.243441  499526 ssh_runner.go:195] Run: which crictl
	I1102 14:16:56.247110  499526 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1102 14:16:56.291375  499526 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1102 14:16:56.291462  499526 ssh_runner.go:195] Run: crio --version
	I1102 14:16:56.343940  499526 ssh_runner.go:195] Run: crio --version
	I1102 14:16:56.394018  499526 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1102 14:16:56.396960  499526 cli_runner.go:164] Run: docker network inspect newest-cni-352233 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1102 14:16:56.417129  499526 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1102 14:16:56.421011  499526 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
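
The bash one-liner above pins host.minikube.internal to the network gateway: strip any existing entry, append the new one, and copy the result back over /etc/hosts. The same transformation on a string:

    // pinHost drops any line ending in "\t<name>" and appends "<ip>\t<name>",
    // matching the grep -v / echo pipeline in the log above.
    package main

    import (
    	"fmt"
    	"strings"
    )

    func pinHost(hosts, ip, name string) string {
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
    		if !strings.HasSuffix(line, "\t"+name) {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, ip+"\t"+name)
    	return strings.Join(kept, "\n") + "\n"
    }

    func main() {
    	fmt.Print(pinHost("127.0.0.1\tlocalhost\n", "192.168.85.1", "host.minikube.internal"))
    }
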
	I1102 14:16:56.438514  499526 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1102 14:16:51.949357  496485 pod_ready.go:104] pod "coredns-66bc5c9577-lwp97" is not "Ready", error: <nil>
	W1102 14:16:53.952475  496485 pod_ready.go:104] pod "coredns-66bc5c9577-lwp97" is not "Ready", error: <nil>
	W1102 14:16:56.451116  496485 pod_ready.go:104] pod "coredns-66bc5c9577-lwp97" is not "Ready", error: <nil>
	I1102 14:16:56.441524  499526 kubeadm.go:884] updating cluster {Name:newest-cni-352233 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-352233 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...

	I1102 14:16:56.441662  499526 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1102 14:16:56.441736  499526 ssh_runner.go:195] Run: sudo crictl images --output json
	I1102 14:16:56.494082  499526 crio.go:514] all images are preloaded for cri-o runtime.
	I1102 14:16:56.494102  499526 crio.go:433] Images already preloaded, skipping extraction
	I1102 14:16:56.494161  499526 ssh_runner.go:195] Run: sudo crictl images --output json
	I1102 14:16:56.534335  499526 crio.go:514] all images are preloaded for cri-o runtime.
	I1102 14:16:56.534355  499526 cache_images.go:86] Images are preloaded, skipping loading
	I1102 14:16:56.534363  499526 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1102 14:16:56.534474  499526 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-352233 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-352233 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1102 14:16:56.534555  499526 ssh_runner.go:195] Run: crio config
	I1102 14:16:56.656929  499526 cni.go:84] Creating CNI manager for ""
	I1102 14:16:56.656954  499526 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1102 14:16:56.656973  499526 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1102 14:16:56.656996  499526 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-352233 NodeName:newest-cni-352233 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1102 14:16:56.657164  499526 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-352233"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1102 14:16:56.657251  499526 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1102 14:16:56.665624  499526 binaries.go:44] Found k8s binaries, skipping transfer
	I1102 14:16:56.665694  499526 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1102 14:16:56.673905  499526 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1102 14:16:56.688496  499526 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1102 14:16:56.701624  499526 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1102 14:16:56.714546  499526 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1102 14:16:56.718807  499526 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1102 14:16:56.728714  499526 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 14:16:56.878942  499526 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1102 14:16:56.896547  499526 certs.go:69] Setting up /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/newest-cni-352233 for IP: 192.168.85.2
	I1102 14:16:56.896569  499526 certs.go:195] generating shared ca certs ...
	I1102 14:16:56.896585  499526 certs.go:227] acquiring lock for ca certs: {Name:mkead50075949a3cdc798f9c0149a2bc2638cbbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 14:16:56.896719  499526 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21808-293314/.minikube/ca.key
	I1102 14:16:56.896769  499526 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21808-293314/.minikube/proxy-client-ca.key
	I1102 14:16:56.896781  499526 certs.go:257] generating profile certs ...
	I1102 14:16:56.896839  499526 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/newest-cni-352233/client.key
	I1102 14:16:56.896855  499526 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/newest-cni-352233/client.crt with IP's: []
	I1102 14:16:57.764093  499526 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/newest-cni-352233/client.crt ...
	I1102 14:16:57.764125  499526 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/newest-cni-352233/client.crt: {Name:mk2a0508bdc1bb3daa8f78fb2d3d2dfd182f7c1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 14:16:57.764316  499526 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/newest-cni-352233/client.key ...
	I1102 14:16:57.764329  499526 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/newest-cni-352233/client.key: {Name:mk852f4d5e484409faebfd8d6b234a989f3c5992 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 14:16:57.764430  499526 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/newest-cni-352233/apiserver.key.593011b6
	I1102 14:16:57.764448  499526 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/newest-cni-352233/apiserver.crt.593011b6 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1102 14:16:58.008996  499526 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/newest-cni-352233/apiserver.crt.593011b6 ...
	I1102 14:16:58.009036  499526 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/newest-cni-352233/apiserver.crt.593011b6: {Name:mk9924cbc8f66df667e92aa65c3ae2c9af0544ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 14:16:58.009231  499526 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/newest-cni-352233/apiserver.key.593011b6 ...
	I1102 14:16:58.009260  499526 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/newest-cni-352233/apiserver.key.593011b6: {Name:mk4695a45c4a7d27a921fb6a2f5b579018963cef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 14:16:58.009352  499526 certs.go:382] copying /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/newest-cni-352233/apiserver.crt.593011b6 -> /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/newest-cni-352233/apiserver.crt
	I1102 14:16:58.009464  499526 certs.go:386] copying /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/newest-cni-352233/apiserver.key.593011b6 -> /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/newest-cni-352233/apiserver.key
	I1102 14:16:58.009534  499526 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/newest-cni-352233/proxy-client.key
	I1102 14:16:58.009558  499526 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/newest-cni-352233/proxy-client.crt with IP's: []
	I1102 14:16:58.622309  499526 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/newest-cni-352233/proxy-client.crt ...
	I1102 14:16:58.622342  499526 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/newest-cni-352233/proxy-client.crt: {Name:mkb58c516be28a4bcec15a9ad1c0a781025f5458 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 14:16:58.622537  499526 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/newest-cni-352233/proxy-client.key ...
	I1102 14:16:58.622556  499526 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/newest-cni-352233/proxy-client.key: {Name:mkd73b1485d39edf0d94727aa1febcaa35a92be7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 14:16:58.622759  499526 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/295174.pem (1338 bytes)
	W1102 14:16:58.622804  499526 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-293314/.minikube/certs/295174_empty.pem, impossibly tiny 0 bytes
	I1102 14:16:58.622818  499526 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca-key.pem (1679 bytes)
	I1102 14:16:58.622843  499526 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem (1082 bytes)
	I1102 14:16:58.622868  499526 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/cert.pem (1123 bytes)
	I1102 14:16:58.622892  499526 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/key.pem (1675 bytes)
	I1102 14:16:58.622938  499526 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/files/etc/ssl/certs/2951742.pem (1708 bytes)
	I1102 14:16:58.623498  499526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1102 14:16:58.644199  499526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1102 14:16:58.663340  499526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1102 14:16:58.682337  499526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1102 14:16:58.700805  499526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/newest-cni-352233/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1102 14:16:58.724818  499526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/newest-cni-352233/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1102 14:16:58.747485  499526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/newest-cni-352233/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1102 14:16:58.766729  499526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/newest-cni-352233/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1102 14:16:58.787322  499526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/files/etc/ssl/certs/2951742.pem --> /usr/share/ca-certificates/2951742.pem (1708 bytes)
	I1102 14:16:58.808329  499526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1102 14:16:58.829051  499526 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/certs/295174.pem --> /usr/share/ca-certificates/295174.pem (1338 bytes)
	I1102 14:16:58.852596  499526 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1102 14:16:58.867643  499526 ssh_runner.go:195] Run: openssl version
	I1102 14:16:58.874892  499526 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/295174.pem && ln -fs /usr/share/ca-certificates/295174.pem /etc/ssl/certs/295174.pem"
	I1102 14:16:58.884391  499526 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/295174.pem
	I1102 14:16:58.888800  499526 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  2 13:20 /usr/share/ca-certificates/295174.pem
	I1102 14:16:58.888883  499526 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/295174.pem
	I1102 14:16:58.959994  499526 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/295174.pem /etc/ssl/certs/51391683.0"
	I1102 14:16:58.969218  499526 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2951742.pem && ln -fs /usr/share/ca-certificates/2951742.pem /etc/ssl/certs/2951742.pem"
	I1102 14:16:58.979764  499526 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2951742.pem
	I1102 14:16:58.984759  499526 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  2 13:20 /usr/share/ca-certificates/2951742.pem
	I1102 14:16:58.984824  499526 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2951742.pem
	I1102 14:16:59.027075  499526 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2951742.pem /etc/ssl/certs/3ec20f2e.0"
	I1102 14:16:59.035377  499526 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1102 14:16:59.044179  499526 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1102 14:16:59.048269  499526 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  2 13:13 /usr/share/ca-certificates/minikubeCA.pem
	I1102 14:16:59.048343  499526 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1102 14:16:59.090149  499526 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
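
The symlink names in the commands above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject-hash values: /etc/ssl/certs is indexed c_rehash-style, so every CA must also be reachable as <subject-hash>.0. A minimal sketch of the two-step dance for one of these certs, using only paths that appear in the log:

    # Compute the hash OpenSSL uses to look up certificates in the hashed directory ...
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    # ... then publish the CA under that name so TLS clients scanning
    # /etc/ssl/certs can find it without a full c_rehash run.
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
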
	I1102 14:16:59.098482  499526 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1102 14:16:59.102100  499526 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1102 14:16:59.102193  499526 kubeadm.go:401] StartCluster: {Name:newest-cni-352233 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-352233 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1102 14:16:59.102282  499526 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1102 14:16:59.102339  499526 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1102 14:16:59.128787  499526 cri.go:89] found id: ""
	I1102 14:16:59.128876  499526 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1102 14:16:59.136740  499526 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1102 14:16:59.145544  499526 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1102 14:16:59.145638  499526 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1102 14:16:59.154901  499526 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1102 14:16:59.154918  499526 kubeadm.go:158] found existing configuration files:
	
	I1102 14:16:59.154971  499526 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1102 14:16:59.164440  499526 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1102 14:16:59.164505  499526 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1102 14:16:59.172210  499526 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1102 14:16:59.180123  499526 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1102 14:16:59.180248  499526 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1102 14:16:59.188037  499526 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1102 14:16:59.195861  499526 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1102 14:16:59.195973  499526 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1102 14:16:59.203320  499526 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1102 14:16:59.211155  499526 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1102 14:16:59.211222  499526 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
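
Each kubeconfig under /etc/kubernetes survives this pass only if it already points at the expected endpoint; here all four greps exit with status 2 (the files do not exist yet), so all four are removed and kubeadm regenerates them below. The cleanup reduces to roughly this loop (file list and URL copied from the log):

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      if ! sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f"; then
        sudo rm -f "/etc/kubernetes/$f"   # stale or absent: safe to let kubeadm init recreate it
      fi
    done
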
	I1102 14:16:59.218658  499526 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1102 14:16:59.264668  499526 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1102 14:16:59.265088  499526 kubeadm.go:319] [preflight] Running pre-flight checks
	I1102 14:16:59.290574  499526 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1102 14:16:59.290758  499526 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1102 14:16:59.290858  499526 kubeadm.go:319] OS: Linux
	I1102 14:16:59.290938  499526 kubeadm.go:319] CGROUPS_CPU: enabled
	I1102 14:16:59.291019  499526 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1102 14:16:59.291102  499526 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1102 14:16:59.291172  499526 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1102 14:16:59.291254  499526 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1102 14:16:59.291332  499526 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1102 14:16:59.291423  499526 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1102 14:16:59.291501  499526 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1102 14:16:59.291582  499526 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1102 14:16:59.385411  499526 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1102 14:16:59.385585  499526 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1102 14:16:59.385712  499526 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1102 14:16:59.395003  499526 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1102 14:16:59.400001  499526 out.go:252]   - Generating certificates and keys ...
	I1102 14:16:59.400198  499526 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1102 14:16:59.400282  499526 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1102 14:16:59.583461  499526 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1102 14:16:59.886585  499526 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	W1102 14:16:58.451301  496485 pod_ready.go:104] pod "coredns-66bc5c9577-lwp97" is not "Ready", error: <nil>
	W1102 14:17:00.946074  496485 pod_ready.go:104] pod "coredns-66bc5c9577-lwp97" is not "Ready", error: <nil>
	I1102 14:17:01.704860  499526 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1102 14:17:02.327479  499526 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1102 14:17:03.136404  499526 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1102 14:17:03.136804  499526 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-352233] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1102 14:17:03.550070  499526 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1102 14:17:03.550751  499526 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-352233] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1102 14:17:03.854381  499526 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1102 14:17:04.548098  499526 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1102 14:17:05.224562  499526 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1102 14:17:05.224843  499526 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	W1102 14:17:02.947256  496485 pod_ready.go:104] pod "coredns-66bc5c9577-lwp97" is not "Ready", error: <nil>
	W1102 14:17:05.446641  496485 pod_ready.go:104] pod "coredns-66bc5c9577-lwp97" is not "Ready", error: <nil>
	I1102 14:17:06.127291  499526 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1102 14:17:06.766870  499526 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1102 14:17:06.912608  499526 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1102 14:17:08.362180  499526 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1102 14:17:08.900879  499526 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1102 14:17:08.900981  499526 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1102 14:17:08.901059  499526 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1102 14:17:08.905177  499526 out.go:252]   - Booting up control plane ...
	I1102 14:17:08.905295  499526 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1102 14:17:08.905388  499526 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1102 14:17:08.905479  499526 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1102 14:17:08.934292  499526 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1102 14:17:08.934434  499526 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1102 14:17:08.942552  499526 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1102 14:17:08.943132  499526 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1102 14:17:08.943187  499526 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1102 14:17:09.083078  499526 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1102 14:17:09.083207  499526 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	W1102 14:17:07.945907  496485 pod_ready.go:104] pod "coredns-66bc5c9577-lwp97" is not "Ready", error: <nil>
	W1102 14:17:10.446100  496485 pod_ready.go:104] pod "coredns-66bc5c9577-lwp97" is not "Ready", error: <nil>
	I1102 14:17:11.083042  499526 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 2.001756916s
	I1102 14:17:11.086408  499526 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1102 14:17:11.086527  499526 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1102 14:17:11.086650  499526 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1102 14:17:11.086740  499526 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1102 14:17:12.944769  496485 pod_ready.go:104] pod "coredns-66bc5c9577-lwp97" is not "Ready", error: <nil>
	W1102 14:17:15.445493  496485 pod_ready.go:104] pod "coredns-66bc5c9577-lwp97" is not "Ready", error: <nil>
	I1102 14:17:16.445050  496485 pod_ready.go:94] pod "coredns-66bc5c9577-lwp97" is "Ready"
	I1102 14:17:16.445076  496485 pod_ready.go:86] duration metric: took 34.005875526s for pod "coredns-66bc5c9577-lwp97" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 14:17:16.451667  496485 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-786183" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 14:17:16.457315  496485 pod_ready.go:94] pod "etcd-default-k8s-diff-port-786183" is "Ready"
	I1102 14:17:16.457341  496485 pod_ready.go:86] duration metric: took 5.650628ms for pod "etcd-default-k8s-diff-port-786183" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 14:17:16.459747  496485 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-786183" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 14:17:16.466536  496485 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-786183" is "Ready"
	I1102 14:17:16.466559  496485 pod_ready.go:86] duration metric: took 6.739197ms for pod "kube-apiserver-default-k8s-diff-port-786183" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 14:17:16.469057  496485 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-786183" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 14:17:16.643357  496485 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-786183" is "Ready"
	I1102 14:17:16.643438  496485 pod_ready.go:86] duration metric: took 174.359196ms for pod "kube-controller-manager-default-k8s-diff-port-786183" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 14:17:16.842298  496485 pod_ready.go:83] waiting for pod "kube-proxy-jlf8q" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 14:17:16.498726  499526 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.412280965s
	I1102 14:17:17.000996  499526 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 5.913907393s
	I1102 14:17:17.590549  499526 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.504081748s
	I1102 14:17:17.611851  499526 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1102 14:17:17.624762  499526 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1102 14:17:17.644630  499526 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1102 14:17:17.644843  499526 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-352233 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1102 14:17:17.658149  499526 kubeadm.go:319] [bootstrap-token] Using token: o0c384.bmibewifw3syocrl
	I1102 14:17:17.242384  496485 pod_ready.go:94] pod "kube-proxy-jlf8q" is "Ready"
	I1102 14:17:17.242425  496485 pod_ready.go:86] duration metric: took 400.045489ms for pod "kube-proxy-jlf8q" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 14:17:17.442840  496485 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-786183" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 14:17:17.842483  496485 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-786183" is "Ready"
	I1102 14:17:17.842514  496485 pod_ready.go:86] duration metric: took 399.647743ms for pod "kube-scheduler-default-k8s-diff-port-786183" in "kube-system" namespace to be "Ready" or be gone ...
	I1102 14:17:17.842527  496485 pod_ready.go:40] duration metric: took 35.407197793s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1102 14:17:17.907130  496485 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1102 14:17:17.917769  496485 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-786183" cluster and "default" namespace by default
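
The pod_ready poller that just finished (process 496485) is minikube's own loop, but the same check can be reproduced with kubectl against the label selectors listed in the summary line above; a hedged one-liner for the CoreDNS wait, assuming the context name matches the profile:

    kubectl --context default-k8s-diff-port-786183 -n kube-system \
      wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m0s
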
	I1102 14:17:17.661111  499526 out.go:252]   - Configuring RBAC rules ...
	I1102 14:17:17.661253  499526 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1102 14:17:17.669009  499526 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1102 14:17:17.694287  499526 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1102 14:17:17.704167  499526 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1102 14:17:17.716653  499526 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1102 14:17:17.722783  499526 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1102 14:17:17.998328  499526 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1102 14:17:18.568973  499526 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1102 14:17:19.000067  499526 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1102 14:17:19.001323  499526 kubeadm.go:319] 
	I1102 14:17:19.001399  499526 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1102 14:17:19.001405  499526 kubeadm.go:319] 
	I1102 14:17:19.001486  499526 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1102 14:17:19.001491  499526 kubeadm.go:319] 
	I1102 14:17:19.001517  499526 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1102 14:17:19.001580  499526 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1102 14:17:19.001632  499526 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1102 14:17:19.001637  499526 kubeadm.go:319] 
	I1102 14:17:19.001694  499526 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1102 14:17:19.001698  499526 kubeadm.go:319] 
	I1102 14:17:19.001748  499526 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1102 14:17:19.001771  499526 kubeadm.go:319] 
	I1102 14:17:19.001826  499526 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1102 14:17:19.001905  499526 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1102 14:17:19.001977  499526 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1102 14:17:19.001981  499526 kubeadm.go:319] 
	I1102 14:17:19.002069  499526 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1102 14:17:19.002149  499526 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1102 14:17:19.002154  499526 kubeadm.go:319] 
	I1102 14:17:19.002242  499526 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token o0c384.bmibewifw3syocrl \
	I1102 14:17:19.002350  499526 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:bd4a1f3bddc85f3fc83315ad33165a30aa1cba7ce55898ef9dad8dcc7e8d0eec \
	I1102 14:17:19.002372  499526 kubeadm.go:319] 	--control-plane 
	I1102 14:17:19.002376  499526 kubeadm.go:319] 
	I1102 14:17:19.002474  499526 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1102 14:17:19.002479  499526 kubeadm.go:319] 
	I1102 14:17:19.002565  499526 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token o0c384.bmibewifw3syocrl \
	I1102 14:17:19.002700  499526 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:bd4a1f3bddc85f3fc83315ad33165a30aa1cba7ce55898ef9dad8dcc7e8d0eec 
	I1102 14:17:19.009520  499526 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1102 14:17:19.009793  499526 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1102 14:17:19.009936  499526 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
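
The --discovery-token-ca-cert-hash in the join commands above is a SHA-256 over the cluster CA's DER-encoded public key, not over the whole certificate. Using the standard kubeadm recipe, it can be recomputed from the CA file (on this cluster the certificateDir is /var/lib/minikube/certs, per the [certs] line earlier):

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
    # expected to print bd4a1f3b..., matching the hash in the join command
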
	I1102 14:17:19.009948  499526 cni.go:84] Creating CNI manager for ""
	I1102 14:17:19.009956  499526 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1102 14:17:19.013099  499526 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1102 14:17:19.015570  499526 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1102 14:17:19.019762  499526 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1102 14:17:19.019784  499526 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1102 14:17:19.034185  499526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1102 14:17:19.370932  499526 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1102 14:17:19.371070  499526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 14:17:19.371140  499526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-352233 minikube.k8s.io/updated_at=2025_11_02T14_17_19_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=170a9221ec214abbddb4c7cdac340516a92b239a minikube.k8s.io/name=newest-cni-352233 minikube.k8s.io/primary=true
	I1102 14:17:19.527552  499526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 14:17:19.527630  499526 ops.go:34] apiserver oom_adj: -16
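
The -16 read back here is the legacy /proc/<pid>/oom_adj view of the apiserver's OOM protection; the kernel derives it from oom_score_adj, which the kubelet sets strongly negative for critical static pods. Checking both knobs by hand (a sketch, mirroring the pgrep pattern used above):

    cat "/proc/$(pgrep -xn kube-apiserver)/oom_adj"        # -16, as logged
    cat "/proc/$(pgrep -xn kube-apiserver)/oom_score_adj"  # the modern value it is derived from
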
	I1102 14:17:20.027772  499526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 14:17:20.528437  499526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 14:17:21.028610  499526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 14:17:21.528492  499526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 14:17:22.027755  499526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 14:17:22.527589  499526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 14:17:23.028160  499526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 14:17:23.528328  499526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 14:17:24.027663  499526 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1102 14:17:24.189724  499526 kubeadm.go:1114] duration metric: took 4.818697661s to wait for elevateKubeSystemPrivileges
	I1102 14:17:24.189750  499526 kubeadm.go:403] duration metric: took 25.087560127s to StartCluster
	I1102 14:17:24.189768  499526 settings.go:142] acquiring lock: {Name:mk95f66b3b15e63f58f8c9085c1ffe67cc396dc1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 14:17:24.189830  499526 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21808-293314/kubeconfig
	I1102 14:17:24.190785  499526 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-293314/kubeconfig: {Name:mke5a65554da8fc0fd6a2ea60bed899d5b38ce09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 14:17:24.191027  499526 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1102 14:17:24.191034  499526 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1102 14:17:24.191302  499526 config.go:182] Loaded profile config "newest-cni-352233": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 14:17:24.191344  499526 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1102 14:17:24.191401  499526 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-352233"
	I1102 14:17:24.191416  499526 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-352233"
	I1102 14:17:24.191440  499526 host.go:66] Checking if "newest-cni-352233" exists ...
	I1102 14:17:24.191889  499526 cli_runner.go:164] Run: docker container inspect newest-cni-352233 --format={{.State.Status}}
	I1102 14:17:24.192300  499526 addons.go:70] Setting default-storageclass=true in profile "newest-cni-352233"
	I1102 14:17:24.192326  499526 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-352233"
	I1102 14:17:24.192592  499526 cli_runner.go:164] Run: docker container inspect newest-cni-352233 --format={{.State.Status}}
	I1102 14:17:24.194564  499526 out.go:179] * Verifying Kubernetes components...
	I1102 14:17:24.197543  499526 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 14:17:24.233635  499526 addons.go:239] Setting addon default-storageclass=true in "newest-cni-352233"
	I1102 14:17:24.233678  499526 host.go:66] Checking if "newest-cni-352233" exists ...
	I1102 14:17:24.234096  499526 cli_runner.go:164] Run: docker container inspect newest-cni-352233 --format={{.State.Status}}
	I1102 14:17:24.243717  499526 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1102 14:17:24.246579  499526 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1102 14:17:24.246604  499526 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1102 14:17:24.246691  499526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-352233
	I1102 14:17:24.259978  499526 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1102 14:17:24.260001  499526 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1102 14:17:24.260065  499526 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-352233
	I1102 14:17:24.306175  499526 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33461 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/newest-cni-352233/id_rsa Username:docker}
	I1102 14:17:24.308703  499526 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33461 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/newest-cni-352233/id_rsa Username:docker}
	I1102 14:17:24.487359  499526 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1102 14:17:24.496748  499526 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1102 14:17:24.595293  499526 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1102 14:17:24.595656  499526 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1102 14:17:24.850075  499526 start.go:1013] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
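
Unwinding the sed expressions in the replace command above, the Corefile stored in the coredns ConfigMap gains a log directive plus this hosts block, which is what makes host.minikube.internal resolvable from inside the cluster:

    hosts {
       192.168.85.1 host.minikube.internal
       fallthrough
    }
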
	I1102 14:17:24.851060  499526 api_server.go:52] waiting for apiserver process to appear ...
	I1102 14:17:24.851236  499526 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1102 14:17:25.256856  499526 api_server.go:72] duration metric: took 1.065795515s to wait for apiserver process to appear ...
	I1102 14:17:25.256934  499526 api_server.go:88] waiting for apiserver healthz status ...
	I1102 14:17:25.256965  499526 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1102 14:17:25.271772  499526 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1102 14:17:25.273106  499526 api_server.go:141] control plane version: v1.34.1
	I1102 14:17:25.273137  499526 api_server.go:131] duration metric: took 16.183695ms to wait for apiserver health ...
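
The healthz probe is a plain HTTPS GET; by hand it is simply (-k because the apiserver serves the minikube-generated certificate, which the host does not trust):

    curl -k https://192.168.85.2:8443/healthz
    # -> ok    (HTTP 200, matching the response logged above)
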
	I1102 14:17:25.273147  499526 system_pods.go:43] waiting for kube-system pods to appear ...
	I1102 14:17:25.275442  499526 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1102 14:17:25.276649  499526 system_pods.go:59] 8 kube-system pods found
	I1102 14:17:25.276739  499526 system_pods.go:61] "coredns-66bc5c9577-g4hfq" [249838ab-df11-4a0c-a2ef-a1b05a0e2660] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1102 14:17:25.276766  499526 system_pods.go:61] "etcd-newest-cni-352233" [d797a2fa-d3f0-4180-88c1-5417b262b322] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1102 14:17:25.276808  499526 system_pods.go:61] "kindnet-g4hrl" [380d63bc-7a9c-4abb-9747-04c37075e8b0] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1102 14:17:25.276841  499526 system_pods.go:61] "kube-apiserver-newest-cni-352233" [ea69c744-43cb-4464-9da8-0768bd8820b1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1102 14:17:25.276866  499526 system_pods.go:61] "kube-controller-manager-newest-cni-352233" [5e48eb79-6be2-4f01-99bc-be7c2f15d45a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1102 14:17:25.276895  499526 system_pods.go:61] "kube-proxy-vbc2x" [2cec75f2-36fd-49c7-8644-941b68023b1b] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1102 14:17:25.276928  499526 system_pods.go:61] "kube-scheduler-newest-cni-352233" [33bacc9d-8a05-403d-865f-ba237a6aa780] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1102 14:17:25.276956  499526 system_pods.go:61] "storage-provisioner" [c94e5e11-33b4-4d32-9bbc-fa8e510911a5] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1102 14:17:25.276980  499526 system_pods.go:74] duration metric: took 3.826543ms to wait for pod list to return data ...
	I1102 14:17:25.277002  499526 default_sa.go:34] waiting for default service account to be created ...
	I1102 14:17:25.279206  499526 addons.go:515] duration metric: took 1.087830619s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1102 14:17:25.280078  499526 default_sa.go:45] found service account: "default"
	I1102 14:17:25.280103  499526 default_sa.go:55] duration metric: took 3.066067ms for default service account to be created ...
	I1102 14:17:25.280116  499526 kubeadm.go:587] duration metric: took 1.089059219s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1102 14:17:25.280135  499526 node_conditions.go:102] verifying NodePressure condition ...
	I1102 14:17:25.283106  499526 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1102 14:17:25.283140  499526 node_conditions.go:123] node cpu capacity is 2
	I1102 14:17:25.283153  499526 node_conditions.go:105] duration metric: took 3.003297ms to run NodePressure ...
	I1102 14:17:25.283166  499526 start.go:242] waiting for startup goroutines ...
	I1102 14:17:25.354752  499526 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-352233" context rescaled to 1 replicas
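
The rescale logged here trims the CoreDNS deployment (two replicas by default under kubeadm) down to one for this single-node cluster; the kubectl equivalent would be:

    kubectl --context newest-cni-352233 -n kube-system \
      scale deployment coredns --replicas=1
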
	I1102 14:17:25.354789  499526 start.go:247] waiting for cluster config update ...
	I1102 14:17:25.354802  499526 start.go:256] writing updated cluster config ...
	I1102 14:17:25.355132  499526 ssh_runner.go:195] Run: rm -f paused
	I1102 14:17:25.425876  499526 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1102 14:17:25.429314  499526 out.go:179] * Done! kubectl is now configured to use "newest-cni-352233" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 02 14:17:24 newest-cni-352233 crio[873]: time="2025-11-02T14:17:24.878478081Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 14:17:24 newest-cni-352233 crio[873]: time="2025-11-02T14:17:24.884866678Z" level=info msg="Running pod sandbox: kube-system/kindnet-g4hrl/POD" id=1c54b815-3339-4019-a058-0172c90614db name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 02 14:17:24 newest-cni-352233 crio[873]: time="2025-11-02T14:17:24.884961735Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 14:17:24 newest-cni-352233 crio[873]: time="2025-11-02T14:17:24.893508336Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=1c54b815-3339-4019-a058-0172c90614db name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 02 14:17:24 newest-cni-352233 crio[873]: time="2025-11-02T14:17:24.895795934Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=a71b53f8-2058-4ec0-a8a8-5ce3e7108833 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 02 14:17:24 newest-cni-352233 crio[873]: time="2025-11-02T14:17:24.911293386Z" level=info msg="Ran pod sandbox 6fe2204a100194c151b917bbab297b6d69bac08ecc4777420f3df8162b207d22 with infra container: kube-system/kindnet-g4hrl/POD" id=1c54b815-3339-4019-a058-0172c90614db name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 02 14:17:24 newest-cni-352233 crio[873]: time="2025-11-02T14:17:24.913315602Z" level=info msg="Ran pod sandbox e40dd53b462b51b60401a42998fc0fb17f7346a5f6a209940129a9f7620ad152 with infra container: kube-system/kube-proxy-vbc2x/POD" id=a71b53f8-2058-4ec0-a8a8-5ce3e7108833 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 02 14:17:24 newest-cni-352233 crio[873]: time="2025-11-02T14:17:24.919273129Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=df283260-4994-4dcc-9ade-2e7579f26bf9 name=/runtime.v1.ImageService/ImageStatus
	Nov 02 14:17:24 newest-cni-352233 crio[873]: time="2025-11-02T14:17:24.919294397Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=88d28f07-41ae-4fba-b67d-88fdab7d955a name=/runtime.v1.ImageService/ImageStatus
	Nov 02 14:17:24 newest-cni-352233 crio[873]: time="2025-11-02T14:17:24.921418038Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=74edf8a5-d9e5-449f-98fb-61594f54a9b9 name=/runtime.v1.ImageService/ImageStatus
	Nov 02 14:17:24 newest-cni-352233 crio[873]: time="2025-11-02T14:17:24.921792844Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=da843b86-aea4-481b-ad45-341c772d2567 name=/runtime.v1.ImageService/ImageStatus
	Nov 02 14:17:24 newest-cni-352233 crio[873]: time="2025-11-02T14:17:24.931994215Z" level=info msg="Creating container: kube-system/kindnet-g4hrl/kindnet-cni" id=cb71aead-eb87-458f-9e0f-7259d62d779d name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 14:17:24 newest-cni-352233 crio[873]: time="2025-11-02T14:17:24.932099611Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 14:17:24 newest-cni-352233 crio[873]: time="2025-11-02T14:17:24.936238921Z" level=info msg="Creating container: kube-system/kube-proxy-vbc2x/kube-proxy" id=9f2b338b-7465-46d9-9fc7-0a4965e0aec0 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 14:17:24 newest-cni-352233 crio[873]: time="2025-11-02T14:17:24.936358118Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 14:17:24 newest-cni-352233 crio[873]: time="2025-11-02T14:17:24.941130903Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 14:17:24 newest-cni-352233 crio[873]: time="2025-11-02T14:17:24.941673858Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 14:17:24 newest-cni-352233 crio[873]: time="2025-11-02T14:17:24.944203903Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 14:17:24 newest-cni-352233 crio[873]: time="2025-11-02T14:17:24.947033373Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 14:17:24 newest-cni-352233 crio[873]: time="2025-11-02T14:17:24.99615317Z" level=info msg="Created container 454552624b963d3127704e54e186243a0ae138e1f3b6184b356c351dc3f6fea2: kube-system/kindnet-g4hrl/kindnet-cni" id=cb71aead-eb87-458f-9e0f-7259d62d779d name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 14:17:24 newest-cni-352233 crio[873]: time="2025-11-02T14:17:24.997044059Z" level=info msg="Starting container: 454552624b963d3127704e54e186243a0ae138e1f3b6184b356c351dc3f6fea2" id=90b02c87-1a5c-4b58-979d-0f71f4716f44 name=/runtime.v1.RuntimeService/StartContainer
	Nov 02 14:17:25 newest-cni-352233 crio[873]: time="2025-11-02T14:17:25.006412379Z" level=info msg="Created container f0c805e6a36d43f4a53476981795f07150ea566b3b4f1ad713c83207a7853cd0: kube-system/kube-proxy-vbc2x/kube-proxy" id=9f2b338b-7465-46d9-9fc7-0a4965e0aec0 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 14:17:25 newest-cni-352233 crio[873]: time="2025-11-02T14:17:25.007230677Z" level=info msg="Starting container: f0c805e6a36d43f4a53476981795f07150ea566b3b4f1ad713c83207a7853cd0" id=94200c9a-f680-4802-ad5c-cbc6bd66c731 name=/runtime.v1.RuntimeService/StartContainer
	Nov 02 14:17:25 newest-cni-352233 crio[873]: time="2025-11-02T14:17:25.007364668Z" level=info msg="Started container" PID=1556 containerID=454552624b963d3127704e54e186243a0ae138e1f3b6184b356c351dc3f6fea2 description=kube-system/kindnet-g4hrl/kindnet-cni id=90b02c87-1a5c-4b58-979d-0f71f4716f44 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6fe2204a100194c151b917bbab297b6d69bac08ecc4777420f3df8162b207d22
	Nov 02 14:17:25 newest-cni-352233 crio[873]: time="2025-11-02T14:17:25.030496187Z" level=info msg="Started container" PID=1555 containerID=f0c805e6a36d43f4a53476981795f07150ea566b3b4f1ad713c83207a7853cd0 description=kube-system/kube-proxy-vbc2x/kube-proxy id=94200c9a-f680-4802-ad5c-cbc6bd66c731 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e40dd53b462b51b60401a42998fc0fb17f7346a5f6a209940129a9f7620ad152
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	f0c805e6a36d4       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   1 second ago        Running             kube-proxy                0                   e40dd53b462b5       kube-proxy-vbc2x                            kube-system
	454552624b963       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   1 second ago        Running             kindnet-cni               0                   6fe2204a10019       kindnet-g4hrl                               kube-system
	459cc6aca7630       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   15 seconds ago      Running             kube-scheduler            0                   dce5e80201af3       kube-scheduler-newest-cni-352233            kube-system
	0fecb05c34611       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   15 seconds ago      Running             kube-apiserver            0                   4043b8bc72b19       kube-apiserver-newest-cni-352233            kube-system
	6c951536c8cba       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   15 seconds ago      Running             etcd                      0                   ee6da9d93eec3       etcd-newest-cni-352233                      kube-system
	adf79b3ba1037       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   15 seconds ago      Running             kube-controller-manager   0                   d76fb08bf3fdb       kube-controller-manager-newest-cni-352233   kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-352233
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-352233
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=170a9221ec214abbddb4c7cdac340516a92b239a
	                    minikube.k8s.io/name=newest-cni-352233
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_02T14_17_19_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 02 Nov 2025 14:17:15 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-352233
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 02 Nov 2025 14:17:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 02 Nov 2025 14:17:18 +0000   Sun, 02 Nov 2025 14:17:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 02 Nov 2025 14:17:18 +0000   Sun, 02 Nov 2025 14:17:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 02 Nov 2025 14:17:18 +0000   Sun, 02 Nov 2025 14:17:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sun, 02 Nov 2025 14:17:18 +0000   Sun, 02 Nov 2025 14:17:11 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-352233
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                73c25d57-fdc2-428f-850a-0ced46336189
	  Boot ID:                    4c303cf4-c0b6-4c0d-aa1a-d7509feb15e7
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-352233                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8s
	  kube-system                 kindnet-g4hrl                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      3s
	  kube-system                 kube-apiserver-newest-cni-352233             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-controller-manager-newest-cni-352233    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-proxy-vbc2x                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-scheduler-newest-cni-352233             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 1s                 kube-proxy       
	  Normal   NodeHasSufficientMemory  15s (x8 over 16s)  kubelet          Node newest-cni-352233 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    15s (x8 over 16s)  kubelet          Node newest-cni-352233 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     15s (x8 over 16s)  kubelet          Node newest-cni-352233 status is now: NodeHasSufficientPID
	  Normal   Starting                 8s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 8s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  8s                 kubelet          Node newest-cni-352233 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8s                 kubelet          Node newest-cni-352233 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8s                 kubelet          Node newest-cni-352233 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4s                 node-controller  Node newest-cni-352233 event: Registered Node newest-cni-352233 in Controller
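
The KubeletNotReady condition and the not-ready taint above are both downstream of the missing CNI config: kindnet had only just started (see the container status section earlier), and the node flips to Ready once it writes a conflist. A hedged way to watch that transition from the host:

    sudo ls /etc/cni/net.d/                      # e.g. 10-kindnet.conflist once kindnet is up
    kubectl get node newest-cni-352233 --watch   # Ready goes False -> True shortly after
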
	
	
	==> dmesg <==
	[  +3.515963] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:57] overlayfs: idmapped layers are currently not supported
	[ +24.836033] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:58] overlayfs: idmapped layers are currently not supported
	[ +23.362553] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:59] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:01] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:02] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:03] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:04] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:06] overlayfs: idmapped layers are currently not supported
	[ +50.469589] kauditd_printk_skb: 8 callbacks suppressed
	[Nov 2 14:07] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:08] overlayfs: idmapped layers are currently not supported
	[ +11.089512] overlayfs: idmapped layers are currently not supported
	[ +33.821233] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:09] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:10] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:11] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:13] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:14] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:15] overlayfs: idmapped layers are currently not supported
	[ +29.099512] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:16] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:17] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [6c951536c8cba9e6eaa9eb03e22c962616d309c31dc2fb277256c2bc1155d8f4] <==
	{"level":"warn","ts":"2025-11-02T14:17:13.303031Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:17:13.329534Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:17:13.341592Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37262","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:17:13.361729Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:17:13.381382Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:17:13.402170Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37324","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:17:13.427428Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:17:13.437771Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:17:13.474072Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:17:13.512515Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:17:13.536298Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:17:13.552038Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37434","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:17:13.575344Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:17:13.621704Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:17:13.643629Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:17:13.646199Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:17:13.657031Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:17:13.674823Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:17:13.697502Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:17:13.732000Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:17:13.765776Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:17:13.795964Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:17:13.830942Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:17:13.849148Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:17:13.967986Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37692","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 14:17:27 up  2:59,  0 user,  load average: 3.63, 3.54, 3.04
	Linux newest-cni-352233 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [454552624b963d3127704e54e186243a0ae138e1f3b6184b356c351dc3f6fea2] <==
	I1102 14:17:25.114591       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1102 14:17:25.114940       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1102 14:17:25.115067       1 main.go:148] setting mtu 1500 for CNI 
	I1102 14:17:25.115080       1 main.go:178] kindnetd IP family: "ipv4"
	I1102 14:17:25.115093       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-02T14:17:25Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1102 14:17:25.319564       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1102 14:17:25.319593       1 controller.go:381] "Waiting for informer caches to sync"
	I1102 14:17:25.319602       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1102 14:17:25.320271       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [0fecb05c3461127a70fc8b52505a64cc92aa0183e4f389d50e6577436701a67a] <==
	E1102 14:17:15.442235       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1102 14:17:15.448767       1 controller.go:667] quota admission added evaluator for: namespaces
	I1102 14:17:15.464992       1 cache.go:39] Caches are synced for autoregister controller
	I1102 14:17:15.488246       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1102 14:17:15.488370       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1102 14:17:15.505847       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1102 14:17:15.512709       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1102 14:17:15.665017       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1102 14:17:15.769629       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1102 14:17:15.810666       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1102 14:17:15.810768       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1102 14:17:17.197517       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1102 14:17:17.252877       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1102 14:17:17.379036       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1102 14:17:17.386438       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1102 14:17:17.387612       1 controller.go:667] quota admission added evaluator for: endpoints
	I1102 14:17:17.393085       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1102 14:17:18.106885       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1102 14:17:18.528712       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1102 14:17:18.566979       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1102 14:17:18.584365       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1102 14:17:23.894368       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1102 14:17:23.899309       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1102 14:17:23.939630       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1102 14:17:24.172308       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [adf79b3ba103798d0e1bfe6cd317a57eb3b8923831c1f7b051965b61495fd44b] <==
	I1102 14:17:22.936043       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1102 14:17:22.936602       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1102 14:17:22.938692       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1102 14:17:22.940694       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1102 14:17:22.941751       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1102 14:17:22.941759       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1102 14:17:22.943955       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1102 14:17:22.950108       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1102 14:17:22.961277       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1102 14:17:22.961287       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1102 14:17:22.961427       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1102 14:17:22.961317       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1102 14:17:22.961480       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1102 14:17:22.961537       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1102 14:17:22.961573       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1102 14:17:22.961306       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1102 14:17:22.963514       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1102 14:17:22.970979       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="newest-cni-352233" podCIDRs=["10.42.0.0/24"]
	I1102 14:17:22.975276       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1102 14:17:22.981914       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1102 14:17:23.062758       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1102 14:17:23.132567       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1102 14:17:23.132597       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1102 14:17:23.132606       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1102 14:17:23.163875       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [f0c805e6a36d43f4a53476981795f07150ea566b3b4f1ad713c83207a7853cd0] <==
	I1102 14:17:25.079037       1 server_linux.go:53] "Using iptables proxy"
	I1102 14:17:25.160454       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1102 14:17:25.263146       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1102 14:17:25.265209       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1102 14:17:25.267366       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1102 14:17:25.300126       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1102 14:17:25.300245       1 server_linux.go:132] "Using iptables Proxier"
	I1102 14:17:25.304370       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1102 14:17:25.304814       1 server.go:527] "Version info" version="v1.34.1"
	I1102 14:17:25.304870       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1102 14:17:25.308372       1 config.go:106] "Starting endpoint slice config controller"
	I1102 14:17:25.308446       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1102 14:17:25.308857       1 config.go:200] "Starting service config controller"
	I1102 14:17:25.308903       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1102 14:17:25.309279       1 config.go:403] "Starting serviceCIDR config controller"
	I1102 14:17:25.309339       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1102 14:17:25.309894       1 config.go:309] "Starting node config controller"
	I1102 14:17:25.309952       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1102 14:17:25.309982       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1102 14:17:25.408612       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1102 14:17:25.409863       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1102 14:17:25.409920       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [459cc6aca763069be0c95813debfe673031eb4e5842ddf620b7b2566b2f65501] <==
	I1102 14:17:16.483271       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1102 14:17:16.486005       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1102 14:17:16.486247       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1102 14:17:16.486293       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1102 14:17:16.486356       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1102 14:17:16.492382       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1102 14:17:16.492548       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1102 14:17:16.499391       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1102 14:17:16.499544       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1102 14:17:16.499674       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1102 14:17:16.499778       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1102 14:17:16.499794       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1102 14:17:16.499853       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1102 14:17:16.499925       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1102 14:17:16.499933       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1102 14:17:16.499975       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1102 14:17:16.500011       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1102 14:17:16.500048       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1102 14:17:16.500141       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1102 14:17:16.508694       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1102 14:17:16.508796       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1102 14:17:16.508904       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1102 14:17:16.509053       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1102 14:17:16.513072       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	I1102 14:17:17.787046       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 02 14:17:19 newest-cni-352233 kubelet[1354]: E1102 14:17:19.784112    1354 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-352233\" already exists" pod="kube-system/etcd-newest-cni-352233"
	Nov 02 14:17:19 newest-cni-352233 kubelet[1354]: I1102 14:17:19.805906    1354 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-352233" podStartSLOduration=1.805886368 podStartE2EDuration="1.805886368s" podCreationTimestamp="2025-11-02 14:17:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-02 14:17:19.794787806 +0000 UTC m=+1.367276672" watchObservedRunningTime="2025-11-02 14:17:19.805886368 +0000 UTC m=+1.378375218"
	Nov 02 14:17:19 newest-cni-352233 kubelet[1354]: I1102 14:17:19.817965    1354 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-352233" podStartSLOduration=1.817945908 podStartE2EDuration="1.817945908s" podCreationTimestamp="2025-11-02 14:17:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-02 14:17:19.806501324 +0000 UTC m=+1.378990190" watchObservedRunningTime="2025-11-02 14:17:19.817945908 +0000 UTC m=+1.390434766"
	Nov 02 14:17:19 newest-cni-352233 kubelet[1354]: I1102 14:17:19.832214    1354 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-352233" podStartSLOduration=1.8321850240000002 podStartE2EDuration="1.832185024s" podCreationTimestamp="2025-11-02 14:17:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-02 14:17:19.818282872 +0000 UTC m=+1.390771738" watchObservedRunningTime="2025-11-02 14:17:19.832185024 +0000 UTC m=+1.404673874"
	Nov 02 14:17:23 newest-cni-352233 kubelet[1354]: I1102 14:17:23.024095    1354 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 02 14:17:23 newest-cni-352233 kubelet[1354]: I1102 14:17:23.024807    1354 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 02 14:17:23 newest-cni-352233 kubelet[1354]: I1102 14:17:23.968841    1354 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-352233" podStartSLOduration=5.968821989 podStartE2EDuration="5.968821989s" podCreationTimestamp="2025-11-02 14:17:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-02 14:17:19.838378098 +0000 UTC m=+1.410866973" watchObservedRunningTime="2025-11-02 14:17:23.968821989 +0000 UTC m=+5.541310839"
	Nov 02 14:17:23 newest-cni-352233 kubelet[1354]: I1102 14:17:23.997051    1354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2cec75f2-36fd-49c7-8644-941b68023b1b-kube-proxy\") pod \"kube-proxy-vbc2x\" (UID: \"2cec75f2-36fd-49c7-8644-941b68023b1b\") " pod="kube-system/kube-proxy-vbc2x"
	Nov 02 14:17:23 newest-cni-352233 kubelet[1354]: I1102 14:17:23.997333    1354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6kxr\" (UniqueName: \"kubernetes.io/projected/2cec75f2-36fd-49c7-8644-941b68023b1b-kube-api-access-n6kxr\") pod \"kube-proxy-vbc2x\" (UID: \"2cec75f2-36fd-49c7-8644-941b68023b1b\") " pod="kube-system/kube-proxy-vbc2x"
	Nov 02 14:17:23 newest-cni-352233 kubelet[1354]: I1102 14:17:23.997500    1354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/380d63bc-7a9c-4abb-9747-04c37075e8b0-xtables-lock\") pod \"kindnet-g4hrl\" (UID: \"380d63bc-7a9c-4abb-9747-04c37075e8b0\") " pod="kube-system/kindnet-g4hrl"
	Nov 02 14:17:23 newest-cni-352233 kubelet[1354]: I1102 14:17:23.997609    1354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2cec75f2-36fd-49c7-8644-941b68023b1b-lib-modules\") pod \"kube-proxy-vbc2x\" (UID: \"2cec75f2-36fd-49c7-8644-941b68023b1b\") " pod="kube-system/kube-proxy-vbc2x"
	Nov 02 14:17:23 newest-cni-352233 kubelet[1354]: I1102 14:17:23.997684    1354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/380d63bc-7a9c-4abb-9747-04c37075e8b0-cni-cfg\") pod \"kindnet-g4hrl\" (UID: \"380d63bc-7a9c-4abb-9747-04c37075e8b0\") " pod="kube-system/kindnet-g4hrl"
	Nov 02 14:17:23 newest-cni-352233 kubelet[1354]: I1102 14:17:23.997764    1354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2cec75f2-36fd-49c7-8644-941b68023b1b-xtables-lock\") pod \"kube-proxy-vbc2x\" (UID: \"2cec75f2-36fd-49c7-8644-941b68023b1b\") " pod="kube-system/kube-proxy-vbc2x"
	Nov 02 14:17:23 newest-cni-352233 kubelet[1354]: I1102 14:17:23.997869    1354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/380d63bc-7a9c-4abb-9747-04c37075e8b0-lib-modules\") pod \"kindnet-g4hrl\" (UID: \"380d63bc-7a9c-4abb-9747-04c37075e8b0\") " pod="kube-system/kindnet-g4hrl"
	Nov 02 14:17:23 newest-cni-352233 kubelet[1354]: I1102 14:17:23.997978    1354 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shkkq\" (UniqueName: \"kubernetes.io/projected/380d63bc-7a9c-4abb-9747-04c37075e8b0-kube-api-access-shkkq\") pod \"kindnet-g4hrl\" (UID: \"380d63bc-7a9c-4abb-9747-04c37075e8b0\") " pod="kube-system/kindnet-g4hrl"
	Nov 02 14:17:24 newest-cni-352233 kubelet[1354]: E1102 14:17:24.133510    1354 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 02 14:17:24 newest-cni-352233 kubelet[1354]: E1102 14:17:24.133565    1354 projected.go:196] Error preparing data for projected volume kube-api-access-n6kxr for pod kube-system/kube-proxy-vbc2x: configmap "kube-root-ca.crt" not found
	Nov 02 14:17:24 newest-cni-352233 kubelet[1354]: E1102 14:17:24.133659    1354 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2cec75f2-36fd-49c7-8644-941b68023b1b-kube-api-access-n6kxr podName:2cec75f2-36fd-49c7-8644-941b68023b1b nodeName:}" failed. No retries permitted until 2025-11-02 14:17:24.633632738 +0000 UTC m=+6.206121587 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-n6kxr" (UniqueName: "kubernetes.io/projected/2cec75f2-36fd-49c7-8644-941b68023b1b-kube-api-access-n6kxr") pod "kube-proxy-vbc2x" (UID: "2cec75f2-36fd-49c7-8644-941b68023b1b") : configmap "kube-root-ca.crt" not found
	Nov 02 14:17:24 newest-cni-352233 kubelet[1354]: E1102 14:17:24.147687    1354 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 02 14:17:24 newest-cni-352233 kubelet[1354]: E1102 14:17:24.147725    1354 projected.go:196] Error preparing data for projected volume kube-api-access-shkkq for pod kube-system/kindnet-g4hrl: configmap "kube-root-ca.crt" not found
	Nov 02 14:17:24 newest-cni-352233 kubelet[1354]: E1102 14:17:24.147790    1354 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/380d63bc-7a9c-4abb-9747-04c37075e8b0-kube-api-access-shkkq podName:380d63bc-7a9c-4abb-9747-04c37075e8b0 nodeName:}" failed. No retries permitted until 2025-11-02 14:17:24.647766327 +0000 UTC m=+6.220255177 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-shkkq" (UniqueName: "kubernetes.io/projected/380d63bc-7a9c-4abb-9747-04c37075e8b0-kube-api-access-shkkq") pod "kindnet-g4hrl" (UID: "380d63bc-7a9c-4abb-9747-04c37075e8b0") : configmap "kube-root-ca.crt" not found
	Nov 02 14:17:24 newest-cni-352233 kubelet[1354]: I1102 14:17:24.704588    1354 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 02 14:17:24 newest-cni-352233 kubelet[1354]: W1102 14:17:24.904979    1354 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/3dedeeb54f374fa01ce72bc2b41c87fc231415be9be044642fbcda8c122ba0ff/crio-e40dd53b462b51b60401a42998fc0fb17f7346a5f6a209940129a9f7620ad152 WatchSource:0}: Error finding container e40dd53b462b51b60401a42998fc0fb17f7346a5f6a209940129a9f7620ad152: Status 404 returned error can't find the container with id e40dd53b462b51b60401a42998fc0fb17f7346a5f6a209940129a9f7620ad152
	Nov 02 14:17:25 newest-cni-352233 kubelet[1354]: I1102 14:17:25.818030    1354 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-g4hrl" podStartSLOduration=2.817994212 podStartE2EDuration="2.817994212s" podCreationTimestamp="2025-11-02 14:17:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-02 14:17:25.817913161 +0000 UTC m=+7.390402011" watchObservedRunningTime="2025-11-02 14:17:25.817994212 +0000 UTC m=+7.390483070"
	Nov 02 14:17:25 newest-cni-352233 kubelet[1354]: I1102 14:17:25.818408    1354 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vbc2x" podStartSLOduration=2.818396079 podStartE2EDuration="2.818396079s" podCreationTimestamp="2025-11-02 14:17:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-02 14:17:25.79453364 +0000 UTC m=+7.367022498" watchObservedRunningTime="2025-11-02 14:17:25.818396079 +0000 UTC m=+7.390884937"
	

                                                
                                                
-- /stdout --
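The MountVolume.SetUp failures in the kubelet log above (configmap "kube-root-ca.crt" not found) look like the usual control-plane bootstrap ordering: projected service-account volumes need the per-namespace root CA configmap, which kube-controller-manager's root-ca-cert-publisher only creates after its caches sync. The kubelet retries after 500ms, and both kube-proxy-vbc2x and kindnet-g4hrl are observed running about two seconds later, so these errors appear transient here. A quick check that the configmap exists once bootstrap settles, using plain kubectl against the same context used elsewhere in this report:

	kubectl --context newest-cni-352233 -n kube-system get configmap kube-root-ca.crt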
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-352233 -n newest-cni-352233
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-352233 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-g4hfq storage-provisioner
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-352233 describe pod coredns-66bc5c9577-g4hfq storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-352233 describe pod coredns-66bc5c9577-g4hfq storage-provisioner: exit status 1 (81.895299ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-g4hfq" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-352233 describe pod coredns-66bc5c9577-g4hfq storage-provisioner: exit status 1
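The NotFound errors from the describe step are most likely a namespace mismatch rather than evidence that the pods vanished: coredns-66bc5c9577-g4hfq and storage-provisioner live in kube-system, but the describe command above runs without -n, so kubectl looks in the default namespace. A namespace-aware variant of the same post-mortem, sketched with plain kubectl and a shell loop (nothing minikube-specific assumed):

	kubectl --context newest-cni-352233 get po -A --field-selector=status.phase!=Running \
	  -o custom-columns=NS:.metadata.namespace,NAME:.metadata.name --no-headers |
	while read -r ns name; do
	  kubectl --context newest-cni-352233 -n "$ns" describe pod "$name"
	done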
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.46s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (6.89s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-786183 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p default-k8s-diff-port-786183 --alsologtostderr -v=1: exit status 80 (2.109504338s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-786183 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1102 14:17:30.164235  503365 out.go:360] Setting OutFile to fd 1 ...
	I1102 14:17:30.164513  503365 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 14:17:30.164524  503365 out.go:374] Setting ErrFile to fd 2...
	I1102 14:17:30.164529  503365 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 14:17:30.164849  503365 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-293314/.minikube/bin
	I1102 14:17:30.165227  503365 out.go:368] Setting JSON to false
	I1102 14:17:30.165253  503365 mustload.go:66] Loading cluster: default-k8s-diff-port-786183
	I1102 14:17:30.165754  503365 config.go:182] Loaded profile config "default-k8s-diff-port-786183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 14:17:30.166395  503365 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-786183 --format={{.State.Status}}
	I1102 14:17:30.189852  503365 host.go:66] Checking if "default-k8s-diff-port-786183" exists ...
	I1102 14:17:30.190184  503365 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 14:17:30.323066  503365 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:true NGoroutines:64 SystemTime:2025-11-02 14:17:30.308035908 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1102 14:17:30.323739  503365 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1761658712-21800/minikube-v1.37.0-1761658712-21800-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1761658712-21800-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-786183 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1102 14:17:30.330943  503365 out.go:179] * Pausing node default-k8s-diff-port-786183 ... 
	I1102 14:17:30.334209  503365 host.go:66] Checking if "default-k8s-diff-port-786183" exists ...
	I1102 14:17:30.334690  503365 ssh_runner.go:195] Run: systemctl --version
	I1102 14:17:30.334759  503365 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-786183
	I1102 14:17:30.386191  503365 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33456 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/default-k8s-diff-port-786183/id_rsa Username:docker}
	I1102 14:17:30.507656  503365 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1102 14:17:30.554286  503365 pause.go:52] kubelet running: true
	I1102 14:17:30.554347  503365 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1102 14:17:30.910318  503365 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1102 14:17:30.910409  503365 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1102 14:17:30.975936  503365 cri.go:89] found id: "ca0fa38afc419a430fa931e4b8bb43eec21c0571611ee490561556d416005aec"
	I1102 14:17:30.975961  503365 cri.go:89] found id: "769ecf1950a7818c9246e6454bf6040a545b55f56f2b3971721d96cd5fe6397a"
	I1102 14:17:30.975966  503365 cri.go:89] found id: "1bf6d913ed0ed0c57e76ab04a24da61ba22e4c320129f7d86e19d3853a33081b"
	I1102 14:17:30.975970  503365 cri.go:89] found id: "9fd9ef276f481fcadafbda761a8afadc807a2f1415cbc32fad53c6cf98b7595e"
	I1102 14:17:30.975974  503365 cri.go:89] found id: "4a42eeac35c46466a833f6d9dd2f5560d7c96f116a6b52ffdf9b0a58425abe0f"
	I1102 14:17:30.975977  503365 cri.go:89] found id: "f6bef86f73f59250e354d0fdd9e49760329ba2e76d5a2c9140645b949b671c4d"
	I1102 14:17:30.975980  503365 cri.go:89] found id: "6d9b69e73df509198b2e29494a4484507c8a14cccb6a2b6302b756a3c2183899"
	I1102 14:17:30.975983  503365 cri.go:89] found id: "d53a1eafeb3bc7e2100e0bcf284f029edbffd71be60582127cbabe95881a86ac"
	I1102 14:17:30.975986  503365 cri.go:89] found id: "312cee2bec817cdd2e35981ea4410dfbe7dc6c1e95635e12a5f8648c6f301ff1"
	I1102 14:17:30.975992  503365 cri.go:89] found id: "4a101d9dab2131f43fbd1503d716e8c8293362237381ea752445b7f6af83d5b9"
	I1102 14:17:30.976000  503365 cri.go:89] found id: "83f8cc1376a117417d77c447e6abdf61ce016b69c382db2babe916d2eb2ab76b"
	I1102 14:17:30.976003  503365 cri.go:89] found id: ""
	I1102 14:17:30.976051  503365 ssh_runner.go:195] Run: sudo runc list -f json
	I1102 14:17:30.986527  503365 retry.go:31] will retry after 316.42321ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T14:17:30Z" level=error msg="open /run/runc: no such file or directory"
	I1102 14:17:31.303885  503365 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1102 14:17:31.316957  503365 pause.go:52] kubelet running: false
	I1102 14:17:31.317019  503365 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1102 14:17:31.487904  503365 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1102 14:17:31.487980  503365 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1102 14:17:31.565695  503365 cri.go:89] found id: "ca0fa38afc419a430fa931e4b8bb43eec21c0571611ee490561556d416005aec"
	I1102 14:17:31.565718  503365 cri.go:89] found id: "769ecf1950a7818c9246e6454bf6040a545b55f56f2b3971721d96cd5fe6397a"
	I1102 14:17:31.565723  503365 cri.go:89] found id: "1bf6d913ed0ed0c57e76ab04a24da61ba22e4c320129f7d86e19d3853a33081b"
	I1102 14:17:31.565727  503365 cri.go:89] found id: "9fd9ef276f481fcadafbda761a8afadc807a2f1415cbc32fad53c6cf98b7595e"
	I1102 14:17:31.565731  503365 cri.go:89] found id: "4a42eeac35c46466a833f6d9dd2f5560d7c96f116a6b52ffdf9b0a58425abe0f"
	I1102 14:17:31.565734  503365 cri.go:89] found id: "f6bef86f73f59250e354d0fdd9e49760329ba2e76d5a2c9140645b949b671c4d"
	I1102 14:17:31.565738  503365 cri.go:89] found id: "6d9b69e73df509198b2e29494a4484507c8a14cccb6a2b6302b756a3c2183899"
	I1102 14:17:31.565742  503365 cri.go:89] found id: "d53a1eafeb3bc7e2100e0bcf284f029edbffd71be60582127cbabe95881a86ac"
	I1102 14:17:31.565745  503365 cri.go:89] found id: "312cee2bec817cdd2e35981ea4410dfbe7dc6c1e95635e12a5f8648c6f301ff1"
	I1102 14:17:31.565754  503365 cri.go:89] found id: "4a101d9dab2131f43fbd1503d716e8c8293362237381ea752445b7f6af83d5b9"
	I1102 14:17:31.565758  503365 cri.go:89] found id: "83f8cc1376a117417d77c447e6abdf61ce016b69c382db2babe916d2eb2ab76b"
	I1102 14:17:31.565761  503365 cri.go:89] found id: ""
	I1102 14:17:31.565817  503365 ssh_runner.go:195] Run: sudo runc list -f json
	I1102 14:17:31.576800  503365 retry.go:31] will retry after 206.866658ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T14:17:31Z" level=error msg="open /run/runc: no such file or directory"
	I1102 14:17:31.784295  503365 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1102 14:17:31.803468  503365 pause.go:52] kubelet running: false
	I1102 14:17:31.803883  503365 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1102 14:17:32.045456  503365 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1102 14:17:32.045535  503365 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1102 14:17:32.114037  503365 cri.go:89] found id: "ca0fa38afc419a430fa931e4b8bb43eec21c0571611ee490561556d416005aec"
	I1102 14:17:32.114059  503365 cri.go:89] found id: "769ecf1950a7818c9246e6454bf6040a545b55f56f2b3971721d96cd5fe6397a"
	I1102 14:17:32.114065  503365 cri.go:89] found id: "1bf6d913ed0ed0c57e76ab04a24da61ba22e4c320129f7d86e19d3853a33081b"
	I1102 14:17:32.114068  503365 cri.go:89] found id: "9fd9ef276f481fcadafbda761a8afadc807a2f1415cbc32fad53c6cf98b7595e"
	I1102 14:17:32.114072  503365 cri.go:89] found id: "4a42eeac35c46466a833f6d9dd2f5560d7c96f116a6b52ffdf9b0a58425abe0f"
	I1102 14:17:32.114076  503365 cri.go:89] found id: "f6bef86f73f59250e354d0fdd9e49760329ba2e76d5a2c9140645b949b671c4d"
	I1102 14:17:32.114079  503365 cri.go:89] found id: "6d9b69e73df509198b2e29494a4484507c8a14cccb6a2b6302b756a3c2183899"
	I1102 14:17:32.114093  503365 cri.go:89] found id: "d53a1eafeb3bc7e2100e0bcf284f029edbffd71be60582127cbabe95881a86ac"
	I1102 14:17:32.114097  503365 cri.go:89] found id: "312cee2bec817cdd2e35981ea4410dfbe7dc6c1e95635e12a5f8648c6f301ff1"
	I1102 14:17:32.114104  503365 cri.go:89] found id: "4a101d9dab2131f43fbd1503d716e8c8293362237381ea752445b7f6af83d5b9"
	I1102 14:17:32.114111  503365 cri.go:89] found id: "83f8cc1376a117417d77c447e6abdf61ce016b69c382db2babe916d2eb2ab76b"
	I1102 14:17:32.114114  503365 cri.go:89] found id: ""
	I1102 14:17:32.114165  503365 ssh_runner.go:195] Run: sudo runc list -f json
	I1102 14:17:32.129274  503365 out.go:203] 
	W1102 14:17:32.132122  503365 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T14:17:32Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T14:17:32Z" level=error msg="open /run/runc: no such file or directory"
	
	W1102 14:17:32.132145  503365 out.go:285] * 
	* 
	W1102 14:17:32.139405  503365 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1102 14:17:32.142268  503365 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p default-k8s-diff-port-786183 --alsologtostderr -v=1 failed: exit status 80
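The trace above pins the pause failure to one step: after disabling the kubelet and enumerating eleven running containers through crictl, minikube shells out to sudo runc list -f json, and runc exits 1 with "open /run/runc: no such file or directory" on all three attempts, so the command gives up with GUEST_PAUSE / exit status 80. In other words, crictl (talking to crio) and runc (reading its default state root /run/runc) disagree about where container state lives on this node; whether that is a runtime-root configuration detail of this crio image or a stale assumption in minikube's pause path is not decided by this log. A minimal reproduction from the host, assuming the profile still exists:

	minikube ssh -p default-k8s-diff-port-786183 -- "sudo crictl ps -q | wc -l; sudo ls -d /run/runc; sudo runc list -f json"

Note also a side effect visible in the trace: the first attempt ran systemctl disable --now kubelet before failing (pause.go:52 flips from "kubelet running: true" to "false" on the retries), so the node is left with the kubelet stopped even though the pause itself never completed.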
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-786183
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-786183:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "cf96e33bc393b6a3d6d55863f66411b90a67bf689beea00fe018be1f7c4b996e",
	        "Created": "2025-11-02T14:14:40.799097955Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 496615,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-02T14:16:22.150016569Z",
	            "FinishedAt": "2025-11-02T14:16:21.322466151Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/cf96e33bc393b6a3d6d55863f66411b90a67bf689beea00fe018be1f7c4b996e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/cf96e33bc393b6a3d6d55863f66411b90a67bf689beea00fe018be1f7c4b996e/hostname",
	        "HostsPath": "/var/lib/docker/containers/cf96e33bc393b6a3d6d55863f66411b90a67bf689beea00fe018be1f7c4b996e/hosts",
	        "LogPath": "/var/lib/docker/containers/cf96e33bc393b6a3d6d55863f66411b90a67bf689beea00fe018be1f7c4b996e/cf96e33bc393b6a3d6d55863f66411b90a67bf689beea00fe018be1f7c4b996e-json.log",
	        "Name": "/default-k8s-diff-port-786183",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-786183:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-786183",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "cf96e33bc393b6a3d6d55863f66411b90a67bf689beea00fe018be1f7c4b996e",
	                "LowerDir": "/var/lib/docker/overlay2/0cc4afb15e6b9077b7bd9ca2486b4ac42578c2a830d4d2dac09a5efd27fb8673-init/diff:/var/lib/docker/overlay2/53058afe6acd7391f639f551e07f446f6a39a991b699aea18507cb21a1652e97/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0cc4afb15e6b9077b7bd9ca2486b4ac42578c2a830d4d2dac09a5efd27fb8673/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0cc4afb15e6b9077b7bd9ca2486b4ac42578c2a830d4d2dac09a5efd27fb8673/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0cc4afb15e6b9077b7bd9ca2486b4ac42578c2a830d4d2dac09a5efd27fb8673/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-786183",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-786183/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-786183",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-786183",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-786183",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "11b00f83d16db4f0c0b86e906c4dec09edd6d70b7dfe467bcf0592ddad6aa96f",
	            "SandboxKey": "/var/run/docker/netns/11b00f83d16d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33456"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33457"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33460"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33458"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33459"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-786183": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "22:21:1c:bb:67:d6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "eb820b490718d17822d92cba10b61f1c2ec01866da1013536864f8ac5224c699",
	                    "EndpointID": "318ba17067f423805d27413ba08281f97485626f7e6dd1c198d1566d49927574",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-786183",
	                        "cf96e33bc393"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
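The dump above is the raw docker container inspect array for the kic container. Note that HostConfig.PortBindings lists empty HostPort values (Docker assigns ephemeral host ports at start), while the actual assignments appear under NetworkSettings.Ports (33456-33460, all bound to 127.0.0.1). A minimal Go sketch of extracting those mappings from such output — not part of the test suite; the struct mirrors only the fields used here, and the input file name inspect.json is an assumption for illustration:

	// portmap.go: print the host port published for each container port,
	// reading "docker container inspect" JSON like the dump above.
	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	type inspectEntry struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func main() {
		data, err := os.ReadFile("inspect.json") // hypothetical: the saved inspect output
		if err != nil {
			panic(err)
		}
		var entries []inspectEntry // docker inspect emits a JSON array
		if err := json.Unmarshal(data, &entries); err != nil {
			panic(err)
		}
		if len(entries) == 0 {
			return
		}
		for port, bindings := range entries[0].NetworkSettings.Ports {
			for _, b := range bindings {
				fmt.Printf("%s -> %s:%s\n", port, b.HostIp, b.HostPort)
			}
		}
	}

The same information is available interactively via docker port default-k8s-diff-port-786183.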
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-786183 -n default-k8s-diff-port-786183
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-786183 -n default-k8s-diff-port-786183: exit status 2 (364.705048ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
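The --format={{.Host}} argument is a Go text/template rendered against minikube's status struct, which is why the command can print just "Running" while still exiting nonzero: the exit code encodes component state, and helpers_test.go accordingly treats exit status 2 as possibly OK. A minimal sketch of the template mechanism, with a hypothetical Status type standing in for minikube's internal one:

	// statusfmt.go: render a status struct through a user-supplied Go template,
	// the same pattern minikube uses for --format. Field names and values here
	// are illustrative stand-ins, not minikube's actual type.
	package main

	import (
		"os"
		"text/template"
	)

	type Status struct {
		Host       string
		Kubelet    string
		APIServer  string
		Kubeconfig string
	}

	func main() {
		st := Status{Host: "Running", Kubelet: "Stopped", APIServer: "Paused", Kubeconfig: "Configured"}
		// template.Must panics on a malformed template, acceptable for a fixed string.
		tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
		if err := tmpl.Execute(os.Stdout, st); err != nil {
			panic(err)
		}
	}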
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-786183 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-786183 logs -n 25: (1.290915492s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ no-preload-150469 image list --format=json                                                                                                                                                                                                    │ no-preload-150469            │ jenkins │ v1.37.0 │ 02 Nov 25 14:14 UTC │ 02 Nov 25 14:14 UTC │
	│ pause   │ -p no-preload-150469 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-150469            │ jenkins │ v1.37.0 │ 02 Nov 25 14:14 UTC │                     │
	│ delete  │ -p no-preload-150469                                                                                                                                                                                                                          │ no-preload-150469            │ jenkins │ v1.37.0 │ 02 Nov 25 14:14 UTC │ 02 Nov 25 14:14 UTC │
	│ delete  │ -p no-preload-150469                                                                                                                                                                                                                          │ no-preload-150469            │ jenkins │ v1.37.0 │ 02 Nov 25 14:14 UTC │ 02 Nov 25 14:14 UTC │
	│ delete  │ -p disable-driver-mounts-720030                                                                                                                                                                                                               │ disable-driver-mounts-720030 │ jenkins │ v1.37.0 │ 02 Nov 25 14:14 UTC │ 02 Nov 25 14:14 UTC │
	│ start   │ -p default-k8s-diff-port-786183 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-786183 │ jenkins │ v1.37.0 │ 02 Nov 25 14:14 UTC │ 02 Nov 25 14:15 UTC │
	│ addons  │ enable metrics-server -p embed-certs-955646 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-955646           │ jenkins │ v1.37.0 │ 02 Nov 25 14:15 UTC │                     │
	│ stop    │ -p embed-certs-955646 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-955646           │ jenkins │ v1.37.0 │ 02 Nov 25 14:15 UTC │ 02 Nov 25 14:15 UTC │
	│ addons  │ enable dashboard -p embed-certs-955646 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-955646           │ jenkins │ v1.37.0 │ 02 Nov 25 14:15 UTC │ 02 Nov 25 14:15 UTC │
	│ start   │ -p embed-certs-955646 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-955646           │ jenkins │ v1.37.0 │ 02 Nov 25 14:15 UTC │ 02 Nov 25 14:16 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-786183 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-786183 │ jenkins │ v1.37.0 │ 02 Nov 25 14:16 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-786183 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-786183 │ jenkins │ v1.37.0 │ 02 Nov 25 14:16 UTC │ 02 Nov 25 14:16 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-786183 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-786183 │ jenkins │ v1.37.0 │ 02 Nov 25 14:16 UTC │ 02 Nov 25 14:16 UTC │
	│ start   │ -p default-k8s-diff-port-786183 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-786183 │ jenkins │ v1.37.0 │ 02 Nov 25 14:16 UTC │ 02 Nov 25 14:17 UTC │
	│ image   │ embed-certs-955646 image list --format=json                                                                                                                                                                                                   │ embed-certs-955646           │ jenkins │ v1.37.0 │ 02 Nov 25 14:16 UTC │ 02 Nov 25 14:16 UTC │
	│ pause   │ -p embed-certs-955646 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-955646           │ jenkins │ v1.37.0 │ 02 Nov 25 14:16 UTC │                     │
	│ delete  │ -p embed-certs-955646                                                                                                                                                                                                                         │ embed-certs-955646           │ jenkins │ v1.37.0 │ 02 Nov 25 14:16 UTC │ 02 Nov 25 14:16 UTC │
	│ delete  │ -p embed-certs-955646                                                                                                                                                                                                                         │ embed-certs-955646           │ jenkins │ v1.37.0 │ 02 Nov 25 14:16 UTC │ 02 Nov 25 14:16 UTC │
	│ start   │ -p newest-cni-352233 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-352233            │ jenkins │ v1.37.0 │ 02 Nov 25 14:16 UTC │ 02 Nov 25 14:17 UTC │
	│ addons  │ enable metrics-server -p newest-cni-352233 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-352233            │ jenkins │ v1.37.0 │ 02 Nov 25 14:17 UTC │                     │
	│ stop    │ -p newest-cni-352233 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-352233            │ jenkins │ v1.37.0 │ 02 Nov 25 14:17 UTC │ 02 Nov 25 14:17 UTC │
	│ addons  │ enable dashboard -p newest-cni-352233 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-352233            │ jenkins │ v1.37.0 │ 02 Nov 25 14:17 UTC │ 02 Nov 25 14:17 UTC │
	│ start   │ -p newest-cni-352233 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-352233            │ jenkins │ v1.37.0 │ 02 Nov 25 14:17 UTC │                     │
	│ image   │ default-k8s-diff-port-786183 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-786183 │ jenkins │ v1.37.0 │ 02 Nov 25 14:17 UTC │ 02 Nov 25 14:17 UTC │
	│ pause   │ -p default-k8s-diff-port-786183 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-786183 │ jenkins │ v1.37.0 │ 02 Nov 25 14:17 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/02 14:17:29
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1102 14:17:29.539833  503184 out.go:360] Setting OutFile to fd 1 ...
	I1102 14:17:29.540010  503184 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 14:17:29.540070  503184 out.go:374] Setting ErrFile to fd 2...
	I1102 14:17:29.540079  503184 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 14:17:29.540467  503184 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-293314/.minikube/bin
	I1102 14:17:29.540939  503184 out.go:368] Setting JSON to false
	I1102 14:17:29.541872  503184 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10802,"bootTime":1762082248,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1102 14:17:29.541952  503184 start.go:143] virtualization:  
	I1102 14:17:29.546081  503184 out.go:179] * [newest-cni-352233] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1102 14:17:29.548553  503184 out.go:179]   - MINIKUBE_LOCATION=21808
	I1102 14:17:29.548760  503184 notify.go:221] Checking for updates...
	I1102 14:17:29.554236  503184 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1102 14:17:29.557305  503184 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-293314/kubeconfig
	I1102 14:17:29.561055  503184 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-293314/.minikube
	I1102 14:17:29.564062  503184 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1102 14:17:29.567037  503184 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1102 14:17:29.570343  503184 config.go:182] Loaded profile config "newest-cni-352233": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 14:17:29.571020  503184 driver.go:422] Setting default libvirt URI to qemu:///system
	I1102 14:17:29.621681  503184 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1102 14:17:29.621796  503184 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 14:17:29.727988  503184 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-02 14:17:29.717591752 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1102 14:17:29.728092  503184 docker.go:319] overlay module found
	I1102 14:17:29.731064  503184 out.go:179] * Using the docker driver based on existing profile
	I1102 14:17:29.736126  503184 start.go:309] selected driver: docker
	I1102 14:17:29.736146  503184 start.go:930] validating driver "docker" against &{Name:newest-cni-352233 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-352233 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1102 14:17:29.736238  503184 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1102 14:17:29.736888  503184 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 14:17:29.814582  503184 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-02 14:17:29.802583381 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1102 14:17:29.814967  503184 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1102 14:17:29.814994  503184 cni.go:84] Creating CNI manager for ""
	I1102 14:17:29.815049  503184 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1102 14:17:29.815078  503184 start.go:353] cluster config:
	{Name:newest-cni-352233 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-352233 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1102 14:17:29.818309  503184 out.go:179] * Starting "newest-cni-352233" primary control-plane node in "newest-cni-352233" cluster
	I1102 14:17:29.821082  503184 cache.go:124] Beginning downloading kic base image for docker with crio
	I1102 14:17:29.823906  503184 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1102 14:17:29.826690  503184 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1102 14:17:29.826736  503184 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1102 14:17:29.826741  503184 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21808-293314/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1102 14:17:29.826772  503184 cache.go:59] Caching tarball of preloaded images
	I1102 14:17:29.826858  503184 preload.go:233] Found /home/jenkins/minikube-integration/21808-293314/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1102 14:17:29.826867  503184 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1102 14:17:29.826995  503184 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/newest-cni-352233/config.json ...
	I1102 14:17:29.854097  503184 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1102 14:17:29.854138  503184 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1102 14:17:29.854157  503184 cache.go:233] Successfully downloaded all kic artifacts
	I1102 14:17:29.854184  503184 start.go:360] acquireMachinesLock for newest-cni-352233: {Name:mk656133c677274089939931d0ae5b5b59bd0afb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1102 14:17:29.854534  503184 start.go:364] duration metric: took 289.538µs to acquireMachinesLock for "newest-cni-352233"
	I1102 14:17:29.854975  503184 start.go:96] Skipping create...Using existing machine configuration
	I1102 14:17:29.854997  503184 fix.go:54] fixHost starting: 
	I1102 14:17:29.855345  503184 cli_runner.go:164] Run: docker container inspect newest-cni-352233 --format={{.State.Status}}
	I1102 14:17:29.878904  503184 fix.go:112] recreateIfNeeded on newest-cni-352233: state=Stopped err=<nil>
	W1102 14:17:29.878947  503184 fix.go:138] unexpected machine state, will restart: <nil>
	
	
	==> CRI-O <==
	Nov 02 14:17:11 default-k8s-diff-port-786183 crio[680]: time="2025-11-02T14:17:11.365108676Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=13f25ce4-40d6-4497-a898-2c59af7c7845 name=/runtime.v1.ImageService/ImageStatus
	Nov 02 14:17:11 default-k8s-diff-port-786183 crio[680]: time="2025-11-02T14:17:11.366295035Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=ebd3cb37-a009-43e6-a993-c1df0ab75503 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 14:17:11 default-k8s-diff-port-786183 crio[680]: time="2025-11-02T14:17:11.366402671Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 14:17:11 default-k8s-diff-port-786183 crio[680]: time="2025-11-02T14:17:11.375026942Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 14:17:11 default-k8s-diff-port-786183 crio[680]: time="2025-11-02T14:17:11.375223899Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/79c7009b0015cbbddc9f51cce2661a741f20fa95ffd7263ceae4ca2044b34079/merged/etc/passwd: no such file or directory"
	Nov 02 14:17:11 default-k8s-diff-port-786183 crio[680]: time="2025-11-02T14:17:11.375266099Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/79c7009b0015cbbddc9f51cce2661a741f20fa95ffd7263ceae4ca2044b34079/merged/etc/group: no such file or directory"
	Nov 02 14:17:11 default-k8s-diff-port-786183 crio[680]: time="2025-11-02T14:17:11.375559526Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 14:17:11 default-k8s-diff-port-786183 crio[680]: time="2025-11-02T14:17:11.411537674Z" level=info msg="Created container ca0fa38afc419a430fa931e4b8bb43eec21c0571611ee490561556d416005aec: kube-system/storage-provisioner/storage-provisioner" id=ebd3cb37-a009-43e6-a993-c1df0ab75503 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 14:17:11 default-k8s-diff-port-786183 crio[680]: time="2025-11-02T14:17:11.414131703Z" level=info msg="Starting container: ca0fa38afc419a430fa931e4b8bb43eec21c0571611ee490561556d416005aec" id=32635540-bc18-441c-b2a1-786dc75abcf2 name=/runtime.v1.RuntimeService/StartContainer
	Nov 02 14:17:11 default-k8s-diff-port-786183 crio[680]: time="2025-11-02T14:17:11.41679199Z" level=info msg="Started container" PID=1668 containerID=ca0fa38afc419a430fa931e4b8bb43eec21c0571611ee490561556d416005aec description=kube-system/storage-provisioner/storage-provisioner id=32635540-bc18-441c-b2a1-786dc75abcf2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=88f67abef6f33cd10f32199c167e726126f7f24507c111601e752d4e34b22c31
	Nov 02 14:17:20 default-k8s-diff-port-786183 crio[680]: time="2025-11-02T14:17:20.175968892Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 02 14:17:20 default-k8s-diff-port-786183 crio[680]: time="2025-11-02T14:17:20.181696345Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 02 14:17:20 default-k8s-diff-port-786183 crio[680]: time="2025-11-02T14:17:20.181866242Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 02 14:17:20 default-k8s-diff-port-786183 crio[680]: time="2025-11-02T14:17:20.181954054Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 02 14:17:20 default-k8s-diff-port-786183 crio[680]: time="2025-11-02T14:17:20.18706756Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 02 14:17:20 default-k8s-diff-port-786183 crio[680]: time="2025-11-02T14:17:20.187103754Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 02 14:17:20 default-k8s-diff-port-786183 crio[680]: time="2025-11-02T14:17:20.187127286Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 02 14:17:20 default-k8s-diff-port-786183 crio[680]: time="2025-11-02T14:17:20.190263787Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 02 14:17:20 default-k8s-diff-port-786183 crio[680]: time="2025-11-02T14:17:20.190348465Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 02 14:17:20 default-k8s-diff-port-786183 crio[680]: time="2025-11-02T14:17:20.190379571Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 02 14:17:20 default-k8s-diff-port-786183 crio[680]: time="2025-11-02T14:17:20.193708402Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 02 14:17:20 default-k8s-diff-port-786183 crio[680]: time="2025-11-02T14:17:20.19374023Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 02 14:17:20 default-k8s-diff-port-786183 crio[680]: time="2025-11-02T14:17:20.193765297Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 02 14:17:20 default-k8s-diff-port-786183 crio[680]: time="2025-11-02T14:17:20.207108182Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 02 14:17:20 default-k8s-diff-port-786183 crio[680]: time="2025-11-02T14:17:20.207316159Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	ca0fa38afc419       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           21 seconds ago       Running             storage-provisioner         2                   88f67abef6f33       storage-provisioner                                    kube-system
	4a101d9dab213       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           24 seconds ago       Exited              dashboard-metrics-scraper   2                   3d9a6a625199e       dashboard-metrics-scraper-6ffb444bf9-5d5nz             kubernetes-dashboard
	83f8cc1376a11       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   34 seconds ago       Running             kubernetes-dashboard        0                   b10946822e6ab       kubernetes-dashboard-855c9754f9-b2vjd                  kubernetes-dashboard
	769ecf1950a78       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           53 seconds ago       Running             coredns                     1                   022462f687bad       coredns-66bc5c9577-lwp97                               kube-system
	1bf6d913ed0ed       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           53 seconds ago       Running             kube-proxy                  1                   30ce3864cf9e0       kube-proxy-jlf8q                                       kube-system
	9fd9ef276f481       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           53 seconds ago       Exited              storage-provisioner         1                   88f67abef6f33       storage-provisioner                                    kube-system
	4a42eeac35c46       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           53 seconds ago       Running             kindnet-cni                 1                   7e593dd947a15       kindnet-pd47j                                          kube-system
	4eaca312ddc32       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           53 seconds ago       Running             busybox                     1                   f42b227ee3b77       busybox                                                default
	f6bef86f73f59       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   4dc301b3cd742       kube-apiserver-default-k8s-diff-port-786183            kube-system
	6d9b69e73df50       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   a86685caa1aa4       etcd-default-k8s-diff-port-786183                      kube-system
	d53a1eafeb3bc       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   7e82a12866edd       kube-controller-manager-default-k8s-diff-port-786183   kube-system
	312cee2bec817       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   9f34ceeb8afb9       kube-scheduler-default-k8s-diff-port-786183            kube-system
	
	
	==> coredns [769ecf1950a7818c9246e6454bf6040a545b55f56f2b3971721d96cd5fe6397a] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:51463 - 12375 "HINFO IN 3440052553814858932.161808677061012789. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.017089078s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-786183
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-786183
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=170a9221ec214abbddb4c7cdac340516a92b239a
	                    minikube.k8s.io/name=default-k8s-diff-port-786183
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_02T14_15_09_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 02 Nov 2025 14:15:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-786183
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 02 Nov 2025 14:17:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 02 Nov 2025 14:17:09 +0000   Sun, 02 Nov 2025 14:15:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 02 Nov 2025 14:17:09 +0000   Sun, 02 Nov 2025 14:15:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 02 Nov 2025 14:17:09 +0000   Sun, 02 Nov 2025 14:15:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 02 Nov 2025 14:17:09 +0000   Sun, 02 Nov 2025 14:15:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-786183
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                0782cb70-5112-4773-81bc-acca336842b5
	  Boot ID:                    4c303cf4-c0b6-4c0d-aa1a-d7509feb15e7
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 coredns-66bc5c9577-lwp97                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m20s
	  kube-system                 etcd-default-k8s-diff-port-786183                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m25s
	  kube-system                 kindnet-pd47j                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m20s
	  kube-system                 kube-apiserver-default-k8s-diff-port-786183             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m26s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-786183    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 kube-proxy-jlf8q                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 kube-scheduler-default-k8s-diff-port-786183             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m19s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-5d5nz              0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-b2vjd                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 2m18s              kube-proxy       
	  Normal   Starting                 51s                kube-proxy       
	  Normal   NodeHasSufficientPID     2m25s              kubelet          Node default-k8s-diff-port-786183 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 2m25s              kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m25s              kubelet          Node default-k8s-diff-port-786183 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m25s              kubelet          Node default-k8s-diff-port-786183 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 2m25s              kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m21s              node-controller  Node default-k8s-diff-port-786183 event: Registered Node default-k8s-diff-port-786183 in Controller
	  Normal   NodeReady                98s                kubelet          Node default-k8s-diff-port-786183 status is now: NodeReady
	  Normal   Starting                 64s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 64s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  63s (x8 over 64s)  kubelet          Node default-k8s-diff-port-786183 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    63s (x8 over 64s)  kubelet          Node default-k8s-diff-port-786183 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     63s (x8 over 64s)  kubelet          Node default-k8s-diff-port-786183 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           49s                node-controller  Node default-k8s-diff-port-786183 event: Registered Node default-k8s-diff-port-786183 in Controller
	
	
	==> dmesg <==
	[  +3.515963] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:57] overlayfs: idmapped layers are currently not supported
	[ +24.836033] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:58] overlayfs: idmapped layers are currently not supported
	[ +23.362553] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:59] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:01] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:02] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:03] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:04] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:06] overlayfs: idmapped layers are currently not supported
	[ +50.469589] kauditd_printk_skb: 8 callbacks suppressed
	[Nov 2 14:07] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:08] overlayfs: idmapped layers are currently not supported
	[ +11.089512] overlayfs: idmapped layers are currently not supported
	[ +33.821233] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:09] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:10] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:11] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:13] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:14] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:15] overlayfs: idmapped layers are currently not supported
	[ +29.099512] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:16] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:17] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [6d9b69e73df509198b2e29494a4484507c8a14cccb6a2b6302b756a3c2183899] <==
	{"level":"warn","ts":"2025-11-02T14:16:35.580094Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:16:35.604670Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:16:35.711349Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:16:35.754703Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:16:35.797212Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:16:35.843889Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47494","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:16:35.863957Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:16:35.919018Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:16:35.949658Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:16:36.044433Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:16:36.088103Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:16:36.175162Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:16:36.203705Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:16:36.275540Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:16:36.336557Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:16:36.353210Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:16:36.478712Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47690","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-02T14:16:39.270076Z","caller":"traceutil/trace.go:172","msg":"trace[403876028] linearizableReadLoop","detail":"{readStateIndex:501; appliedIndex:502; }","duration":"101.257338ms","start":"2025-11-02T14:16:39.168794Z","end":"2025-11-02T14:16:39.270052Z","steps":["trace[403876028] 'read index received'  (duration: 101.249206ms)","trace[403876028] 'applied index is now lower than readState.Index'  (duration: 6.63µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-02T14:16:39.275390Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"106.56725ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:aggregate-to-edit\" limit:1 ","response":"range_response_count:1 size:2133"}
	{"level":"info","ts":"2025-11-02T14:16:39.275447Z","caller":"traceutil/trace.go:172","msg":"trace[863170845] range","detail":"{range_begin:/registry/clusterroles/system:aggregate-to-edit; range_end:; response_count:1; response_revision:472; }","duration":"106.645068ms","start":"2025-11-02T14:16:39.168790Z","end":"2025-11-02T14:16:39.275435Z","steps":["trace[863170845] 'agreement among raft nodes before linearized reading'  (duration: 106.43924ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-02T14:16:39.283015Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"114.152272ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/default/default-k8s-diff-port-786183.1874363d0677b235\" limit:1 ","response":"range_response_count:1 size:793"}
	{"level":"info","ts":"2025-11-02T14:16:39.283079Z","caller":"traceutil/trace.go:172","msg":"trace[1231220929] range","detail":"{range_begin:/registry/events/default/default-k8s-diff-port-786183.1874363d0677b235; range_end:; response_count:1; response_revision:472; }","duration":"114.221302ms","start":"2025-11-02T14:16:39.168838Z","end":"2025-11-02T14:16:39.283059Z","steps":["trace[1231220929] 'agreement among raft nodes before linearized reading'  (duration: 114.049041ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-02T14:16:39.395118Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"100.732245ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:aggregate-to-view\" limit:1 ","response":"range_response_count:1 size:2030"}
	{"level":"info","ts":"2025-11-02T14:16:39.395244Z","caller":"traceutil/trace.go:172","msg":"trace[1348302984] range","detail":"{range_begin:/registry/clusterroles/system:aggregate-to-view; range_end:; response_count:1; response_revision:472; }","duration":"100.869001ms","start":"2025-11-02T14:16:39.294361Z","end":"2025-11-02T14:16:39.395230Z","steps":["trace[1348302984] 'range keys from in-memory index tree'  (duration: 100.479434ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-02T14:16:39.395620Z","caller":"traceutil/trace.go:172","msg":"trace[526266452] transaction","detail":"{read_only:false; response_revision:473; number_of_response:1; }","duration":"101.114731ms","start":"2025-11-02T14:16:39.294494Z","end":"2025-11-02T14:16:39.395609Z","steps":["trace[526266452] 'process raft request'  (duration: 59.803294ms)","trace[526266452] 'compare'  (duration: 40.670397ms)"],"step_count":2}
	
	
	==> kernel <==
	 14:17:33 up  3:00,  0 user,  load average: 3.58, 3.53, 3.04
	Linux default-k8s-diff-port-786183 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4a42eeac35c46466a833f6d9dd2f5560d7c96f116a6b52ffdf9b0a58425abe0f] <==
	I1102 14:16:39.863361       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1102 14:16:39.867737       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1102 14:16:39.867873       1 main.go:148] setting mtu 1500 for CNI 
	I1102 14:16:39.867886       1 main.go:178] kindnetd IP family: "ipv4"
	I1102 14:16:39.867901       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-02T14:16:40Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1102 14:16:40.187233       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1102 14:16:40.191879       1 controller.go:381] "Waiting for informer caches to sync"
	I1102 14:16:40.191977       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1102 14:16:40.192228       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1102 14:17:10.175741       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1102 14:17:10.187334       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1102 14:17:10.187437       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1102 14:17:10.192000       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1102 14:17:11.792365       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1102 14:17:11.792413       1 metrics.go:72] Registering metrics
	I1102 14:17:11.792470       1 controller.go:711] "Syncing nftables rules"
	I1102 14:17:20.175659       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1102 14:17:20.175699       1 main.go:301] handling current node
	I1102 14:17:30.183014       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1102 14:17:30.183054       1 main.go:301] handling current node
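	# The transient "dial tcp 10.96.0.1:443: i/o timeout" errors above cleared once
	# caches synced at 14:17:11. A sketch for re-probing the service VIP from inside
	# the node (assumes curl is available in the kicbase image):
	#   minikube ssh -p default-k8s-diff-port-786183 -- curl -sk --max-time 5 https://10.96.0.1/version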
	
	
	==> kube-apiserver [f6bef86f73f59250e354d0fdd9e49760329ba2e76d5a2c9140645b949b671c4d] <==
	I1102 14:16:38.727202       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1102 14:16:38.737736       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1102 14:16:38.738006       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1102 14:16:38.738055       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1102 14:16:38.738104       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1102 14:16:38.751079       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1102 14:16:38.751145       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1102 14:16:38.780359       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1102 14:16:38.782215       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1102 14:16:38.784644       1 aggregator.go:171] initial CRD sync complete...
	I1102 14:16:38.784697       1 autoregister_controller.go:144] Starting autoregister controller
	I1102 14:16:38.784705       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1102 14:16:38.784712       1 cache.go:39] Caches are synced for autoregister controller
	E1102 14:16:38.862284       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1102 14:16:38.994680       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1102 14:16:39.083686       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1102 14:16:40.312361       1 controller.go:667] quota admission added evaluator for: namespaces
	I1102 14:16:41.014171       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1102 14:16:41.255703       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1102 14:16:41.352376       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1102 14:16:41.787468       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.114.203"}
	I1102 14:16:41.861999       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.33.188"}
	I1102 14:16:44.911479       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1102 14:16:45.018230       1 controller.go:667] quota admission added evaluator for: endpoints
	I1102 14:16:45.138296       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
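	# The two clusterIP allocations above are the dashboard Services; a hedged way
	# to confirm both resolved (assumes the kubeconfig context from this run):
	#   kubectl --context default-k8s-diff-port-786183 -n kubernetes-dashboard get svc -o wide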
	
	
	==> kube-controller-manager [d53a1eafeb3bc7e2100e0bcf284f029edbffd71be60582127cbabe95881a86ac] <==
	I1102 14:16:44.660213       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1102 14:16:44.660257       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1102 14:16:44.660557       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1102 14:16:44.663914       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1102 14:16:44.677371       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1102 14:16:44.677401       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1102 14:16:44.677422       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1102 14:16:44.677724       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1102 14:16:44.677821       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-786183"
	I1102 14:16:44.677896       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1102 14:16:44.677441       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1102 14:16:44.689518       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1102 14:16:44.693025       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1102 14:16:44.693055       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1102 14:16:44.693062       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1102 14:16:44.701535       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1102 14:16:44.702945       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1102 14:16:44.702994       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1102 14:16:44.711347       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1102 14:16:44.711590       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1102 14:16:44.717054       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1102 14:16:44.717162       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1102 14:16:44.723047       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1102 14:16:44.723106       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1102 14:16:44.725264       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	
	
	==> kube-proxy [1bf6d913ed0ed0c57e76ab04a24da61ba22e4c320129f7d86e19d3853a33081b] <==
	I1102 14:16:41.654794       1 server_linux.go:53] "Using iptables proxy"
	I1102 14:16:42.147632       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1102 14:16:42.251447       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1102 14:16:42.251502       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1102 14:16:42.251645       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1102 14:16:42.295103       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1102 14:16:42.295169       1 server_linux.go:132] "Using iptables Proxier"
	I1102 14:16:42.300581       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1102 14:16:42.300981       1 server.go:527] "Version info" version="v1.34.1"
	I1102 14:16:42.301008       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1102 14:16:42.303733       1 config.go:200] "Starting service config controller"
	I1102 14:16:42.303755       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1102 14:16:42.303771       1 config.go:106] "Starting endpoint slice config controller"
	I1102 14:16:42.303776       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1102 14:16:42.303793       1 config.go:403] "Starting serviceCIDR config controller"
	I1102 14:16:42.303797       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1102 14:16:42.308637       1 config.go:309] "Starting node config controller"
	I1102 14:16:42.308661       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1102 14:16:42.308671       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1102 14:16:42.405073       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1102 14:16:42.405197       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1102 14:16:42.405280       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [312cee2bec817cdd2e35981ea4410dfbe7dc6c1e95635e12a5f8648c6f301ff1] <==
	I1102 14:16:35.020896       1 serving.go:386] Generated self-signed cert in-memory
	I1102 14:16:41.651025       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1102 14:16:41.651123       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1102 14:16:41.679478       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1102 14:16:41.679635       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1102 14:16:41.679666       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1102 14:16:41.679695       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1102 14:16:41.690733       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1102 14:16:41.702333       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1102 14:16:41.702443       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1102 14:16:41.702484       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1102 14:16:41.782820       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1102 14:16:41.803180       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1102 14:16:41.803314       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 02 14:16:45 default-k8s-diff-port-786183 kubelet[806]: I1102 14:16:45.457870     806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/a6ab03d5-ff18-456d-9305-69166308109a-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-b2vjd\" (UID: \"a6ab03d5-ff18-456d-9305-69166308109a\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-b2vjd"
	Nov 02 14:16:45 default-k8s-diff-port-786183 kubelet[806]: W1102 14:16:45.701249     806 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/cf96e33bc393b6a3d6d55863f66411b90a67bf689beea00fe018be1f7c4b996e/crio-3d9a6a625199ea9da1c3db2e16b3e784cae68ba75343dc97713b09693ebec246 WatchSource:0}: Error finding container 3d9a6a625199ea9da1c3db2e16b3e784cae68ba75343dc97713b09693ebec246: Status 404 returned error can't find the container with id 3d9a6a625199ea9da1c3db2e16b3e784cae68ba75343dc97713b09693ebec246
	Nov 02 14:16:45 default-k8s-diff-port-786183 kubelet[806]: W1102 14:16:45.716514     806 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/cf96e33bc393b6a3d6d55863f66411b90a67bf689beea00fe018be1f7c4b996e/crio-b10946822e6ab58f40b565d9fd8c93b069f7a1dcc7f072642bb6a98796cec970 WatchSource:0}: Error finding container b10946822e6ab58f40b565d9fd8c93b069f7a1dcc7f072642bb6a98796cec970: Status 404 returned error can't find the container with id b10946822e6ab58f40b565d9fd8c93b069f7a1dcc7f072642bb6a98796cec970
	Nov 02 14:16:46 default-k8s-diff-port-786183 kubelet[806]: I1102 14:16:46.364967     806 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 02 14:16:53 default-k8s-diff-port-786183 kubelet[806]: I1102 14:16:53.292258     806 scope.go:117] "RemoveContainer" containerID="916f65f99582f5883092fbea95c3b3bc3b048b8af991c308a9b12ed1f15c4430"
	Nov 02 14:16:54 default-k8s-diff-port-786183 kubelet[806]: I1102 14:16:54.303635     806 scope.go:117] "RemoveContainer" containerID="916f65f99582f5883092fbea95c3b3bc3b048b8af991c308a9b12ed1f15c4430"
	Nov 02 14:16:54 default-k8s-diff-port-786183 kubelet[806]: I1102 14:16:54.304456     806 scope.go:117] "RemoveContainer" containerID="3bbc09f99857f45d85ce0127faed5622ac9ff8b5acb27aae6e0560fa7be0b98a"
	Nov 02 14:16:54 default-k8s-diff-port-786183 kubelet[806]: E1102 14:16:54.304735     806 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-5d5nz_kubernetes-dashboard(ff0353dc-dd91-431a-93ff-9b3e79e418c3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-5d5nz" podUID="ff0353dc-dd91-431a-93ff-9b3e79e418c3"
	Nov 02 14:16:55 default-k8s-diff-port-786183 kubelet[806]: I1102 14:16:55.311038     806 scope.go:117] "RemoveContainer" containerID="3bbc09f99857f45d85ce0127faed5622ac9ff8b5acb27aae6e0560fa7be0b98a"
	Nov 02 14:16:55 default-k8s-diff-port-786183 kubelet[806]: E1102 14:16:55.311201     806 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-5d5nz_kubernetes-dashboard(ff0353dc-dd91-431a-93ff-9b3e79e418c3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-5d5nz" podUID="ff0353dc-dd91-431a-93ff-9b3e79e418c3"
	Nov 02 14:16:56 default-k8s-diff-port-786183 kubelet[806]: I1102 14:16:56.317950     806 scope.go:117] "RemoveContainer" containerID="3bbc09f99857f45d85ce0127faed5622ac9ff8b5acb27aae6e0560fa7be0b98a"
	Nov 02 14:16:56 default-k8s-diff-port-786183 kubelet[806]: E1102 14:16:56.318113     806 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-5d5nz_kubernetes-dashboard(ff0353dc-dd91-431a-93ff-9b3e79e418c3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-5d5nz" podUID="ff0353dc-dd91-431a-93ff-9b3e79e418c3"
	Nov 02 14:17:08 default-k8s-diff-port-786183 kubelet[806]: I1102 14:17:08.891729     806 scope.go:117] "RemoveContainer" containerID="3bbc09f99857f45d85ce0127faed5622ac9ff8b5acb27aae6e0560fa7be0b98a"
	Nov 02 14:17:09 default-k8s-diff-port-786183 kubelet[806]: I1102 14:17:09.354398     806 scope.go:117] "RemoveContainer" containerID="4a101d9dab2131f43fbd1503d716e8c8293362237381ea752445b7f6af83d5b9"
	Nov 02 14:17:09 default-k8s-diff-port-786183 kubelet[806]: E1102 14:17:09.354804     806 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-5d5nz_kubernetes-dashboard(ff0353dc-dd91-431a-93ff-9b3e79e418c3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-5d5nz" podUID="ff0353dc-dd91-431a-93ff-9b3e79e418c3"
	Nov 02 14:17:09 default-k8s-diff-port-786183 kubelet[806]: I1102 14:17:09.355418     806 scope.go:117] "RemoveContainer" containerID="3bbc09f99857f45d85ce0127faed5622ac9ff8b5acb27aae6e0560fa7be0b98a"
	Nov 02 14:17:09 default-k8s-diff-port-786183 kubelet[806]: I1102 14:17:09.386447     806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-b2vjd" podStartSLOduration=11.654051759 podStartE2EDuration="24.386429732s" podCreationTimestamp="2025-11-02 14:16:45 +0000 UTC" firstStartedPulling="2025-11-02 14:16:45.720909068 +0000 UTC m=+15.979746182" lastFinishedPulling="2025-11-02 14:16:58.453287042 +0000 UTC m=+28.712124155" observedRunningTime="2025-11-02 14:16:59.340591638 +0000 UTC m=+29.599428760" watchObservedRunningTime="2025-11-02 14:17:09.386429732 +0000 UTC m=+39.645266854"
	Nov 02 14:17:11 default-k8s-diff-port-786183 kubelet[806]: I1102 14:17:11.362660     806 scope.go:117] "RemoveContainer" containerID="9fd9ef276f481fcadafbda761a8afadc807a2f1415cbc32fad53c6cf98b7595e"
	Nov 02 14:17:15 default-k8s-diff-port-786183 kubelet[806]: I1102 14:17:15.649710     806 scope.go:117] "RemoveContainer" containerID="4a101d9dab2131f43fbd1503d716e8c8293362237381ea752445b7f6af83d5b9"
	Nov 02 14:17:15 default-k8s-diff-port-786183 kubelet[806]: E1102 14:17:15.650344     806 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-5d5nz_kubernetes-dashboard(ff0353dc-dd91-431a-93ff-9b3e79e418c3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-5d5nz" podUID="ff0353dc-dd91-431a-93ff-9b3e79e418c3"
	Nov 02 14:17:26 default-k8s-diff-port-786183 kubelet[806]: I1102 14:17:26.891990     806 scope.go:117] "RemoveContainer" containerID="4a101d9dab2131f43fbd1503d716e8c8293362237381ea752445b7f6af83d5b9"
	Nov 02 14:17:26 default-k8s-diff-port-786183 kubelet[806]: E1102 14:17:26.892616     806 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-5d5nz_kubernetes-dashboard(ff0353dc-dd91-431a-93ff-9b3e79e418c3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-5d5nz" podUID="ff0353dc-dd91-431a-93ff-9b3e79e418c3"
	Nov 02 14:17:30 default-k8s-diff-port-786183 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 02 14:17:30 default-k8s-diff-port-786183 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 02 14:17:30 default-k8s-diff-port-786183 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
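	# The systemd stop above is the pause flow halting kubelet. A minimal check of
	# the unit state from the host, as a sketch (profile name from this run):
	#   minikube ssh -p default-k8s-diff-port-786183 -- sudo systemctl is-active kubelet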
	
	
	==> kubernetes-dashboard [83f8cc1376a117417d77c447e6abdf61ce016b69c382db2babe916d2eb2ab76b] <==
	2025/11/02 14:16:58 Using namespace: kubernetes-dashboard
	2025/11/02 14:16:58 Using in-cluster config to connect to apiserver
	2025/11/02 14:16:58 Using secret token for csrf signing
	2025/11/02 14:16:58 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/02 14:16:58 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/02 14:16:58 Successful initial request to the apiserver, version: v1.34.1
	2025/11/02 14:16:58 Generating JWE encryption key
	2025/11/02 14:16:58 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/02 14:16:58 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/02 14:16:58 Initializing JWE encryption key from synchronized object
	2025/11/02 14:16:58 Creating in-cluster Sidecar client
	2025/11/02 14:16:58 Serving insecurely on HTTP port: 9090
	2025/11/02 14:16:58 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/02 14:17:28 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/02 14:16:58 Starting overwatch
	
	
	==> storage-provisioner [9fd9ef276f481fcadafbda761a8afadc807a2f1415cbc32fad53c6cf98b7595e] <==
	I1102 14:16:40.993901       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1102 14:17:10.996107       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [ca0fa38afc419a430fa931e4b8bb43eec21c0571611ee490561556d416005aec] <==
	I1102 14:17:11.466789       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1102 14:17:11.487505       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1102 14:17:11.487560       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1102 14:17:11.492740       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:17:14.947681       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:17:19.207567       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:17:22.806004       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:17:25.864230       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:17:28.886323       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:17:28.892192       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1102 14:17:28.892405       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1102 14:17:28.892592       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-786183_aedb59e0-9cb9-4033-8337-c662da023eb4!
	I1102 14:17:28.899226       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f08a39b4-71c9-422d-9d61-86036126fe6f", APIVersion:"v1", ResourceVersion:"656", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-786183_aedb59e0-9cb9-4033-8337-c662da023eb4 became leader
	W1102 14:17:28.899423       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:17:28.910782       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1102 14:17:28.995384       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-786183_aedb59e0-9cb9-4033-8337-c662da023eb4!
	W1102 14:17:30.913658       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:17:30.918412       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:17:32.922339       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:17:32.927234       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
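	# The deprecation warnings come from the provisioner's leader election, which
	# still locks on a v1 Endpoints object. A sketch for inspecting that lock
	# (object name taken from the leader-election event above):
	#   kubectl --context default-k8s-diff-port-786183 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml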
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-786183 -n default-k8s-diff-port-786183
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-786183 -n default-k8s-diff-port-786183: exit status 2 (469.276084ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-786183 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
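A manual re-run of the status probe above, as a sketch (same binary and profile as this run; the exit-code echo is an addition):

	out/minikube-linux-arm64 status --format='{{.APIServer}}' -p default-k8s-diff-port-786183; echo "exit=$?"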
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-786183
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-786183:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "cf96e33bc393b6a3d6d55863f66411b90a67bf689beea00fe018be1f7c4b996e",
	        "Created": "2025-11-02T14:14:40.799097955Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 496615,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-02T14:16:22.150016569Z",
	            "FinishedAt": "2025-11-02T14:16:21.322466151Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/cf96e33bc393b6a3d6d55863f66411b90a67bf689beea00fe018be1f7c4b996e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/cf96e33bc393b6a3d6d55863f66411b90a67bf689beea00fe018be1f7c4b996e/hostname",
	        "HostsPath": "/var/lib/docker/containers/cf96e33bc393b6a3d6d55863f66411b90a67bf689beea00fe018be1f7c4b996e/hosts",
	        "LogPath": "/var/lib/docker/containers/cf96e33bc393b6a3d6d55863f66411b90a67bf689beea00fe018be1f7c4b996e/cf96e33bc393b6a3d6d55863f66411b90a67bf689beea00fe018be1f7c4b996e-json.log",
	        "Name": "/default-k8s-diff-port-786183",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-786183:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-786183",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "cf96e33bc393b6a3d6d55863f66411b90a67bf689beea00fe018be1f7c4b996e",
	                "LowerDir": "/var/lib/docker/overlay2/0cc4afb15e6b9077b7bd9ca2486b4ac42578c2a830d4d2dac09a5efd27fb8673-init/diff:/var/lib/docker/overlay2/53058afe6acd7391f639f551e07f446f6a39a991b699aea18507cb21a1652e97/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0cc4afb15e6b9077b7bd9ca2486b4ac42578c2a830d4d2dac09a5efd27fb8673/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0cc4afb15e6b9077b7bd9ca2486b4ac42578c2a830d4d2dac09a5efd27fb8673/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0cc4afb15e6b9077b7bd9ca2486b4ac42578c2a830d4d2dac09a5efd27fb8673/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-786183",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-786183/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-786183",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-786183",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-786183",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "11b00f83d16db4f0c0b86e906c4dec09edd6d70b7dfe467bcf0592ddad6aa96f",
	            "SandboxKey": "/var/run/docker/netns/11b00f83d16d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33456"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33457"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33460"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33458"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33459"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-786183": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "22:21:1c:bb:67:d6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "eb820b490718d17822d92cba10b61f1c2ec01866da1013536864f8ac5224c699",
	                    "EndpointID": "318ba17067f423805d27413ba08281f97485626f7e6dd1c198d1566d49927574",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-786183",
	                        "cf96e33bc393"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
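The JSON above can be narrowed to just the fields the pause check cares about with a Go-template format string, e.g. (a sketch; field paths match the output above):

	docker inspect -f 'status={{.State.Status}} paused={{.State.Paused}}' default-k8s-diff-port-786183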
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-786183 -n default-k8s-diff-port-786183
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-786183 -n default-k8s-diff-port-786183: exit status 2 (414.200283ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-786183 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-786183 logs -n 25: (1.512730076s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ no-preload-150469 image list --format=json                                                                                                                                                                                                    │ no-preload-150469            │ jenkins │ v1.37.0 │ 02 Nov 25 14:14 UTC │ 02 Nov 25 14:14 UTC │
	│ pause   │ -p no-preload-150469 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-150469            │ jenkins │ v1.37.0 │ 02 Nov 25 14:14 UTC │                     │
	│ delete  │ -p no-preload-150469                                                                                                                                                                                                                          │ no-preload-150469            │ jenkins │ v1.37.0 │ 02 Nov 25 14:14 UTC │ 02 Nov 25 14:14 UTC │
	│ delete  │ -p no-preload-150469                                                                                                                                                                                                                          │ no-preload-150469            │ jenkins │ v1.37.0 │ 02 Nov 25 14:14 UTC │ 02 Nov 25 14:14 UTC │
	│ delete  │ -p disable-driver-mounts-720030                                                                                                                                                                                                               │ disable-driver-mounts-720030 │ jenkins │ v1.37.0 │ 02 Nov 25 14:14 UTC │ 02 Nov 25 14:14 UTC │
	│ start   │ -p default-k8s-diff-port-786183 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-786183 │ jenkins │ v1.37.0 │ 02 Nov 25 14:14 UTC │ 02 Nov 25 14:15 UTC │
	│ addons  │ enable metrics-server -p embed-certs-955646 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-955646           │ jenkins │ v1.37.0 │ 02 Nov 25 14:15 UTC │                     │
	│ stop    │ -p embed-certs-955646 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-955646           │ jenkins │ v1.37.0 │ 02 Nov 25 14:15 UTC │ 02 Nov 25 14:15 UTC │
	│ addons  │ enable dashboard -p embed-certs-955646 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-955646           │ jenkins │ v1.37.0 │ 02 Nov 25 14:15 UTC │ 02 Nov 25 14:15 UTC │
	│ start   │ -p embed-certs-955646 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-955646           │ jenkins │ v1.37.0 │ 02 Nov 25 14:15 UTC │ 02 Nov 25 14:16 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-786183 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-786183 │ jenkins │ v1.37.0 │ 02 Nov 25 14:16 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-786183 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-786183 │ jenkins │ v1.37.0 │ 02 Nov 25 14:16 UTC │ 02 Nov 25 14:16 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-786183 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-786183 │ jenkins │ v1.37.0 │ 02 Nov 25 14:16 UTC │ 02 Nov 25 14:16 UTC │
	│ start   │ -p default-k8s-diff-port-786183 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-786183 │ jenkins │ v1.37.0 │ 02 Nov 25 14:16 UTC │ 02 Nov 25 14:17 UTC │
	│ image   │ embed-certs-955646 image list --format=json                                                                                                                                                                                                   │ embed-certs-955646           │ jenkins │ v1.37.0 │ 02 Nov 25 14:16 UTC │ 02 Nov 25 14:16 UTC │
	│ pause   │ -p embed-certs-955646 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-955646           │ jenkins │ v1.37.0 │ 02 Nov 25 14:16 UTC │                     │
	│ delete  │ -p embed-certs-955646                                                                                                                                                                                                                         │ embed-certs-955646           │ jenkins │ v1.37.0 │ 02 Nov 25 14:16 UTC │ 02 Nov 25 14:16 UTC │
	│ delete  │ -p embed-certs-955646                                                                                                                                                                                                                         │ embed-certs-955646           │ jenkins │ v1.37.0 │ 02 Nov 25 14:16 UTC │ 02 Nov 25 14:16 UTC │
	│ start   │ -p newest-cni-352233 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-352233            │ jenkins │ v1.37.0 │ 02 Nov 25 14:16 UTC │ 02 Nov 25 14:17 UTC │
	│ addons  │ enable metrics-server -p newest-cni-352233 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-352233            │ jenkins │ v1.37.0 │ 02 Nov 25 14:17 UTC │                     │
	│ stop    │ -p newest-cni-352233 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-352233            │ jenkins │ v1.37.0 │ 02 Nov 25 14:17 UTC │ 02 Nov 25 14:17 UTC │
	│ addons  │ enable dashboard -p newest-cni-352233 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-352233            │ jenkins │ v1.37.0 │ 02 Nov 25 14:17 UTC │ 02 Nov 25 14:17 UTC │
	│ start   │ -p newest-cni-352233 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-352233            │ jenkins │ v1.37.0 │ 02 Nov 25 14:17 UTC │                     │
	│ image   │ default-k8s-diff-port-786183 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-786183 │ jenkins │ v1.37.0 │ 02 Nov 25 14:17 UTC │ 02 Nov 25 14:17 UTC │
	│ pause   │ -p default-k8s-diff-port-786183 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-786183 │ jenkins │ v1.37.0 │ 02 Nov 25 14:17 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/02 14:17:29
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1102 14:17:29.539833  503184 out.go:360] Setting OutFile to fd 1 ...
	I1102 14:17:29.540010  503184 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 14:17:29.540070  503184 out.go:374] Setting ErrFile to fd 2...
	I1102 14:17:29.540079  503184 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 14:17:29.540467  503184 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-293314/.minikube/bin
	I1102 14:17:29.540939  503184 out.go:368] Setting JSON to false
	I1102 14:17:29.541872  503184 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10802,"bootTime":1762082248,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1102 14:17:29.541952  503184 start.go:143] virtualization:  
	I1102 14:17:29.546081  503184 out.go:179] * [newest-cni-352233] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1102 14:17:29.548553  503184 out.go:179]   - MINIKUBE_LOCATION=21808
	I1102 14:17:29.548760  503184 notify.go:221] Checking for updates...
	I1102 14:17:29.554236  503184 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1102 14:17:29.557305  503184 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-293314/kubeconfig
	I1102 14:17:29.561055  503184 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-293314/.minikube
	I1102 14:17:29.564062  503184 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1102 14:17:29.567037  503184 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1102 14:17:29.570343  503184 config.go:182] Loaded profile config "newest-cni-352233": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 14:17:29.571020  503184 driver.go:422] Setting default libvirt URI to qemu:///system
	I1102 14:17:29.621681  503184 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1102 14:17:29.621796  503184 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 14:17:29.727988  503184 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-02 14:17:29.717591752 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1102 14:17:29.728092  503184 docker.go:319] overlay module found
	I1102 14:17:29.731064  503184 out.go:179] * Using the docker driver based on existing profile
	I1102 14:17:29.736126  503184 start.go:309] selected driver: docker
	I1102 14:17:29.736146  503184 start.go:930] validating driver "docker" against &{Name:newest-cni-352233 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-352233 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1102 14:17:29.736238  503184 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1102 14:17:29.736888  503184 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 14:17:29.814582  503184 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-02 14:17:29.802583381 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1102 14:17:29.814967  503184 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1102 14:17:29.814994  503184 cni.go:84] Creating CNI manager for ""
	I1102 14:17:29.815049  503184 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1102 14:17:29.815078  503184 start.go:353] cluster config:
	{Name:newest-cni-352233 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-352233 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1102 14:17:29.818309  503184 out.go:179] * Starting "newest-cni-352233" primary control-plane node in "newest-cni-352233" cluster
	I1102 14:17:29.821082  503184 cache.go:124] Beginning downloading kic base image for docker with crio
	I1102 14:17:29.823906  503184 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1102 14:17:29.826690  503184 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1102 14:17:29.826736  503184 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1102 14:17:29.826741  503184 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21808-293314/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1102 14:17:29.826772  503184 cache.go:59] Caching tarball of preloaded images
	I1102 14:17:29.826858  503184 preload.go:233] Found /home/jenkins/minikube-integration/21808-293314/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1102 14:17:29.826867  503184 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1102 14:17:29.826995  503184 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/newest-cni-352233/config.json ...
	I1102 14:17:29.854097  503184 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1102 14:17:29.854138  503184 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1102 14:17:29.854157  503184 cache.go:233] Successfully downloaded all kic artifacts
	I1102 14:17:29.854184  503184 start.go:360] acquireMachinesLock for newest-cni-352233: {Name:mk656133c677274089939931d0ae5b5b59bd0afb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1102 14:17:29.854534  503184 start.go:364] duration metric: took 289.538µs to acquireMachinesLock for "newest-cni-352233"
	I1102 14:17:29.854975  503184 start.go:96] Skipping create...Using existing machine configuration
	I1102 14:17:29.854997  503184 fix.go:54] fixHost starting: 
	I1102 14:17:29.855345  503184 cli_runner.go:164] Run: docker container inspect newest-cni-352233 --format={{.State.Status}}
	I1102 14:17:29.878904  503184 fix.go:112] recreateIfNeeded on newest-cni-352233: state=Stopped err=<nil>
	W1102 14:17:29.878947  503184 fix.go:138] unexpected machine state, will restart: <nil>
	I1102 14:17:29.882503  503184 out.go:252] * Restarting existing docker container for "newest-cni-352233" ...
	I1102 14:17:29.882601  503184 cli_runner.go:164] Run: docker start newest-cni-352233
	I1102 14:17:30.291543  503184 cli_runner.go:164] Run: docker container inspect newest-cni-352233 --format={{.State.Status}}
	I1102 14:17:30.339878  503184 kic.go:430] container "newest-cni-352233" state is running.
	I1102 14:17:30.340249  503184 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-352233
	I1102 14:17:30.384997  503184 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/newest-cni-352233/config.json ...
	I1102 14:17:30.385249  503184 machine.go:94] provisionDockerMachine start ...
	I1102 14:17:30.385314  503184 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-352233
	I1102 14:17:30.413698  503184 main.go:143] libmachine: Using SSH client type: native
	I1102 14:17:30.414096  503184 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33466 <nil> <nil>}
	I1102 14:17:30.414109  503184 main.go:143] libmachine: About to run SSH command:
	hostname
	I1102 14:17:30.414817  503184 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1102 14:17:33.575022  503184 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-352233
	
	I1102 14:17:33.575044  503184 ubuntu.go:182] provisioning hostname "newest-cni-352233"
	I1102 14:17:33.575104  503184 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-352233
	I1102 14:17:33.599339  503184 main.go:143] libmachine: Using SSH client type: native
	I1102 14:17:33.599928  503184 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33466 <nil> <nil>}
	I1102 14:17:33.599945  503184 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-352233 && echo "newest-cni-352233" | sudo tee /etc/hostname
	I1102 14:17:33.773202  503184 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-352233
	
	I1102 14:17:33.773294  503184 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-352233
	I1102 14:17:33.796251  503184 main.go:143] libmachine: Using SSH client type: native
	I1102 14:17:33.796552  503184 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33466 <nil> <nil>}
	I1102 14:17:33.796575  503184 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-352233' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-352233/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-352233' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1102 14:17:33.981361  503184 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1102 14:17:33.981396  503184 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21808-293314/.minikube CaCertPath:/home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21808-293314/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21808-293314/.minikube}
	I1102 14:17:33.981464  503184 ubuntu.go:190] setting up certificates
	I1102 14:17:33.981475  503184 provision.go:84] configureAuth start
	I1102 14:17:33.981547  503184 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-352233
	I1102 14:17:34.002456  503184 provision.go:143] copyHostCerts
	I1102 14:17:34.002520  503184 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-293314/.minikube/ca.pem, removing ...
	I1102 14:17:34.002536  503184 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-293314/.minikube/ca.pem
	I1102 14:17:34.002644  503184 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21808-293314/.minikube/ca.pem (1082 bytes)
	I1102 14:17:34.002748  503184 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-293314/.minikube/cert.pem, removing ...
	I1102 14:17:34.002754  503184 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-293314/.minikube/cert.pem
	I1102 14:17:34.002782  503184 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21808-293314/.minikube/cert.pem (1123 bytes)
	I1102 14:17:34.002833  503184 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-293314/.minikube/key.pem, removing ...
	I1102 14:17:34.002837  503184 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-293314/.minikube/key.pem
	I1102 14:17:34.002860  503184 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21808-293314/.minikube/key.pem (1675 bytes)
	I1102 14:17:34.002903  503184 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21808-293314/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca-key.pem org=jenkins.newest-cni-352233 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-352233]
	I1102 14:17:34.281541  503184 provision.go:177] copyRemoteCerts
	I1102 14:17:34.281649  503184 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1102 14:17:34.281718  503184 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-352233
	I1102 14:17:34.307002  503184 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33466 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/newest-cni-352233/id_rsa Username:docker}
	I1102 14:17:34.419505  503184 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1102 14:17:34.454564  503184 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1102 14:17:34.475410  503184 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1102 14:17:34.494793  503184 provision.go:87] duration metric: took 513.301012ms to configureAuth
	I1102 14:17:34.494816  503184 ubuntu.go:206] setting minikube options for container-runtime
	I1102 14:17:34.495002  503184 config.go:182] Loaded profile config "newest-cni-352233": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 14:17:34.495105  503184 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-352233
	I1102 14:17:34.521616  503184 main.go:143] libmachine: Using SSH client type: native
	I1102 14:17:34.521929  503184 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33466 <nil> <nil>}
	I1102 14:17:34.521946  503184 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	
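The SSH command above writes CRIO_MINIKUBE_OPTIONS into a drop-in file and restarts CRI-O so the 10.96.0.0/12 service CIDR is treated as an insecure registry. A minimal way to verify the result from inside the node, assuming the profile name shown in this log and the minikube binary under test:

  out/minikube-linux-arm64 ssh -p newest-cni-352233 -- cat /etc/sysconfig/crio.minikube
  out/minikube-linux-arm64 ssh -p newest-cni-352233 -- sudo systemctl is-active crio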
	
	==> CRI-O <==
	Nov 02 14:17:11 default-k8s-diff-port-786183 crio[680]: time="2025-11-02T14:17:11.365108676Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=13f25ce4-40d6-4497-a898-2c59af7c7845 name=/runtime.v1.ImageService/ImageStatus
	Nov 02 14:17:11 default-k8s-diff-port-786183 crio[680]: time="2025-11-02T14:17:11.366295035Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=ebd3cb37-a009-43e6-a993-c1df0ab75503 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 14:17:11 default-k8s-diff-port-786183 crio[680]: time="2025-11-02T14:17:11.366402671Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 14:17:11 default-k8s-diff-port-786183 crio[680]: time="2025-11-02T14:17:11.375026942Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 14:17:11 default-k8s-diff-port-786183 crio[680]: time="2025-11-02T14:17:11.375223899Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/79c7009b0015cbbddc9f51cce2661a741f20fa95ffd7263ceae4ca2044b34079/merged/etc/passwd: no such file or directory"
	Nov 02 14:17:11 default-k8s-diff-port-786183 crio[680]: time="2025-11-02T14:17:11.375266099Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/79c7009b0015cbbddc9f51cce2661a741f20fa95ffd7263ceae4ca2044b34079/merged/etc/group: no such file or directory"
	Nov 02 14:17:11 default-k8s-diff-port-786183 crio[680]: time="2025-11-02T14:17:11.375559526Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 14:17:11 default-k8s-diff-port-786183 crio[680]: time="2025-11-02T14:17:11.411537674Z" level=info msg="Created container ca0fa38afc419a430fa931e4b8bb43eec21c0571611ee490561556d416005aec: kube-system/storage-provisioner/storage-provisioner" id=ebd3cb37-a009-43e6-a993-c1df0ab75503 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 14:17:11 default-k8s-diff-port-786183 crio[680]: time="2025-11-02T14:17:11.414131703Z" level=info msg="Starting container: ca0fa38afc419a430fa931e4b8bb43eec21c0571611ee490561556d416005aec" id=32635540-bc18-441c-b2a1-786dc75abcf2 name=/runtime.v1.RuntimeService/StartContainer
	Nov 02 14:17:11 default-k8s-diff-port-786183 crio[680]: time="2025-11-02T14:17:11.41679199Z" level=info msg="Started container" PID=1668 containerID=ca0fa38afc419a430fa931e4b8bb43eec21c0571611ee490561556d416005aec description=kube-system/storage-provisioner/storage-provisioner id=32635540-bc18-441c-b2a1-786dc75abcf2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=88f67abef6f33cd10f32199c167e726126f7f24507c111601e752d4e34b22c31
	Nov 02 14:17:20 default-k8s-diff-port-786183 crio[680]: time="2025-11-02T14:17:20.175968892Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 02 14:17:20 default-k8s-diff-port-786183 crio[680]: time="2025-11-02T14:17:20.181696345Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 02 14:17:20 default-k8s-diff-port-786183 crio[680]: time="2025-11-02T14:17:20.181866242Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 02 14:17:20 default-k8s-diff-port-786183 crio[680]: time="2025-11-02T14:17:20.181954054Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 02 14:17:20 default-k8s-diff-port-786183 crio[680]: time="2025-11-02T14:17:20.18706756Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 02 14:17:20 default-k8s-diff-port-786183 crio[680]: time="2025-11-02T14:17:20.187103754Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 02 14:17:20 default-k8s-diff-port-786183 crio[680]: time="2025-11-02T14:17:20.187127286Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 02 14:17:20 default-k8s-diff-port-786183 crio[680]: time="2025-11-02T14:17:20.190263787Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 02 14:17:20 default-k8s-diff-port-786183 crio[680]: time="2025-11-02T14:17:20.190348465Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 02 14:17:20 default-k8s-diff-port-786183 crio[680]: time="2025-11-02T14:17:20.190379571Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 02 14:17:20 default-k8s-diff-port-786183 crio[680]: time="2025-11-02T14:17:20.193708402Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 02 14:17:20 default-k8s-diff-port-786183 crio[680]: time="2025-11-02T14:17:20.19374023Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 02 14:17:20 default-k8s-diff-port-786183 crio[680]: time="2025-11-02T14:17:20.193765297Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 02 14:17:20 default-k8s-diff-port-786183 crio[680]: time="2025-11-02T14:17:20.207108182Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 02 14:17:20 default-k8s-diff-port-786183 crio[680]: time="2025-11-02T14:17:20.207316159Z" level=info msg="Updated default CNI network name to kindnet"
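The CRI-O entries above are the runtime's journal on the default-k8s-diff-port-786183 node; the same stream can be tailed directly, assuming CRI-O runs under systemd in the node image:

  out/minikube-linux-arm64 ssh -p default-k8s-diff-port-786183 -- sudo journalctl -u crio --no-pager -n 25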
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	ca0fa38afc419       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           24 seconds ago       Running             storage-provisioner         2                   88f67abef6f33       storage-provisioner                                    kube-system
	4a101d9dab213       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           26 seconds ago       Exited              dashboard-metrics-scraper   2                   3d9a6a625199e       dashboard-metrics-scraper-6ffb444bf9-5d5nz             kubernetes-dashboard
	83f8cc1376a11       docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf   37 seconds ago       Running             kubernetes-dashboard        0                   b10946822e6ab       kubernetes-dashboard-855c9754f9-b2vjd                  kubernetes-dashboard
	769ecf1950a78       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                           55 seconds ago       Running             coredns                     1                   022462f687bad       coredns-66bc5c9577-lwp97                               kube-system
	1bf6d913ed0ed       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9                                           55 seconds ago       Running             kube-proxy                  1                   30ce3864cf9e0       kube-proxy-jlf8q                                       kube-system
	9fd9ef276f481       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                           55 seconds ago       Exited              storage-provisioner         1                   88f67abef6f33       storage-provisioner                                    kube-system
	4a42eeac35c46       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                           56 seconds ago       Running             kindnet-cni                 1                   7e593dd947a15       kindnet-pd47j                                          kube-system
	4eaca312ddc32       1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c                                           56 seconds ago       Running             busybox                     1                   f42b227ee3b77       busybox                                                default
	f6bef86f73f59       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196                                           About a minute ago   Running             kube-apiserver              1                   4dc301b3cd742       kube-apiserver-default-k8s-diff-port-786183            kube-system
	6d9b69e73df50       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e                                           About a minute ago   Running             etcd                        1                   a86685caa1aa4       etcd-default-k8s-diff-port-786183                      kube-system
	d53a1eafeb3bc       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a                                           About a minute ago   Running             kube-controller-manager     1                   7e82a12866edd       kube-controller-manager-default-k8s-diff-port-786183   kube-system
	312cee2bec817       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0                                           About a minute ago   Running             kube-scheduler              1                   9f34ceeb8afb9       kube-scheduler-default-k8s-diff-port-786183            kube-system
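The table above is CRI-level container state rather than kubectl output; it can be regenerated on the node with crictl, which ships in the minikube node image:

  out/minikube-linux-arm64 ssh -p default-k8s-diff-port-786183 -- sudo crictl ps -a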
	
	
	==> coredns [769ecf1950a7818c9246e6454bf6040a545b55f56f2b3971721d96cd5fe6397a] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:51463 - 12375 "HINFO IN 3440052553814858932.161808677061012789. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.017089078s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
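The dial timeouts against 10.96.0.1:443 show CoreDNS retrying the apiserver's ClusterIP while the control plane was still coming back after the restart; they stop once the reflectors sync. A sketch for pulling the same logs via kubectl, assuming the conventional k8s-app=kube-dns selector:

  kubectl --context default-k8s-diff-port-786183 -n kube-system logs -l k8s-app=kube-dns --tail=30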
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-786183
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-786183
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=170a9221ec214abbddb4c7cdac340516a92b239a
	                    minikube.k8s.io/name=default-k8s-diff-port-786183
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_02T14_15_09_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 02 Nov 2025 14:15:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-786183
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 02 Nov 2025 14:17:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 02 Nov 2025 14:17:09 +0000   Sun, 02 Nov 2025 14:15:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 02 Nov 2025 14:17:09 +0000   Sun, 02 Nov 2025 14:15:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 02 Nov 2025 14:17:09 +0000   Sun, 02 Nov 2025 14:15:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 02 Nov 2025 14:17:09 +0000   Sun, 02 Nov 2025 14:15:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-786183
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                0782cb70-5112-4773-81bc-acca336842b5
	  Boot ID:                    4c303cf4-c0b6-4c0d-aa1a-d7509feb15e7
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 coredns-66bc5c9577-lwp97                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m22s
	  kube-system                 etcd-default-k8s-diff-port-786183                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m27s
	  kube-system                 kindnet-pd47j                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m22s
	  kube-system                 kube-apiserver-default-k8s-diff-port-786183             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-786183    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                 kube-proxy-jlf8q                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 kube-scheduler-default-k8s-diff-port-786183             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m27s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-5d5nz              0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-b2vjd                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 2m20s              kube-proxy       
	  Normal   Starting                 53s                kube-proxy       
	  Normal   NodeHasSufficientPID     2m27s              kubelet          Node default-k8s-diff-port-786183 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 2m27s              kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m27s              kubelet          Node default-k8s-diff-port-786183 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m27s              kubelet          Node default-k8s-diff-port-786183 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 2m27s              kubelet          Starting kubelet.
	  Normal   RegisteredNode           2m23s              node-controller  Node default-k8s-diff-port-786183 event: Registered Node default-k8s-diff-port-786183 in Controller
	  Normal   NodeReady                100s               kubelet          Node default-k8s-diff-port-786183 status is now: NodeReady
	  Normal   Starting                 66s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 66s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  65s (x8 over 66s)  kubelet          Node default-k8s-diff-port-786183 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    65s (x8 over 66s)  kubelet          Node default-k8s-diff-port-786183 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     65s (x8 over 66s)  kubelet          Node default-k8s-diff-port-786183 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           51s                node-controller  Node default-k8s-diff-port-786183 event: Registered Node default-k8s-diff-port-786183 in Controller
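The section above matches the output format of kubectl describe node; to regenerate it against this cluster:

  kubectl --context default-k8s-diff-port-786183 describe node default-k8s-diff-port-786183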
	
	
	==> dmesg <==
	[  +3.515963] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:57] overlayfs: idmapped layers are currently not supported
	[ +24.836033] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:58] overlayfs: idmapped layers are currently not supported
	[ +23.362553] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:59] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:01] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:02] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:03] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:04] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:06] overlayfs: idmapped layers are currently not supported
	[ +50.469589] kauditd_printk_skb: 8 callbacks suppressed
	[Nov 2 14:07] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:08] overlayfs: idmapped layers are currently not supported
	[ +11.089512] overlayfs: idmapped layers are currently not supported
	[ +33.821233] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:09] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:10] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:11] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:13] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:14] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:15] overlayfs: idmapped layers are currently not supported
	[ +29.099512] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:16] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:17] overlayfs: idmapped layers are currently not supported
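The repeated overlayfs notices appear whenever an overlay mount requests idmapped layers, which this 5.15 kernel does not support; they are noise here rather than a failure. To filter them out when scanning the ring buffer:

  out/minikube-linux-arm64 ssh -p default-k8s-diff-port-786183 -- sudo dmesg | grep -v 'idmapped layers'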
	
	
	==> etcd [6d9b69e73df509198b2e29494a4484507c8a14cccb6a2b6302b756a3c2183899] <==
	{"level":"warn","ts":"2025-11-02T14:16:35.580094Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:16:35.604670Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47442","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:16:35.711349Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:16:35.754703Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:16:35.797212Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:16:35.843889Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47494","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:16:35.863957Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:16:35.919018Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:16:35.949658Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:16:36.044433Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:16:36.088103Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:16:36.175162Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:16:36.203705Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:16:36.275540Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:16:36.336557Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:16:36.353210Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:16:36.478712Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:47690","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-02T14:16:39.270076Z","caller":"traceutil/trace.go:172","msg":"trace[403876028] linearizableReadLoop","detail":"{readStateIndex:501; appliedIndex:502; }","duration":"101.257338ms","start":"2025-11-02T14:16:39.168794Z","end":"2025-11-02T14:16:39.270052Z","steps":["trace[403876028] 'read index received'  (duration: 101.249206ms)","trace[403876028] 'applied index is now lower than readState.Index'  (duration: 6.63µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-02T14:16:39.275390Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"106.56725ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:aggregate-to-edit\" limit:1 ","response":"range_response_count:1 size:2133"}
	{"level":"info","ts":"2025-11-02T14:16:39.275447Z","caller":"traceutil/trace.go:172","msg":"trace[863170845] range","detail":"{range_begin:/registry/clusterroles/system:aggregate-to-edit; range_end:; response_count:1; response_revision:472; }","duration":"106.645068ms","start":"2025-11-02T14:16:39.168790Z","end":"2025-11-02T14:16:39.275435Z","steps":["trace[863170845] 'agreement among raft nodes before linearized reading'  (duration: 106.43924ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-02T14:16:39.283015Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"114.152272ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/default/default-k8s-diff-port-786183.1874363d0677b235\" limit:1 ","response":"range_response_count:1 size:793"}
	{"level":"info","ts":"2025-11-02T14:16:39.283079Z","caller":"traceutil/trace.go:172","msg":"trace[1231220929] range","detail":"{range_begin:/registry/events/default/default-k8s-diff-port-786183.1874363d0677b235; range_end:; response_count:1; response_revision:472; }","duration":"114.221302ms","start":"2025-11-02T14:16:39.168838Z","end":"2025-11-02T14:16:39.283059Z","steps":["trace[1231220929] 'agreement among raft nodes before linearized reading'  (duration: 114.049041ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-02T14:16:39.395118Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"100.732245ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:aggregate-to-view\" limit:1 ","response":"range_response_count:1 size:2030"}
	{"level":"info","ts":"2025-11-02T14:16:39.395244Z","caller":"traceutil/trace.go:172","msg":"trace[1348302984] range","detail":"{range_begin:/registry/clusterroles/system:aggregate-to-view; range_end:; response_count:1; response_revision:472; }","duration":"100.869001ms","start":"2025-11-02T14:16:39.294361Z","end":"2025-11-02T14:16:39.395230Z","steps":["trace[1348302984] 'range keys from in-memory index tree'  (duration: 100.479434ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-02T14:16:39.395620Z","caller":"traceutil/trace.go:172","msg":"trace[526266452] transaction","detail":"{read_only:false; response_revision:473; number_of_response:1; }","duration":"101.114731ms","start":"2025-11-02T14:16:39.294494Z","end":"2025-11-02T14:16:39.395609Z","steps":["trace[526266452] 'process raft request'  (duration: 59.803294ms)","trace[526266452] 'compare'  (duration: 40.670397ms)"],"step_count":2}
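The "apply request took too long" warnings report ~100ms reads against a 100ms threshold, which usually points at a slow or contended disk on the CI host rather than a functional etcd failure. Endpoint health can be probed from inside the etcd pod; a sketch, with cert paths assumed from minikube's kubeadm cert directory (/var/lib/minikube/certs):

  kubectl --context default-k8s-diff-port-786183 -n kube-system exec etcd-default-k8s-diff-port-786183 -- \
    etcdctl --endpoints=https://127.0.0.1:2379 \
      --cacert=/var/lib/minikube/certs/etcd/ca.crt \
      --cert=/var/lib/minikube/certs/etcd/server.crt \
      --key=/var/lib/minikube/certs/etcd/server.key \
    endpoint health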
	
	
	==> kernel <==
	 14:17:35 up  3:00,  0 user,  load average: 3.58, 3.53, 3.04
	Linux default-k8s-diff-port-786183 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4a42eeac35c46466a833f6d9dd2f5560d7c96f116a6b52ffdf9b0a58425abe0f] <==
	I1102 14:16:39.863361       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1102 14:16:39.867737       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1102 14:16:39.867873       1 main.go:148] setting mtu 1500 for CNI 
	I1102 14:16:39.867886       1 main.go:178] kindnetd IP family: "ipv4"
	I1102 14:16:39.867901       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-02T14:16:40Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1102 14:16:40.187233       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1102 14:16:40.191879       1 controller.go:381] "Waiting for informer caches to sync"
	I1102 14:16:40.191977       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1102 14:16:40.192228       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1102 14:17:10.175741       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1102 14:17:10.187334       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1102 14:17:10.187437       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1102 14:17:10.192000       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1102 14:17:11.792365       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1102 14:17:11.792413       1 metrics.go:72] Registering metrics
	I1102 14:17:11.792470       1 controller.go:711] "Syncing nftables rules"
	I1102 14:17:20.175659       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1102 14:17:20.175699       1 main.go:301] handling current node
	I1102 14:17:30.183014       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1102 14:17:30.183054       1 main.go:301] handling current node
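kindnet's reflector timeouts mirror the CoreDNS ones above and clear once its caches sync at 14:17:11. The CNI config it writes, referenced in the CRI-O section, can be inspected directly on the node:

  out/minikube-linux-arm64 ssh -p default-k8s-diff-port-786183 -- sudo cat /etc/cni/net.d/10-kindnet.conflist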
	
	
	==> kube-apiserver [f6bef86f73f59250e354d0fdd9e49760329ba2e76d5a2c9140645b949b671c4d] <==
	I1102 14:16:38.727202       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1102 14:16:38.737736       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1102 14:16:38.738006       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1102 14:16:38.738055       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1102 14:16:38.738104       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1102 14:16:38.751079       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1102 14:16:38.751145       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1102 14:16:38.780359       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1102 14:16:38.782215       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1102 14:16:38.784644       1 aggregator.go:171] initial CRD sync complete...
	I1102 14:16:38.784697       1 autoregister_controller.go:144] Starting autoregister controller
	I1102 14:16:38.784705       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1102 14:16:38.784712       1 cache.go:39] Caches are synced for autoregister controller
	E1102 14:16:38.862284       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1102 14:16:38.994680       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1102 14:16:39.083686       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1102 14:16:40.312361       1 controller.go:667] quota admission added evaluator for: namespaces
	I1102 14:16:41.014171       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1102 14:16:41.255703       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1102 14:16:41.352376       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1102 14:16:41.787468       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.114.203"}
	I1102 14:16:41.861999       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.33.188"}
	I1102 14:16:44.911479       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1102 14:16:45.018230       1 controller.go:667] quota admission added evaluator for: endpoints
	I1102 14:16:45.138296       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [d53a1eafeb3bc7e2100e0bcf284f029edbffd71be60582127cbabe95881a86ac] <==
	I1102 14:16:44.660213       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1102 14:16:44.660257       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1102 14:16:44.660557       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1102 14:16:44.663914       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1102 14:16:44.677371       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1102 14:16:44.677401       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1102 14:16:44.677422       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1102 14:16:44.677724       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1102 14:16:44.677821       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-786183"
	I1102 14:16:44.677896       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1102 14:16:44.677441       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1102 14:16:44.689518       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1102 14:16:44.693025       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1102 14:16:44.693055       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1102 14:16:44.693062       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1102 14:16:44.701535       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1102 14:16:44.702945       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1102 14:16:44.702994       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1102 14:16:44.711347       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1102 14:16:44.711590       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1102 14:16:44.717054       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1102 14:16:44.717162       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1102 14:16:44.723047       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1102 14:16:44.723106       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1102 14:16:44.725264       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	
	
	==> kube-proxy [1bf6d913ed0ed0c57e76ab04a24da61ba22e4c320129f7d86e19d3853a33081b] <==
	I1102 14:16:41.654794       1 server_linux.go:53] "Using iptables proxy"
	I1102 14:16:42.147632       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1102 14:16:42.251447       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1102 14:16:42.251502       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1102 14:16:42.251645       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1102 14:16:42.295103       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1102 14:16:42.295169       1 server_linux.go:132] "Using iptables Proxier"
	I1102 14:16:42.300581       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1102 14:16:42.300981       1 server.go:527] "Version info" version="v1.34.1"
	I1102 14:16:42.301008       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1102 14:16:42.303733       1 config.go:200] "Starting service config controller"
	I1102 14:16:42.303755       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1102 14:16:42.303771       1 config.go:106] "Starting endpoint slice config controller"
	I1102 14:16:42.303776       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1102 14:16:42.303793       1 config.go:403] "Starting serviceCIDR config controller"
	I1102 14:16:42.303797       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1102 14:16:42.308637       1 config.go:309] "Starting node config controller"
	I1102 14:16:42.308661       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1102 14:16:42.308671       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1102 14:16:42.405073       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1102 14:16:42.405197       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1102 14:16:42.405280       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [312cee2bec817cdd2e35981ea4410dfbe7dc6c1e95635e12a5f8648c6f301ff1] <==
	I1102 14:16:35.020896       1 serving.go:386] Generated self-signed cert in-memory
	I1102 14:16:41.651025       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1102 14:16:41.651123       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1102 14:16:41.679478       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1102 14:16:41.679635       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1102 14:16:41.679666       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1102 14:16:41.679695       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1102 14:16:41.690733       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1102 14:16:41.702333       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1102 14:16:41.702443       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1102 14:16:41.702484       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1102 14:16:41.782820       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1102 14:16:41.803180       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1102 14:16:41.803314       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 02 14:16:45 default-k8s-diff-port-786183 kubelet[806]: I1102 14:16:45.457870     806 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/a6ab03d5-ff18-456d-9305-69166308109a-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-b2vjd\" (UID: \"a6ab03d5-ff18-456d-9305-69166308109a\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-b2vjd"
	Nov 02 14:16:45 default-k8s-diff-port-786183 kubelet[806]: W1102 14:16:45.701249     806 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/cf96e33bc393b6a3d6d55863f66411b90a67bf689beea00fe018be1f7c4b996e/crio-3d9a6a625199ea9da1c3db2e16b3e784cae68ba75343dc97713b09693ebec246 WatchSource:0}: Error finding container 3d9a6a625199ea9da1c3db2e16b3e784cae68ba75343dc97713b09693ebec246: Status 404 returned error can't find the container with id 3d9a6a625199ea9da1c3db2e16b3e784cae68ba75343dc97713b09693ebec246
	Nov 02 14:16:45 default-k8s-diff-port-786183 kubelet[806]: W1102 14:16:45.716514     806 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/cf96e33bc393b6a3d6d55863f66411b90a67bf689beea00fe018be1f7c4b996e/crio-b10946822e6ab58f40b565d9fd8c93b069f7a1dcc7f072642bb6a98796cec970 WatchSource:0}: Error finding container b10946822e6ab58f40b565d9fd8c93b069f7a1dcc7f072642bb6a98796cec970: Status 404 returned error can't find the container with id b10946822e6ab58f40b565d9fd8c93b069f7a1dcc7f072642bb6a98796cec970
	Nov 02 14:16:46 default-k8s-diff-port-786183 kubelet[806]: I1102 14:16:46.364967     806 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 02 14:16:53 default-k8s-diff-port-786183 kubelet[806]: I1102 14:16:53.292258     806 scope.go:117] "RemoveContainer" containerID="916f65f99582f5883092fbea95c3b3bc3b048b8af991c308a9b12ed1f15c4430"
	Nov 02 14:16:54 default-k8s-diff-port-786183 kubelet[806]: I1102 14:16:54.303635     806 scope.go:117] "RemoveContainer" containerID="916f65f99582f5883092fbea95c3b3bc3b048b8af991c308a9b12ed1f15c4430"
	Nov 02 14:16:54 default-k8s-diff-port-786183 kubelet[806]: I1102 14:16:54.304456     806 scope.go:117] "RemoveContainer" containerID="3bbc09f99857f45d85ce0127faed5622ac9ff8b5acb27aae6e0560fa7be0b98a"
	Nov 02 14:16:54 default-k8s-diff-port-786183 kubelet[806]: E1102 14:16:54.304735     806 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-5d5nz_kubernetes-dashboard(ff0353dc-dd91-431a-93ff-9b3e79e418c3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-5d5nz" podUID="ff0353dc-dd91-431a-93ff-9b3e79e418c3"
	Nov 02 14:16:55 default-k8s-diff-port-786183 kubelet[806]: I1102 14:16:55.311038     806 scope.go:117] "RemoveContainer" containerID="3bbc09f99857f45d85ce0127faed5622ac9ff8b5acb27aae6e0560fa7be0b98a"
	Nov 02 14:16:55 default-k8s-diff-port-786183 kubelet[806]: E1102 14:16:55.311201     806 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-5d5nz_kubernetes-dashboard(ff0353dc-dd91-431a-93ff-9b3e79e418c3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-5d5nz" podUID="ff0353dc-dd91-431a-93ff-9b3e79e418c3"
	Nov 02 14:16:56 default-k8s-diff-port-786183 kubelet[806]: I1102 14:16:56.317950     806 scope.go:117] "RemoveContainer" containerID="3bbc09f99857f45d85ce0127faed5622ac9ff8b5acb27aae6e0560fa7be0b98a"
	Nov 02 14:16:56 default-k8s-diff-port-786183 kubelet[806]: E1102 14:16:56.318113     806 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-5d5nz_kubernetes-dashboard(ff0353dc-dd91-431a-93ff-9b3e79e418c3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-5d5nz" podUID="ff0353dc-dd91-431a-93ff-9b3e79e418c3"
	Nov 02 14:17:08 default-k8s-diff-port-786183 kubelet[806]: I1102 14:17:08.891729     806 scope.go:117] "RemoveContainer" containerID="3bbc09f99857f45d85ce0127faed5622ac9ff8b5acb27aae6e0560fa7be0b98a"
	Nov 02 14:17:09 default-k8s-diff-port-786183 kubelet[806]: I1102 14:17:09.354398     806 scope.go:117] "RemoveContainer" containerID="4a101d9dab2131f43fbd1503d716e8c8293362237381ea752445b7f6af83d5b9"
	Nov 02 14:17:09 default-k8s-diff-port-786183 kubelet[806]: E1102 14:17:09.354804     806 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-5d5nz_kubernetes-dashboard(ff0353dc-dd91-431a-93ff-9b3e79e418c3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-5d5nz" podUID="ff0353dc-dd91-431a-93ff-9b3e79e418c3"
	Nov 02 14:17:09 default-k8s-diff-port-786183 kubelet[806]: I1102 14:17:09.355418     806 scope.go:117] "RemoveContainer" containerID="3bbc09f99857f45d85ce0127faed5622ac9ff8b5acb27aae6e0560fa7be0b98a"
	Nov 02 14:17:09 default-k8s-diff-port-786183 kubelet[806]: I1102 14:17:09.386447     806 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-b2vjd" podStartSLOduration=11.654051759 podStartE2EDuration="24.386429732s" podCreationTimestamp="2025-11-02 14:16:45 +0000 UTC" firstStartedPulling="2025-11-02 14:16:45.720909068 +0000 UTC m=+15.979746182" lastFinishedPulling="2025-11-02 14:16:58.453287042 +0000 UTC m=+28.712124155" observedRunningTime="2025-11-02 14:16:59.340591638 +0000 UTC m=+29.599428760" watchObservedRunningTime="2025-11-02 14:17:09.386429732 +0000 UTC m=+39.645266854"
	Nov 02 14:17:11 default-k8s-diff-port-786183 kubelet[806]: I1102 14:17:11.362660     806 scope.go:117] "RemoveContainer" containerID="9fd9ef276f481fcadafbda761a8afadc807a2f1415cbc32fad53c6cf98b7595e"
	Nov 02 14:17:15 default-k8s-diff-port-786183 kubelet[806]: I1102 14:17:15.649710     806 scope.go:117] "RemoveContainer" containerID="4a101d9dab2131f43fbd1503d716e8c8293362237381ea752445b7f6af83d5b9"
	Nov 02 14:17:15 default-k8s-diff-port-786183 kubelet[806]: E1102 14:17:15.650344     806 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-5d5nz_kubernetes-dashboard(ff0353dc-dd91-431a-93ff-9b3e79e418c3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-5d5nz" podUID="ff0353dc-dd91-431a-93ff-9b3e79e418c3"
	Nov 02 14:17:26 default-k8s-diff-port-786183 kubelet[806]: I1102 14:17:26.891990     806 scope.go:117] "RemoveContainer" containerID="4a101d9dab2131f43fbd1503d716e8c8293362237381ea752445b7f6af83d5b9"
	Nov 02 14:17:26 default-k8s-diff-port-786183 kubelet[806]: E1102 14:17:26.892616     806 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-5d5nz_kubernetes-dashboard(ff0353dc-dd91-431a-93ff-9b3e79e418c3)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-5d5nz" podUID="ff0353dc-dd91-431a-93ff-9b3e79e418c3"
	Nov 02 14:17:30 default-k8s-diff-port-786183 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 02 14:17:30 default-k8s-diff-port-786183 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 02 14:17:30 default-k8s-diff-port-786183 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	
	
	==> kubernetes-dashboard [83f8cc1376a117417d77c447e6abdf61ce016b69c382db2babe916d2eb2ab76b] <==
	2025/11/02 14:16:58 Using namespace: kubernetes-dashboard
	2025/11/02 14:16:58 Using in-cluster config to connect to apiserver
	2025/11/02 14:16:58 Using secret token for csrf signing
	2025/11/02 14:16:58 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/02 14:16:58 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/02 14:16:58 Successful initial request to the apiserver, version: v1.34.1
	2025/11/02 14:16:58 Generating JWE encryption key
	2025/11/02 14:16:58 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/02 14:16:58 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/02 14:16:58 Initializing JWE encryption key from synchronized object
	2025/11/02 14:16:58 Creating in-cluster Sidecar client
	2025/11/02 14:16:58 Serving insecurely on HTTP port: 9090
	2025/11/02 14:16:58 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/02 14:17:28 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/02 14:16:58 Starting overwatch
	
	
	==> storage-provisioner [9fd9ef276f481fcadafbda761a8afadc807a2f1415cbc32fad53c6cf98b7595e] <==
	I1102 14:16:40.993901       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1102 14:17:10.996107       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [ca0fa38afc419a430fa931e4b8bb43eec21c0571611ee490561556d416005aec] <==
	I1102 14:17:11.466789       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1102 14:17:11.487505       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1102 14:17:11.487560       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1102 14:17:11.492740       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:17:14.947681       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:17:19.207567       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:17:22.806004       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:17:25.864230       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:17:28.886323       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:17:28.892192       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1102 14:17:28.892405       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1102 14:17:28.892592       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-786183_aedb59e0-9cb9-4033-8337-c662da023eb4!
	I1102 14:17:28.899226       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f08a39b4-71c9-422d-9d61-86036126fe6f", APIVersion:"v1", ResourceVersion:"656", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-786183_aedb59e0-9cb9-4033-8337-c662da023eb4 became leader
	W1102 14:17:28.899423       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:17:28.910782       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1102 14:17:28.995384       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-786183_aedb59e0-9cb9-4033-8337-c662da023eb4!
	W1102 14:17:30.913658       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:17:30.918412       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:17:32.922339       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:17:32.927234       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:17:34.930888       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1102 14:17:34.942609       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
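(Editor's note) The kube-proxy log above flags "nodePortAddresses is unset" and suggests `--nodeport-addresses primary`. The flag name and value come straight from that warning; the remediation below is only a hedged sketch for a kubeadm-style cluster, where the KubeProxyConfiguration conventionally lives in the kube-proxy ConfigMap, and was not run as part of this job:

	# Assumed location: kubeadm keeps KubeProxyConfiguration in this ConfigMap (key "config.conf").
	kubectl --context default-k8s-diff-port-786183 -n kube-system edit configmap kube-proxy
	# Inside the KubeProxyConfiguration document, restrict NodePorts to the node's primary IP:
	#   nodePortAddresses: ["primary"]
	# kube-proxy only reads this at startup, so its pod must be restarted to pick up the change.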
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-786183 -n default-k8s-diff-port-786183
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-786183 -n default-k8s-diff-port-786183: exit status 2 (443.150432ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-786183 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (6.89s)
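(Editor's note) The repeated client-go warnings in the storage-provisioner log come from its Endpoints-based leader election on kube-system/k8s.io-minikube-hostpath; per the warning text, the API server flags every v1 Endpoints request in v1.33+. A hedged way to inspect the discovery.k8s.io/v1 EndpointSlice objects the warning points at (illustrative command, not harness output):

	kubectl --context default-k8s-diff-port-786183 get endpointslices.discovery.k8s.io -A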

TestStartStop/group/newest-cni/serial/Pause (7.67s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-352233 --alsologtostderr -v=1
E1102 14:17:50.129413  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/functional-082350/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p newest-cni-352233 --alsologtostderr -v=1: exit status 80 (2.372526732s)

-- stdout --
	* Pausing node newest-cni-352233 ... 
	
	

-- /stdout --
** stderr ** 
	I1102 14:17:49.664515  507415 out.go:360] Setting OutFile to fd 1 ...
	I1102 14:17:49.664736  507415 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 14:17:49.664760  507415 out.go:374] Setting ErrFile to fd 2...
	I1102 14:17:49.664779  507415 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 14:17:49.665064  507415 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-293314/.minikube/bin
	I1102 14:17:49.665489  507415 out.go:368] Setting JSON to false
	I1102 14:17:49.665541  507415 mustload.go:66] Loading cluster: newest-cni-352233
	I1102 14:17:49.665955  507415 config.go:182] Loaded profile config "newest-cni-352233": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 14:17:49.666470  507415 cli_runner.go:164] Run: docker container inspect newest-cni-352233 --format={{.State.Status}}
	I1102 14:17:49.683916  507415 host.go:66] Checking if "newest-cni-352233" exists ...
	I1102 14:17:49.684219  507415 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 14:17:49.800105  507415 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:true NGoroutines:63 SystemTime:2025-11-02 14:17:49.790934237 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1102 14:17:49.800801  507415 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21800/minikube-v1.37.0-1761658712-21800-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1761658712-21800/minikube-v1.37.0-1761658712-21800-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1761658712-21800-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-352233 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1102 14:17:49.805016  507415 out.go:179] * Pausing node newest-cni-352233 ... 
	I1102 14:17:49.808904  507415 host.go:66] Checking if "newest-cni-352233" exists ...
	I1102 14:17:49.809225  507415 ssh_runner.go:195] Run: systemctl --version
	I1102 14:17:49.809266  507415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-352233
	I1102 14:17:49.870828  507415 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33466 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/newest-cni-352233/id_rsa Username:docker}
	I1102 14:17:49.982436  507415 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1102 14:17:49.995844  507415 pause.go:52] kubelet running: true
	I1102 14:17:49.995907  507415 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1102 14:17:50.529976  507415 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1102 14:17:50.530065  507415 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1102 14:17:50.729716  507415 cri.go:89] found id: "b09745ad56a666ec68ba8c088ae630cdadc73e5c7ee08f53db30186570e7ab63"
	I1102 14:17:50.729743  507415 cri.go:89] found id: "0758ad546409c66b98af0fa14dc53e3893a7ce5029245e7776e9ae8c51ed9fde"
	I1102 14:17:50.729754  507415 cri.go:89] found id: "5f1604a94619feed1ff5c1e5544c063d94fd2c34695705ec31d387e5efb1050c"
	I1102 14:17:50.729759  507415 cri.go:89] found id: "e7f0e8be97d4590df69fdb907f0a84b9f17b797f501da038a7715f96824ef2cb"
	I1102 14:17:50.729762  507415 cri.go:89] found id: "0604c0db8643e3b16f14589b7b9f60583c1a4a8b0e341130d1a39881dde8b5f7"
	I1102 14:17:50.729767  507415 cri.go:89] found id: "63723b5d8f835727bc7bf49c43e0299d139270a34c176299be3ee3d496669f3e"
	I1102 14:17:50.729770  507415 cri.go:89] found id: ""
	I1102 14:17:50.729817  507415 ssh_runner.go:195] Run: sudo runc list -f json
	I1102 14:17:50.743753  507415 retry.go:31] will retry after 281.931505ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T14:17:50Z" level=error msg="open /run/runc: no such file or directory"
	I1102 14:17:51.026796  507415 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1102 14:17:51.045236  507415 pause.go:52] kubelet running: false
	I1102 14:17:51.045295  507415 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1102 14:17:51.234969  507415 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1102 14:17:51.235035  507415 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1102 14:17:51.316814  507415 cri.go:89] found id: "b09745ad56a666ec68ba8c088ae630cdadc73e5c7ee08f53db30186570e7ab63"
	I1102 14:17:51.316833  507415 cri.go:89] found id: "0758ad546409c66b98af0fa14dc53e3893a7ce5029245e7776e9ae8c51ed9fde"
	I1102 14:17:51.316839  507415 cri.go:89] found id: "5f1604a94619feed1ff5c1e5544c063d94fd2c34695705ec31d387e5efb1050c"
	I1102 14:17:51.316843  507415 cri.go:89] found id: "e7f0e8be97d4590df69fdb907f0a84b9f17b797f501da038a7715f96824ef2cb"
	I1102 14:17:51.316846  507415 cri.go:89] found id: "0604c0db8643e3b16f14589b7b9f60583c1a4a8b0e341130d1a39881dde8b5f7"
	I1102 14:17:51.316849  507415 cri.go:89] found id: "63723b5d8f835727bc7bf49c43e0299d139270a34c176299be3ee3d496669f3e"
	I1102 14:17:51.316852  507415 cri.go:89] found id: ""
	I1102 14:17:51.316901  507415 ssh_runner.go:195] Run: sudo runc list -f json
	I1102 14:17:51.327830  507415 retry.go:31] will retry after 278.229365ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T14:17:51Z" level=error msg="open /run/runc: no such file or directory"
	I1102 14:17:51.607157  507415 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1102 14:17:51.629109  507415 pause.go:52] kubelet running: false
	I1102 14:17:51.629179  507415 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1102 14:17:51.815094  507415 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1102 14:17:51.815174  507415 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1102 14:17:51.914252  507415 cri.go:89] found id: "b09745ad56a666ec68ba8c088ae630cdadc73e5c7ee08f53db30186570e7ab63"
	I1102 14:17:51.914271  507415 cri.go:89] found id: "0758ad546409c66b98af0fa14dc53e3893a7ce5029245e7776e9ae8c51ed9fde"
	I1102 14:17:51.914275  507415 cri.go:89] found id: "5f1604a94619feed1ff5c1e5544c063d94fd2c34695705ec31d387e5efb1050c"
	I1102 14:17:51.914279  507415 cri.go:89] found id: "e7f0e8be97d4590df69fdb907f0a84b9f17b797f501da038a7715f96824ef2cb"
	I1102 14:17:51.914283  507415 cri.go:89] found id: "0604c0db8643e3b16f14589b7b9f60583c1a4a8b0e341130d1a39881dde8b5f7"
	I1102 14:17:51.914286  507415 cri.go:89] found id: "63723b5d8f835727bc7bf49c43e0299d139270a34c176299be3ee3d496669f3e"
	I1102 14:17:51.914310  507415 cri.go:89] found id: ""
	I1102 14:17:51.914355  507415 ssh_runner.go:195] Run: sudo runc list -f json
	I1102 14:17:51.938415  507415 out.go:203] 
	W1102 14:17:51.941737  507415 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T14:17:51Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T14:17:51Z" level=error msg="open /run/runc: no such file or directory"
	
	W1102 14:17:51.941762  507415 out.go:285] * 
	* 
	W1102 14:17:51.949002  507415 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1102 14:17:51.951848  507415 out.go:203] 

** /stderr **
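(Editor's note) The exit status 80 above is minikube's GUEST_PAUSE error path: after stopping the kubelet, pause enumerates the CRI containers and then shells out to `sudo runc list -f json`, which aborts because the runc state directory /run/runc does not exist on this crio node. Re-checking by hand would look like the hedged sketch below (commands assumed, not captured from this run):

	# Confirm the state directory runc expects is really absent inside the node:
	out/minikube-linux-arm64 -p newest-cni-352233 ssh -- ls -d /run/runc
	# Re-run the exact listing minikube attempts:
	out/minikube-linux-arm64 -p newest-cni-352233 ssh -- sudo runc list -f json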
start_stop_delete_test.go:309: out/minikube-linux-arm64 pause -p newest-cni-352233 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-352233
helpers_test.go:243: (dbg) docker inspect newest-cni-352233:

-- stdout --
	[
	    {
	        "Id": "3dedeeb54f374fa01ce72bc2b41c87fc231415be9be044642fbcda8c122ba0ff",
	        "Created": "2025-11-02T14:16:47.051560266Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 503347,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-02T14:17:29.922131765Z",
	            "FinishedAt": "2025-11-02T14:17:28.709868678Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/3dedeeb54f374fa01ce72bc2b41c87fc231415be9be044642fbcda8c122ba0ff/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3dedeeb54f374fa01ce72bc2b41c87fc231415be9be044642fbcda8c122ba0ff/hostname",
	        "HostsPath": "/var/lib/docker/containers/3dedeeb54f374fa01ce72bc2b41c87fc231415be9be044642fbcda8c122ba0ff/hosts",
	        "LogPath": "/var/lib/docker/containers/3dedeeb54f374fa01ce72bc2b41c87fc231415be9be044642fbcda8c122ba0ff/3dedeeb54f374fa01ce72bc2b41c87fc231415be9be044642fbcda8c122ba0ff-json.log",
	        "Name": "/newest-cni-352233",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "newest-cni-352233:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-352233",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3dedeeb54f374fa01ce72bc2b41c87fc231415be9be044642fbcda8c122ba0ff",
	                "LowerDir": "/var/lib/docker/overlay2/6a465501cbe8e86cbcc859ba8574cb7d3d77365eeff8339b92edb281a1936040-init/diff:/var/lib/docker/overlay2/53058afe6acd7391f639f551e07f446f6a39a991b699aea18507cb21a1652e97/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6a465501cbe8e86cbcc859ba8574cb7d3d77365eeff8339b92edb281a1936040/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6a465501cbe8e86cbcc859ba8574cb7d3d77365eeff8339b92edb281a1936040/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6a465501cbe8e86cbcc859ba8574cb7d3d77365eeff8339b92edb281a1936040/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-352233",
	                "Source": "/var/lib/docker/volumes/newest-cni-352233/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-352233",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-352233",
	                "name.minikube.sigs.k8s.io": "newest-cni-352233",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "80eb9b71012d0b0c0a13f8c2a0b8e50f6b749c949da6d788503577c54c271f30",
	            "SandboxKey": "/var/run/docker/netns/80eb9b71012d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33466"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33467"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33470"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33468"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33469"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-352233": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fa:89:bf:93:c0:28",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f61df99b10f05d6b77aff7bd79b1aba98b765bd7b0b260056e12ed71f894662d",
	                    "EndpointID": "05eaeae73997db87455a7df74df6f5a4f70fdf7ce88475cb2913cdd41c761c2c",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-352233",
	                        "3dedeeb54f37"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
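(Editor's note) For triage, the long `docker inspect` dump above reduces to two fields: the container is running and not paused, so the failure sits inside the guest rather than at the Docker layer. A hedged one-liner that extracts just those fields (illustrative, not harness output):

	docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' newest-cni-352233
	# Against the dump above this would print: running paused=false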
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-352233 -n newest-cni-352233
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-352233 -n newest-cni-352233: exit status 2 (473.081905ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-352233 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-352233 logs -n 25: (1.351458677s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p default-k8s-diff-port-786183 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-786183 │ jenkins │ v1.37.0 │ 02 Nov 25 14:14 UTC │ 02 Nov 25 14:15 UTC │
	│ addons  │ enable metrics-server -p embed-certs-955646 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-955646           │ jenkins │ v1.37.0 │ 02 Nov 25 14:15 UTC │                     │
	│ stop    │ -p embed-certs-955646 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-955646           │ jenkins │ v1.37.0 │ 02 Nov 25 14:15 UTC │ 02 Nov 25 14:15 UTC │
	│ addons  │ enable dashboard -p embed-certs-955646 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-955646           │ jenkins │ v1.37.0 │ 02 Nov 25 14:15 UTC │ 02 Nov 25 14:15 UTC │
	│ start   │ -p embed-certs-955646 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-955646           │ jenkins │ v1.37.0 │ 02 Nov 25 14:15 UTC │ 02 Nov 25 14:16 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-786183 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-786183 │ jenkins │ v1.37.0 │ 02 Nov 25 14:16 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-786183 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-786183 │ jenkins │ v1.37.0 │ 02 Nov 25 14:16 UTC │ 02 Nov 25 14:16 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-786183 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-786183 │ jenkins │ v1.37.0 │ 02 Nov 25 14:16 UTC │ 02 Nov 25 14:16 UTC │
	│ start   │ -p default-k8s-diff-port-786183 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-786183 │ jenkins │ v1.37.0 │ 02 Nov 25 14:16 UTC │ 02 Nov 25 14:17 UTC │
	│ image   │ embed-certs-955646 image list --format=json                                                                                                                                                                                                   │ embed-certs-955646           │ jenkins │ v1.37.0 │ 02 Nov 25 14:16 UTC │ 02 Nov 25 14:16 UTC │
	│ pause   │ -p embed-certs-955646 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-955646           │ jenkins │ v1.37.0 │ 02 Nov 25 14:16 UTC │                     │
	│ delete  │ -p embed-certs-955646                                                                                                                                                                                                                         │ embed-certs-955646           │ jenkins │ v1.37.0 │ 02 Nov 25 14:16 UTC │ 02 Nov 25 14:16 UTC │
	│ delete  │ -p embed-certs-955646                                                                                                                                                                                                                         │ embed-certs-955646           │ jenkins │ v1.37.0 │ 02 Nov 25 14:16 UTC │ 02 Nov 25 14:16 UTC │
	│ start   │ -p newest-cni-352233 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-352233            │ jenkins │ v1.37.0 │ 02 Nov 25 14:16 UTC │ 02 Nov 25 14:17 UTC │
	│ addons  │ enable metrics-server -p newest-cni-352233 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-352233            │ jenkins │ v1.37.0 │ 02 Nov 25 14:17 UTC │                     │
	│ stop    │ -p newest-cni-352233 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-352233            │ jenkins │ v1.37.0 │ 02 Nov 25 14:17 UTC │ 02 Nov 25 14:17 UTC │
	│ addons  │ enable dashboard -p newest-cni-352233 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-352233            │ jenkins │ v1.37.0 │ 02 Nov 25 14:17 UTC │ 02 Nov 25 14:17 UTC │
	│ start   │ -p newest-cni-352233 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-352233            │ jenkins │ v1.37.0 │ 02 Nov 25 14:17 UTC │ 02 Nov 25 14:17 UTC │
	│ image   │ default-k8s-diff-port-786183 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-786183 │ jenkins │ v1.37.0 │ 02 Nov 25 14:17 UTC │ 02 Nov 25 14:17 UTC │
	│ pause   │ -p default-k8s-diff-port-786183 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-786183 │ jenkins │ v1.37.0 │ 02 Nov 25 14:17 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-786183                                                                                                                                                                                                               │ default-k8s-diff-port-786183 │ jenkins │ v1.37.0 │ 02 Nov 25 14:17 UTC │ 02 Nov 25 14:17 UTC │
	│ delete  │ -p default-k8s-diff-port-786183                                                                                                                                                                                                               │ default-k8s-diff-port-786183 │ jenkins │ v1.37.0 │ 02 Nov 25 14:17 UTC │ 02 Nov 25 14:17 UTC │
	│ start   │ -p auto-143736 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-143736                  │ jenkins │ v1.37.0 │ 02 Nov 25 14:17 UTC │                     │
	│ image   │ newest-cni-352233 image list --format=json                                                                                                                                                                                                    │ newest-cni-352233            │ jenkins │ v1.37.0 │ 02 Nov 25 14:17 UTC │ 02 Nov 25 14:17 UTC │
	│ pause   │ -p newest-cni-352233 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-352233            │ jenkins │ v1.37.0 │ 02 Nov 25 14:17 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/02 14:17:40
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1102 14:17:40.715191  506023 out.go:360] Setting OutFile to fd 1 ...
	I1102 14:17:40.715311  506023 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 14:17:40.715317  506023 out.go:374] Setting ErrFile to fd 2...
	I1102 14:17:40.715322  506023 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 14:17:40.715662  506023 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-293314/.minikube/bin
	I1102 14:17:40.716133  506023 out.go:368] Setting JSON to false
	I1102 14:17:40.717065  506023 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10813,"bootTime":1762082248,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1102 14:17:40.717148  506023 start.go:143] virtualization:  
	I1102 14:17:40.721180  506023 out.go:179] * [auto-143736] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1102 14:17:40.725823  506023 out.go:179]   - MINIKUBE_LOCATION=21808
	I1102 14:17:40.725866  506023 notify.go:221] Checking for updates...
	I1102 14:17:40.732939  506023 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1102 14:17:40.736044  506023 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-293314/kubeconfig
	I1102 14:17:40.739304  506023 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-293314/.minikube
	I1102 14:17:40.742394  506023 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1102 14:17:40.745393  506023 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1102 14:17:40.748929  506023 config.go:182] Loaded profile config "newest-cni-352233": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 14:17:40.749036  506023 driver.go:422] Setting default libvirt URI to qemu:///system
	I1102 14:17:40.802710  506023 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1102 14:17:40.802852  506023 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 14:17:40.912373  506023 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-02 14:17:40.902468217 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1102 14:17:40.912503  506023 docker.go:319] overlay module found
	I1102 14:17:40.915737  506023 out.go:179] * Using the docker driver based on user configuration
	I1102 14:17:40.918721  506023 start.go:309] selected driver: docker
	I1102 14:17:40.918742  506023 start.go:930] validating driver "docker" against <nil>
	I1102 14:17:40.918758  506023 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1102 14:17:40.919495  506023 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 14:17:41.014705  506023 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-02 14:17:41.003601505 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1102 14:17:41.014877  506023 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1102 14:17:41.015123  506023 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1102 14:17:41.018211  506023 out.go:179] * Using Docker driver with root privileges
	I1102 14:17:41.021916  506023 cni.go:84] Creating CNI manager for ""
	I1102 14:17:41.021996  506023 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1102 14:17:41.022012  506023 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1102 14:17:41.022094  506023 start.go:353] cluster config:
	{Name:auto-143736 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-143736 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1102 14:17:41.025774  506023 out.go:179] * Starting "auto-143736" primary control-plane node in "auto-143736" cluster
	I1102 14:17:41.028568  506023 cache.go:124] Beginning downloading kic base image for docker with crio
	I1102 14:17:41.031437  506023 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1102 14:17:41.034191  506023 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1102 14:17:41.034247  506023 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21808-293314/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1102 14:17:41.034260  506023 cache.go:59] Caching tarball of preloaded images
	I1102 14:17:41.034389  506023 preload.go:233] Found /home/jenkins/minikube-integration/21808-293314/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1102 14:17:41.034405  506023 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
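	The preload step above is a pure cache check: the tarball is only downloaded when no local copy exists, and here the file is found and the download skipped. A minimal Go sketch of that check, assuming a hypothetical cache layout mirroring the path in the log (the helper names are not minikube's):

	// Hedged sketch (not minikube's actual code): skip a download when a
	// preload tarball is already cached on disk, keyed by k8s version and runtime.
	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	// preloadPath is an assumed layout mirroring the path seen in the log above.
	func preloadPath(cacheDir, k8sVersion, runtime string) string {
		name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay-arm64.tar.lz4", k8sVersion, runtime)
		return filepath.Join(cacheDir, "preloaded-tarball", name)
	}

	func haveLocalPreload(cacheDir, k8sVersion, runtime string) bool {
		info, err := os.Stat(preloadPath(cacheDir, k8sVersion, runtime))
		return err == nil && info.Size() > 0 // any readable, non-empty tarball counts
	}

	func main() {
		if haveLocalPreload("/home/jenkins/.minikube/cache", "v1.34.1", "cri-o") {
			fmt.Println("found local preload, skipping download")
		}
	}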
	I1102 14:17:41.034512  506023 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/auto-143736/config.json ...
	I1102 14:17:41.034536  506023 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/auto-143736/config.json: {Name:mkeb49beb903906fd811b6180bfceaf5d3d55462 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 14:17:41.034723  506023 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1102 14:17:41.069566  506023 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1102 14:17:41.069596  506023 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1102 14:17:41.069610  506023 cache.go:233] Successfully downloaded all kic artifacts
	I1102 14:17:41.069640  506023 start.go:360] acquireMachinesLock for auto-143736: {Name:mk583c10954ec76136345a56d1c6b54d3bd52999 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1102 14:17:41.069752  506023 start.go:364] duration metric: took 91.464µs to acquireMachinesLock for "auto-143736"
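	acquireMachinesLock above carries its retry parameters in the printed struct (Delay:500ms, Timeout:10m0s). A hedged sketch of that retry-until-deadline pattern using a plain lock file; minikube's real implementation uses a different locking library, this only shows the shape:

	package main

	import (
		"errors"
		"fmt"
		"os"
		"time"
	)

	// acquireFileLock polls for an exclusive lock file until the timeout expires.
	func acquireFileLock(path string, delay, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o644)
			if err == nil {
				return f.Close() // lock held; the caller removes path to release it
			}
			if time.Now().After(deadline) {
				return errors.New("timed out waiting for machines lock")
			}
			time.Sleep(delay)
		}
	}

	func main() {
		if err := acquireFileLock("/tmp/minikube-machines.lock", 500*time.Millisecond, 10*time.Minute); err != nil {
			fmt.Println(err)
		}
	}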
	I1102 14:17:41.069784  506023 start.go:93] Provisioning new machine with config: &{Name:auto-143736 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-143736 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1102 14:17:41.069860  506023 start.go:125] createHost starting for "" (driver="docker")
	I1102 14:17:39.590173  503184 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1102 14:17:39.590251  503184 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1102 14:17:39.701034  503184 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1102 14:17:39.701110  503184 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1102 14:17:39.778901  503184 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1102 14:17:39.778981  503184 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1102 14:17:39.810885  503184 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1102 14:17:39.810962  503184 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1102 14:17:39.876962  503184 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1102 14:17:39.877037  503184 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1102 14:17:39.947581  503184 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1102 14:17:39.947662  503184 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1102 14:17:40.006285  503184 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
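	The dashboard enable path above stages each manifest with scp and then issues a single kubectl apply covering all ten files, with the namespace manifest listed first so the namespaced objects that follow can be created. A sketch of assembling that one apply call (paths are illustrative):

	// Hedged sketch of the addon-apply step above: one kubectl invocation,
	// many -f flags, manifests already staged under /etc/kubernetes/addons.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		manifests := []string{
			"dashboard-ns.yaml", "dashboard-clusterrole.yaml",
			"dashboard-clusterrolebinding.yaml", "dashboard-configmap.yaml",
			"dashboard-dp.yaml", "dashboard-role.yaml",
			"dashboard-rolebinding.yaml", "dashboard-sa.yaml",
			"dashboard-secret.yaml", "dashboard-svc.yaml",
		}
		args := []string{"apply"}
		for _, m := range manifests {
			args = append(args, "-f", "/etc/kubernetes/addons/"+m)
		}
		// A single apply keeps the operation close to atomic; files are
		// processed in order, which is why the namespace comes first.
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		fmt.Println(strings.TrimSpace(string(out)), err)
	}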
	I1102 14:17:41.073214  506023 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1102 14:17:41.073451  506023 start.go:159] libmachine.API.Create for "auto-143736" (driver="docker")
	I1102 14:17:41.073498  506023 client.go:173] LocalClient.Create starting
	I1102 14:17:41.073591  506023 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem
	I1102 14:17:41.073631  506023 main.go:143] libmachine: Decoding PEM data...
	I1102 14:17:41.073649  506023 main.go:143] libmachine: Parsing certificate...
	I1102 14:17:41.073705  506023 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21808-293314/.minikube/certs/cert.pem
	I1102 14:17:41.073732  506023 main.go:143] libmachine: Decoding PEM data...
	I1102 14:17:41.073747  506023 main.go:143] libmachine: Parsing certificate...
	I1102 14:17:41.074112  506023 cli_runner.go:164] Run: docker network inspect auto-143736 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1102 14:17:41.106823  506023 cli_runner.go:211] docker network inspect auto-143736 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1102 14:17:41.106901  506023 network_create.go:284] running [docker network inspect auto-143736] to gather additional debugging logs...
	I1102 14:17:41.106927  506023 cli_runner.go:164] Run: docker network inspect auto-143736
	W1102 14:17:41.144892  506023 cli_runner.go:211] docker network inspect auto-143736 returned with exit code 1
	I1102 14:17:41.144922  506023 network_create.go:287] error running [docker network inspect auto-143736]: docker network inspect auto-143736: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-143736 not found
	I1102 14:17:41.144953  506023 network_create.go:289] output of [docker network inspect auto-143736]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-143736 not found
	
	** /stderr **
	I1102 14:17:41.145056  506023 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1102 14:17:41.181265  506023 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-ddf319108ac9 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:66:f7:2d:49:67:ff} reservation:<nil>}
	I1102 14:17:41.181640  506023 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-30b945568040 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:b6:b2:b0:cb:49:d7} reservation:<nil>}
	I1102 14:17:41.181893  506023 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-d23a3a2e266d IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:fa:42:95:8e:ae:52} reservation:<nil>}
	I1102 14:17:41.182291  506023 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400197f3b0}
	I1102 14:17:41.182308  506023 network_create.go:124] attempt to create docker network auto-143736 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1102 14:17:41.182384  506023 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-143736 auto-143736
	I1102 14:17:41.289462  506023 network_create.go:108] docker network auto-143736 192.168.76.0/24 created
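	Subnet selection above walks fixed /24 candidates (192.168.49, .58, .67, ...) and takes the first one no existing bridge owns. A toy Go version of that scan; the taken set stands in for actually inspecting host interfaces:

	package main

	import "fmt"

	// firstFreeSubnet mimics the scan in the log: step the third octet by 9
	// (49, 58, 67, 76, ...) and return the first candidate not already taken.
	func firstFreeSubnet(taken map[string]bool) string {
		for third := 49; third <= 255; third += 9 {
			cidr := fmt.Sprintf("192.168.%d.0/24", third)
			if !taken[cidr] {
				return cidr
			}
		}
		return ""
	}

	func main() {
		taken := map[string]bool{
			"192.168.49.0/24": true,
			"192.168.58.0/24": true,
			"192.168.67.0/24": true,
		}
		fmt.Println(firstFreeSubnet(taken)) // prints 192.168.76.0/24, as in the log
	}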
	I1102 14:17:41.289492  506023 kic.go:121] calculated static IP "192.168.76.2" for the "auto-143736" container
	I1102 14:17:41.289573  506023 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1102 14:17:41.313312  506023 cli_runner.go:164] Run: docker volume create auto-143736 --label name.minikube.sigs.k8s.io=auto-143736 --label created_by.minikube.sigs.k8s.io=true
	I1102 14:17:41.340414  506023 oci.go:103] Successfully created a docker volume auto-143736
	I1102 14:17:41.340504  506023 cli_runner.go:164] Run: docker run --rm --name auto-143736-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-143736 --entrypoint /usr/bin/test -v auto-143736:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1102 14:17:42.042349  506023 oci.go:107] Successfully prepared a docker volume auto-143736
	I1102 14:17:42.042392  506023 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1102 14:17:42.042412  506023 kic.go:194] Starting extracting preloaded images to volume ...
	I1102 14:17:42.042475  506023 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21808-293314/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-143736:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1102 14:17:47.347229  503184 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.094207751s)
	I1102 14:17:47.347293  503184 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (8.068380604s)
	I1102 14:17:47.347331  503184 api_server.go:52] waiting for apiserver process to appear ...
	I1102 14:17:47.347391  503184 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1102 14:17:47.347475  503184 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.997728527s)
	I1102 14:17:47.572189  503184 api_server.go:72] duration metric: took 8.878504905s to wait for apiserver process to appear ...
	I1102 14:17:47.572213  503184 api_server.go:88] waiting for apiserver healthz status ...
	I1102 14:17:47.572232  503184 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1102 14:17:47.573114  503184 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (7.566724314s)
	I1102 14:17:47.576225  503184 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-352233 addons enable metrics-server
	
	I1102 14:17:47.580809  503184 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1102 14:17:47.583890  503184 addons.go:515] duration metric: took 8.889838967s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1102 14:17:47.586112  503184 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1102 14:17:47.586135  503184 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1102 14:17:48.072353  503184 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1102 14:17:48.102885  503184 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1102 14:17:48.106404  503184 api_server.go:141] control plane version: v1.34.1
	I1102 14:17:48.106436  503184 api_server.go:131] duration metric: took 534.215796ms to wait for apiserver health ...
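	The healthz wait above tolerates transient 500s (the rbac/bootstrap-roles poststarthook routinely fails for a beat during startup, as the dump shows) and keeps polling until a plain 200/ok. A hedged sketch of that loop; TLS verification is skipped only because the sketch has no access to the cluster CA:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls /healthz until it returns 200 or the deadline passes.
	func waitForHealthz(url string, interval, timeout time.Duration) error {
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // apiserver reported "ok"
				}
				// A 500 during startup is expected: keep polling.
			}
			time.Sleep(interval)
		}
		return fmt.Errorf("apiserver never became healthy at %s", url)
	}

	func main() {
		fmt.Println(waitForHealthz("https://192.168.85.2:8443/healthz", 500*time.Millisecond, 2*time.Minute))
	}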
	I1102 14:17:48.106446  503184 system_pods.go:43] waiting for kube-system pods to appear ...
	I1102 14:17:48.122410  503184 system_pods.go:59] 8 kube-system pods found
	I1102 14:17:48.122449  503184 system_pods.go:61] "coredns-66bc5c9577-g4hfq" [249838ab-df11-4a0c-a2ef-a1b05a0e2660] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1102 14:17:48.122466  503184 system_pods.go:61] "etcd-newest-cni-352233" [d797a2fa-d3f0-4180-88c1-5417b262b322] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1102 14:17:48.122472  503184 system_pods.go:61] "kindnet-g4hrl" [380d63bc-7a9c-4abb-9747-04c37075e8b0] Running
	I1102 14:17:48.122481  503184 system_pods.go:61] "kube-apiserver-newest-cni-352233" [ea69c744-43cb-4464-9da8-0768bd8820b1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1102 14:17:48.122487  503184 system_pods.go:61] "kube-controller-manager-newest-cni-352233" [5e48eb79-6be2-4f01-99bc-be7c2f15d45a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1102 14:17:48.122492  503184 system_pods.go:61] "kube-proxy-vbc2x" [2cec75f2-36fd-49c7-8644-941b68023b1b] Running
	I1102 14:17:48.122499  503184 system_pods.go:61] "kube-scheduler-newest-cni-352233" [33bacc9d-8a05-403d-865f-ba237a6aa780] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1102 14:17:48.122505  503184 system_pods.go:61] "storage-provisioner" [c94e5e11-33b4-4d32-9bbc-fa8e510911a5] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1102 14:17:48.122511  503184 system_pods.go:74] duration metric: took 16.05902ms to wait for pod list to return data ...
	I1102 14:17:48.122521  503184 default_sa.go:34] waiting for default service account to be created ...
	I1102 14:17:48.130337  503184 default_sa.go:45] found service account: "default"
	I1102 14:17:48.130363  503184 default_sa.go:55] duration metric: took 7.836417ms for default service account to be created ...
	I1102 14:17:48.130378  503184 kubeadm.go:587] duration metric: took 9.436696096s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1102 14:17:48.130394  503184 node_conditions.go:102] verifying NodePressure condition ...
	I1102 14:17:48.138807  503184 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1102 14:17:48.138889  503184 node_conditions.go:123] node cpu capacity is 2
	I1102 14:17:48.138919  503184 node_conditions.go:105] duration metric: took 8.519279ms to run NodePressure ...
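	The pod and service-account checks above map directly onto two client-go calls. A minimal sketch, assuming a reachable kubeconfig at an illustrative path:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx := context.Background()

		// "waiting for kube-system pods to appear"
		pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("%d kube-system pods found\n", len(pods.Items))

		// "waiting for default service account to be created"
		if _, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{}); err == nil {
			fmt.Println(`found service account: "default"`)
		}
	}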
	I1102 14:17:48.138960  503184 start.go:242] waiting for startup goroutines ...
	I1102 14:17:48.138986  503184 start.go:247] waiting for cluster config update ...
	I1102 14:17:48.139012  503184 start.go:256] writing updated cluster config ...
	I1102 14:17:48.139400  503184 ssh_runner.go:195] Run: rm -f paused
	I1102 14:17:48.332906  503184 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1102 14:17:48.336765  503184 out.go:179] * Done! kubectl is now configured to use "newest-cni-352233" cluster and "default" namespace by default
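	The closing skew notice is informational rather than fatal because kubectl supports one minor version of skew against the apiserver; the number it reports is just a minor-version difference:

	package main

	import (
		"fmt"
		"strconv"
		"strings"
	)

	// minor extracts the minor component from a "major.minor.patch" version.
	func minor(v string) int {
		parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
		m, _ := strconv.Atoi(parts[1])
		return m
	}

	func main() {
		client, server := "1.33.2", "1.34.1"
		skew := minor(server) - minor(client)
		if skew < 0 {
			skew = -skew
		}
		fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", client, server, skew)
	}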
	I1102 14:17:46.972207  506023 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21808-293314/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-143736:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.929697046s)
	I1102 14:17:46.972234  506023 kic.go:203] duration metric: took 4.929819402s to extract preloaded images to volume ...
	W1102 14:17:46.972369  506023 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1102 14:17:46.972468  506023 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1102 14:17:47.097688  506023 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-143736 --name auto-143736 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-143736 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-143736 --network auto-143736 --ip 192.168.76.2 --volume auto-143736:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1102 14:17:47.508400  506023 cli_runner.go:164] Run: docker container inspect auto-143736 --format={{.State.Running}}
	I1102 14:17:47.530580  506023 cli_runner.go:164] Run: docker container inspect auto-143736 --format={{.State.Status}}
	I1102 14:17:47.554935  506023 cli_runner.go:164] Run: docker exec auto-143736 stat /var/lib/dpkg/alternatives/iptables
	I1102 14:17:47.627058  506023 oci.go:144] the created container "auto-143736" has a running status.
	I1102 14:17:47.627092  506023 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21808-293314/.minikube/machines/auto-143736/id_rsa...
	I1102 14:17:48.770100  506023 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21808-293314/.minikube/machines/auto-143736/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1102 14:17:48.796053  506023 cli_runner.go:164] Run: docker container inspect auto-143736 --format={{.State.Status}}
	I1102 14:17:48.826230  506023 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1102 14:17:48.826248  506023 kic_runner.go:114] Args: [docker exec --privileged auto-143736 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1102 14:17:48.890520  506023 cli_runner.go:164] Run: docker container inspect auto-143736 --format={{.State.Status}}
	I1102 14:17:48.914597  506023 machine.go:94] provisionDockerMachine start ...
	I1102 14:17:48.914710  506023 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-143736
	I1102 14:17:48.949155  506023 main.go:143] libmachine: Using SSH client type: native
	I1102 14:17:48.949482  506023 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33471 <nil> <nil>}
	I1102 14:17:48.949491  506023 main.go:143] libmachine: About to run SSH command:
	hostname
	I1102 14:17:49.186277  506023 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-143736
	
	I1102 14:17:49.186319  506023 ubuntu.go:182] provisioning hostname "auto-143736"
	I1102 14:17:49.186386  506023 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-143736
	I1102 14:17:49.217226  506023 main.go:143] libmachine: Using SSH client type: native
	I1102 14:17:49.217543  506023 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33471 <nil> <nil>}
	I1102 14:17:49.217647  506023 main.go:143] libmachine: About to run SSH command:
	sudo hostname auto-143736 && echo "auto-143736" | sudo tee /etc/hostname
	I1102 14:17:49.489508  506023 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-143736
	
	I1102 14:17:49.489584  506023 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-143736
	I1102 14:17:49.512627  506023 main.go:143] libmachine: Using SSH client type: native
	I1102 14:17:49.512932  506023 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33471 <nil> <nil>}
	I1102 14:17:49.512949  506023 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-143736' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-143736/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-143736' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1102 14:17:49.724092  506023 main.go:143] libmachine: SSH cmd err, output: <nil>: 
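	The SSH command above is an idempotent /etc/hosts edit: do nothing if the hostname is already present, otherwise rewrite an existing 127.0.1.1 entry or append one. The same logic rendered in Go, pointed at a scratch file so it is safe to run:

	package main

	import (
		"fmt"
		"os"
		"regexp"
		"strings"
	)

	// ensureHostsEntry mirrors the shell script above: skip when the name is
	// already mapped, else replace the 127.0.1.1 line or append a new one.
	func ensureHostsEntry(path, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		text := string(data)
		if strings.Contains(text, " "+name) || strings.Contains(text, "\t"+name) {
			return nil // already present, nothing to do
		}
		re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
		if re.MatchString(text) {
			text = re.ReplaceAllString(text, "127.0.1.1 "+name)
		} else {
			text += fmt.Sprintf("127.0.1.1 %s\n", name)
		}
		return os.WriteFile(path, []byte(text), 0o644)
	}

	func main() {
		if err := ensureHostsEntry("/tmp/hosts-copy", "auto-143736"); err != nil {
			fmt.Println(err)
		}
	}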
	I1102 14:17:49.724120  506023 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21808-293314/.minikube CaCertPath:/home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21808-293314/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21808-293314/.minikube}
	I1102 14:17:49.724146  506023 ubuntu.go:190] setting up certificates
	I1102 14:17:49.724157  506023 provision.go:84] configureAuth start
	I1102 14:17:49.724215  506023 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-143736
	I1102 14:17:49.758899  506023 provision.go:143] copyHostCerts
	I1102 14:17:49.758970  506023 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-293314/.minikube/ca.pem, removing ...
	I1102 14:17:49.758984  506023 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-293314/.minikube/ca.pem
	I1102 14:17:49.759689  506023 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21808-293314/.minikube/ca.pem (1082 bytes)
	I1102 14:17:49.759807  506023 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-293314/.minikube/cert.pem, removing ...
	I1102 14:17:49.761102  506023 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-293314/.minikube/cert.pem
	I1102 14:17:49.761189  506023 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21808-293314/.minikube/cert.pem (1123 bytes)
	I1102 14:17:49.761292  506023 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-293314/.minikube/key.pem, removing ...
	I1102 14:17:49.761304  506023 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-293314/.minikube/key.pem
	I1102 14:17:49.761333  506023 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21808-293314/.minikube/key.pem (1675 bytes)
	I1102 14:17:49.761399  506023 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21808-293314/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca-key.pem org=jenkins.auto-143736 san=[127.0.0.1 192.168.76.2 auto-143736 localhost minikube]
	I1102 14:17:50.391506  506023 provision.go:177] copyRemoteCerts
	I1102 14:17:50.391619  506023 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1102 14:17:50.391679  506023 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-143736
	I1102 14:17:50.409443  506023 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33471 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/auto-143736/id_rsa Username:docker}
	I1102 14:17:50.524355  506023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1102 14:17:50.557330  506023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1102 14:17:50.595200  506023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1102 14:17:50.629445  506023 provision.go:87] duration metric: took 905.263416ms to configureAuth
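	copyRemoteCerts above pushes the CA plus the freshly generated server cert and key into /etc/docker on the machine over the forwarded SSH port. A rough sketch with scp; the real flow goes through minikube's ssh_runner with sudo, and the port, key path, and user here are taken from this log rather than guaranteed:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		files := map[string]string{
			"ca.pem":         "/etc/docker/ca.pem",
			"server.pem":     "/etc/docker/server.pem",
			"server-key.pem": "/etc/docker/server-key.pem",
		}
		for local, remote := range files {
			// NOTE: writing under /etc/docker needs root; minikube stages the
			// copy with sudo over ssh, which this sketch omits.
			cmd := exec.Command("scp",
				"-i", "/home/jenkins/.minikube/machines/auto-143736/id_rsa",
				"-P", "33471", // host port docker published for the container's sshd
				local, "docker@127.0.0.1:"+remote)
			if out, err := cmd.CombinedOutput(); err != nil {
				fmt.Printf("copy %s failed: %v: %s\n", local, err, out)
			}
		}
	}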
	I1102 14:17:50.629472  506023 ubuntu.go:206] setting minikube options for container-runtime
	I1102 14:17:50.629668  506023 config.go:182] Loaded profile config "auto-143736": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 14:17:50.629784  506023 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-143736
	I1102 14:17:50.661847  506023 main.go:143] libmachine: Using SSH client type: native
	I1102 14:17:50.662180  506023 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33471 <nil> <nil>}
	I1102 14:17:50.662195  506023 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	
	
	==> CRI-O <==
	Nov 02 14:17:46 newest-cni-352233 crio[643]: time="2025-11-02T14:17:46.429486978Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 14:17:46 newest-cni-352233 crio[643]: time="2025-11-02T14:17:46.474892502Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=75228de2-9e36-460a-9f61-8a6f7ae31f64 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 02 14:17:46 newest-cni-352233 crio[643]: time="2025-11-02T14:17:46.484717625Z" level=info msg="Running pod sandbox: kube-system/kindnet-g4hrl/POD" id=b33b4c09-6e32-4a49-8041-2b7955c45584 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 02 14:17:46 newest-cni-352233 crio[643]: time="2025-11-02T14:17:46.484795041Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 14:17:46 newest-cni-352233 crio[643]: time="2025-11-02T14:17:46.497188281Z" level=info msg="Ran pod sandbox e1fa8f5fb4b1cea176ee321c281d52ebda8ad208bb3aa0d83fd08aadfcbaa32b with infra container: kube-system/kube-proxy-vbc2x/POD" id=75228de2-9e36-460a-9f61-8a6f7ae31f64 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 02 14:17:46 newest-cni-352233 crio[643]: time="2025-11-02T14:17:46.498497522Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=58c4fc7a-ea5a-423b-8e67-07b424c446e2 name=/runtime.v1.ImageService/ImageStatus
	Nov 02 14:17:46 newest-cni-352233 crio[643]: time="2025-11-02T14:17:46.520359245Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=a2029648-cf2c-4ba7-a5cd-e708850537d0 name=/runtime.v1.ImageService/ImageStatus
	Nov 02 14:17:46 newest-cni-352233 crio[643]: time="2025-11-02T14:17:46.531099406Z" level=info msg="Creating container: kube-system/kube-proxy-vbc2x/kube-proxy" id=b0a570c5-7b9f-40fc-abf6-5db04548a961 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 14:17:46 newest-cni-352233 crio[643]: time="2025-11-02T14:17:46.531224929Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 14:17:46 newest-cni-352233 crio[643]: time="2025-11-02T14:17:46.532924951Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=b33b4c09-6e32-4a49-8041-2b7955c45584 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 02 14:17:46 newest-cni-352233 crio[643]: time="2025-11-02T14:17:46.567074224Z" level=info msg="Ran pod sandbox 3f7e331cfbe6b091e6692d5baee4cc035411be27915fee94754d121b45f88a60 with infra container: kube-system/kindnet-g4hrl/POD" id=b33b4c09-6e32-4a49-8041-2b7955c45584 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 02 14:17:46 newest-cni-352233 crio[643]: time="2025-11-02T14:17:46.569711577Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=e73c8038-0025-40f1-96e2-22e4feab9442 name=/runtime.v1.ImageService/ImageStatus
	Nov 02 14:17:46 newest-cni-352233 crio[643]: time="2025-11-02T14:17:46.579258921Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=95cc3912-9922-4b84-9f44-6c1cebd4b117 name=/runtime.v1.ImageService/ImageStatus
	Nov 02 14:17:46 newest-cni-352233 crio[643]: time="2025-11-02T14:17:46.581332682Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 14:17:46 newest-cni-352233 crio[643]: time="2025-11-02T14:17:46.584530542Z" level=info msg="Creating container: kube-system/kindnet-g4hrl/kindnet-cni" id=95acf843-fc55-4078-ba77-8b4affe4c510 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 14:17:46 newest-cni-352233 crio[643]: time="2025-11-02T14:17:46.589048539Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 14:17:46 newest-cni-352233 crio[643]: time="2025-11-02T14:17:46.592619015Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 14:17:46 newest-cni-352233 crio[643]: time="2025-11-02T14:17:46.638723648Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 14:17:46 newest-cni-352233 crio[643]: time="2025-11-02T14:17:46.639213778Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 14:17:46 newest-cni-352233 crio[643]: time="2025-11-02T14:17:46.659857667Z" level=info msg="Created container 0758ad546409c66b98af0fa14dc53e3893a7ce5029245e7776e9ae8c51ed9fde: kube-system/kube-proxy-vbc2x/kube-proxy" id=b0a570c5-7b9f-40fc-abf6-5db04548a961 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 14:17:46 newest-cni-352233 crio[643]: time="2025-11-02T14:17:46.660661959Z" level=info msg="Starting container: 0758ad546409c66b98af0fa14dc53e3893a7ce5029245e7776e9ae8c51ed9fde" id=4ab161b7-8b57-4080-8b9e-a19a0767cdb7 name=/runtime.v1.RuntimeService/StartContainer
	Nov 02 14:17:46 newest-cni-352233 crio[643]: time="2025-11-02T14:17:46.671540139Z" level=info msg="Started container" PID=1090 containerID=0758ad546409c66b98af0fa14dc53e3893a7ce5029245e7776e9ae8c51ed9fde description=kube-system/kube-proxy-vbc2x/kube-proxy id=4ab161b7-8b57-4080-8b9e-a19a0767cdb7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e1fa8f5fb4b1cea176ee321c281d52ebda8ad208bb3aa0d83fd08aadfcbaa32b
	Nov 02 14:17:46 newest-cni-352233 crio[643]: time="2025-11-02T14:17:46.716070438Z" level=info msg="Created container b09745ad56a666ec68ba8c088ae630cdadc73e5c7ee08f53db30186570e7ab63: kube-system/kindnet-g4hrl/kindnet-cni" id=95acf843-fc55-4078-ba77-8b4affe4c510 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 14:17:46 newest-cni-352233 crio[643]: time="2025-11-02T14:17:46.71686787Z" level=info msg="Starting container: b09745ad56a666ec68ba8c088ae630cdadc73e5c7ee08f53db30186570e7ab63" id=cd09921d-85b2-4d34-b5c8-9322a01e995a name=/runtime.v1.RuntimeService/StartContainer
	Nov 02 14:17:46 newest-cni-352233 crio[643]: time="2025-11-02T14:17:46.718756891Z" level=info msg="Started container" PID=1095 containerID=b09745ad56a666ec68ba8c088ae630cdadc73e5c7ee08f53db30186570e7ab63 description=kube-system/kindnet-g4hrl/kindnet-cni id=cd09921d-85b2-4d34-b5c8-9322a01e995a name=/runtime.v1.RuntimeService/StartContainer sandboxID=3f7e331cfbe6b091e6692d5baee4cc035411be27915fee94754d121b45f88a60
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	b09745ad56a66       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   6 seconds ago       Running             kindnet-cni               1                   3f7e331cfbe6b       kindnet-g4hrl                               kube-system
	0758ad546409c       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   6 seconds ago       Running             kube-proxy                1                   e1fa8f5fb4b1c       kube-proxy-vbc2x                            kube-system
	5f1604a94619f       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   14 seconds ago      Running             kube-controller-manager   1                   b23a1e074749f       kube-controller-manager-newest-cni-352233   kube-system
	e7f0e8be97d45       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   14 seconds ago      Running             kube-scheduler            1                   68492703c8525       kube-scheduler-newest-cni-352233            kube-system
	0604c0db8643e       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   14 seconds ago      Running             kube-apiserver            1                   6ad5a2daa9c70       kube-apiserver-newest-cni-352233            kube-system
	63723b5d8f835       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   14 seconds ago      Running             etcd                      1                   a3196d8dfa138       etcd-newest-cni-352233                      kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-352233
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-352233
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=170a9221ec214abbddb4c7cdac340516a92b239a
	                    minikube.k8s.io/name=newest-cni-352233
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_02T14_17_19_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 02 Nov 2025 14:17:15 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-352233
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 02 Nov 2025 14:17:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 02 Nov 2025 14:17:46 +0000   Sun, 02 Nov 2025 14:17:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 02 Nov 2025 14:17:46 +0000   Sun, 02 Nov 2025 14:17:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 02 Nov 2025 14:17:46 +0000   Sun, 02 Nov 2025 14:17:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sun, 02 Nov 2025 14:17:46 +0000   Sun, 02 Nov 2025 14:17:11 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-352233
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                73c25d57-fdc2-428f-850a-0ced46336189
	  Boot ID:                    4c303cf4-c0b6-4c0d-aa1a-d7509feb15e7
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-352233                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         35s
	  kube-system                 kindnet-g4hrl                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      30s
	  kube-system                 kube-apiserver-newest-cni-352233             250m (12%)    0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-controller-manager-newest-cni-352233    200m (10%)    0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-proxy-vbc2x                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-scheduler-newest-cni-352233             100m (5%)     0 (0%)      0 (0%)           0 (0%)         35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 28s                kube-proxy       
	  Normal   Starting                 5s                 kube-proxy       
	  Normal   NodeHasSufficientMemory  42s (x8 over 43s)  kubelet          Node newest-cni-352233 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    42s (x8 over 43s)  kubelet          Node newest-cni-352233 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     42s (x8 over 43s)  kubelet          Node newest-cni-352233 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    35s                kubelet          Node newest-cni-352233 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 35s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  35s                kubelet          Node newest-cni-352233 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     35s                kubelet          Node newest-cni-352233 status is now: NodeHasSufficientPID
	  Normal   Starting                 35s                kubelet          Starting kubelet.
	  Normal   RegisteredNode           31s                node-controller  Node newest-cni-352233 event: Registered Node newest-cni-352233 in Controller
	  Normal   Starting                 16s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 16s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  15s (x8 over 16s)  kubelet          Node newest-cni-352233 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    15s (x8 over 16s)  kubelet          Node newest-cni-352233 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     15s (x8 over 16s)  kubelet          Node newest-cni-352233 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           3s                 node-controller  Node newest-cni-352233 event: Registered Node newest-cni-352233 in Controller
	
	
	==> dmesg <==
	[Nov 2 13:57] overlayfs: idmapped layers are currently not supported
	[ +24.836033] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:58] overlayfs: idmapped layers are currently not supported
	[ +23.362553] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:59] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:01] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:02] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:03] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:04] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:06] overlayfs: idmapped layers are currently not supported
	[ +50.469589] kauditd_printk_skb: 8 callbacks suppressed
	[Nov 2 14:07] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:08] overlayfs: idmapped layers are currently not supported
	[ +11.089512] overlayfs: idmapped layers are currently not supported
	[ +33.821233] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:09] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:10] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:11] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:13] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:14] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:15] overlayfs: idmapped layers are currently not supported
	[ +29.099512] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:16] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:17] overlayfs: idmapped layers are currently not supported
	[ +27.045568] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [63723b5d8f835727bc7bf49c43e0299d139270a34c176299be3ee3d496669f3e] <==
	{"level":"warn","ts":"2025-11-02T14:17:42.697900Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:17:42.709142Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:17:42.742740Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:17:42.789769Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:17:42.824026Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:17:42.860404Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:17:42.886988Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:17:42.919727Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:17:42.971655Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:17:43.012088Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:17:43.055525Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:17:43.086921Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:17:43.137744Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52056","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:17:43.164702Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52072","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:17:43.218112Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:17:43.243081Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:17:43.311674Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:17:43.348461Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52158","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:17:43.386435Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:17:43.459850Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:17:43.491110Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:17:43.504164Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:17:43.558447Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:17:43.769072Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52260","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-02T14:17:46.569547Z","caller":"traceutil/trace.go:172","msg":"trace[1025710601] transaction","detail":"{read_only:false; number_of_response:0; response_revision:437; }","duration":"108.253401ms","start":"2025-11-02T14:17:46.461278Z","end":"2025-11-02T14:17:46.569531Z","steps":["trace[1025710601] 'process raft request'  (duration: 76.048338ms)","trace[1025710601] 'compare'  (duration: 32.015096ms)"],"step_count":2}
	
	
	==> kernel <==
	 14:17:53 up  3:00,  0 user,  load average: 4.39, 3.72, 3.11
	Linux newest-cni-352233 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b09745ad56a666ec68ba8c088ae630cdadc73e5c7ee08f53db30186570e7ab63] <==
	I1102 14:17:46.833881       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1102 14:17:46.834261       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1102 14:17:46.834428       1 main.go:148] setting mtu 1500 for CNI 
	I1102 14:17:46.834441       1 main.go:178] kindnetd IP family: "ipv4"
	I1102 14:17:46.834455       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-02T14:17:47Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1102 14:17:47.025068       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1102 14:17:47.027252       1 controller.go:381] "Waiting for informer caches to sync"
	I1102 14:17:47.027343       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1102 14:17:47.028246       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [0604c0db8643e3b16f14589b7b9f60583c1a4a8b0e341130d1a39881dde8b5f7] <==
	I1102 14:17:45.918430       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1102 14:17:45.968406       1 aggregator.go:171] initial CRD sync complete...
	I1102 14:17:45.968431       1 autoregister_controller.go:144] Starting autoregister controller
	I1102 14:17:45.968439       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1102 14:17:46.050721       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1102 14:17:46.053008       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1102 14:17:46.053027       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1102 14:17:46.053147       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1102 14:17:46.054572       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1102 14:17:46.061145       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1102 14:17:46.081466       1 cache.go:39] Caches are synced for autoregister controller
	I1102 14:17:46.122096       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1102 14:17:46.184799       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1102 14:17:46.245805       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1102 14:17:46.286355       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1102 14:17:46.911233       1 controller.go:667] quota admission added evaluator for: namespaces
	I1102 14:17:47.074285       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1102 14:17:47.223415       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1102 14:17:47.259738       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1102 14:17:47.496418       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.110.154.77"}
	I1102 14:17:47.559201       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.106.249.17"}
	I1102 14:17:50.490443       1 controller.go:667] quota admission added evaluator for: endpoints
	I1102 14:17:50.665854       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1102 14:17:50.728243       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1102 14:17:50.774262       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [5f1604a94619feed1ff5c1e5544c063d94fd2c34695705ec31d387e5efb1050c] <==
	I1102 14:17:50.433688       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"newest-cni-352233\" does not exist"
	I1102 14:17:50.442086       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1102 14:17:50.434024       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1102 14:17:50.442074       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1102 14:17:50.442079       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1102 14:17:50.452863       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1102 14:17:50.454524       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1102 14:17:50.454565       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1102 14:17:50.454581       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1102 14:17:50.454586       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1102 14:17:50.454591       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1102 14:17:50.462726       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1102 14:17:50.462805       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1102 14:17:50.474792       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1102 14:17:50.475455       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1102 14:17:50.475545       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1102 14:17:50.475561       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1102 14:17:50.475582       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1102 14:17:50.482931       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1102 14:17:50.501645       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1102 14:17:50.518955       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1102 14:17:50.530873       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1102 14:17:50.530981       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1102 14:17:50.531933       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1102 14:17:50.546605       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	
	
	==> kube-proxy [0758ad546409c66b98af0fa14dc53e3893a7ce5029245e7776e9ae8c51ed9fde] <==
	I1102 14:17:46.839524       1 server_linux.go:53] "Using iptables proxy"
	I1102 14:17:47.104579       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1102 14:17:47.305238       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1102 14:17:47.305276       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1102 14:17:47.305355       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1102 14:17:47.594216       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1102 14:17:47.594851       1 server_linux.go:132] "Using iptables Proxier"
	I1102 14:17:47.736871       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1102 14:17:47.742027       1 server.go:527] "Version info" version="v1.34.1"
	I1102 14:17:47.765830       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1102 14:17:47.767401       1 config.go:200] "Starting service config controller"
	I1102 14:17:47.791634       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1102 14:17:47.791655       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1102 14:17:47.768087       1 config.go:309] "Starting node config controller"
	I1102 14:17:47.791695       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1102 14:17:47.791701       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1102 14:17:47.767696       1 config.go:106] "Starting endpoint slice config controller"
	I1102 14:17:47.791708       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1102 14:17:47.791714       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1102 14:17:47.767710       1 config.go:403] "Starting serviceCIDR config controller"
	I1102 14:17:47.791768       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1102 14:17:47.791772       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [e7f0e8be97d4590df69fdb907f0a84b9f17b797f501da038a7715f96824ef2cb] <==
	I1102 14:17:41.277909       1 serving.go:386] Generated self-signed cert in-memory
	W1102 14:17:45.442113       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1102 14:17:45.442152       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1102 14:17:45.442162       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1102 14:17:45.442170       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1102 14:17:45.876343       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1102 14:17:45.878128       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1102 14:17:45.897059       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1102 14:17:45.897184       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1102 14:17:45.897200       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1102 14:17:45.897220       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1102 14:17:45.998687       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1102 14:17:46.039980       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1102 14:17:46.040094       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1102 14:17:46.040174       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1102 14:17:46.040244       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1102 14:17:46.076167       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	
	
	==> kubelet <==
	Nov 02 14:17:45 newest-cni-352233 kubelet[757]: I1102 14:17:45.502894     757 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-352233"
	Nov 02 14:17:45 newest-cni-352233 kubelet[757]: I1102 14:17:45.893851     757 apiserver.go:52] "Watching apiserver"
	Nov 02 14:17:46 newest-cni-352233 kubelet[757]: I1102 14:17:46.110311     757 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 02 14:17:46 newest-cni-352233 kubelet[757]: I1102 14:17:46.156224     757 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-352233"
	Nov 02 14:17:46 newest-cni-352233 kubelet[757]: I1102 14:17:46.156336     757 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-352233"
	Nov 02 14:17:46 newest-cni-352233 kubelet[757]: I1102 14:17:46.156379     757 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 02 14:17:46 newest-cni-352233 kubelet[757]: I1102 14:17:46.159004     757 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 02 14:17:46 newest-cni-352233 kubelet[757]: I1102 14:17:46.178856     757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/380d63bc-7a9c-4abb-9747-04c37075e8b0-lib-modules\") pod \"kindnet-g4hrl\" (UID: \"380d63bc-7a9c-4abb-9747-04c37075e8b0\") " pod="kube-system/kindnet-g4hrl"
	Nov 02 14:17:46 newest-cni-352233 kubelet[757]: I1102 14:17:46.178932     757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2cec75f2-36fd-49c7-8644-941b68023b1b-xtables-lock\") pod \"kube-proxy-vbc2x\" (UID: \"2cec75f2-36fd-49c7-8644-941b68023b1b\") " pod="kube-system/kube-proxy-vbc2x"
	Nov 02 14:17:46 newest-cni-352233 kubelet[757]: I1102 14:17:46.178970     757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/380d63bc-7a9c-4abb-9747-04c37075e8b0-cni-cfg\") pod \"kindnet-g4hrl\" (UID: \"380d63bc-7a9c-4abb-9747-04c37075e8b0\") " pod="kube-system/kindnet-g4hrl"
	Nov 02 14:17:46 newest-cni-352233 kubelet[757]: I1102 14:17:46.179023     757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2cec75f2-36fd-49c7-8644-941b68023b1b-lib-modules\") pod \"kube-proxy-vbc2x\" (UID: \"2cec75f2-36fd-49c7-8644-941b68023b1b\") " pod="kube-system/kube-proxy-vbc2x"
	Nov 02 14:17:46 newest-cni-352233 kubelet[757]: I1102 14:17:46.179048     757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/380d63bc-7a9c-4abb-9747-04c37075e8b0-xtables-lock\") pod \"kindnet-g4hrl\" (UID: \"380d63bc-7a9c-4abb-9747-04c37075e8b0\") " pod="kube-system/kindnet-g4hrl"
	Nov 02 14:17:46 newest-cni-352233 kubelet[757]: E1102 14:17:46.199802     757 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-352233\" already exists" pod="kube-system/etcd-newest-cni-352233"
	Nov 02 14:17:46 newest-cni-352233 kubelet[757]: I1102 14:17:46.199988     757 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-352233"
	Nov 02 14:17:46 newest-cni-352233 kubelet[757]: E1102 14:17:46.301587     757 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-352233\" already exists" pod="kube-system/kube-apiserver-newest-cni-352233"
	Nov 02 14:17:46 newest-cni-352233 kubelet[757]: I1102 14:17:46.301629     757 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-352233"
	Nov 02 14:17:46 newest-cni-352233 kubelet[757]: I1102 14:17:46.302049     757 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 02 14:17:46 newest-cni-352233 kubelet[757]: E1102 14:17:46.451871     757 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-352233\" already exists" pod="kube-system/kube-controller-manager-newest-cni-352233"
	Nov 02 14:17:46 newest-cni-352233 kubelet[757]: I1102 14:17:46.451904     757 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-352233"
	Nov 02 14:17:46 newest-cni-352233 kubelet[757]: W1102 14:17:46.490190     757 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/3dedeeb54f374fa01ce72bc2b41c87fc231415be9be044642fbcda8c122ba0ff/crio-e1fa8f5fb4b1cea176ee321c281d52ebda8ad208bb3aa0d83fd08aadfcbaa32b WatchSource:0}: Error finding container e1fa8f5fb4b1cea176ee321c281d52ebda8ad208bb3aa0d83fd08aadfcbaa32b: Status 404 returned error can't find the container with id e1fa8f5fb4b1cea176ee321c281d52ebda8ad208bb3aa0d83fd08aadfcbaa32b
	Nov 02 14:17:46 newest-cni-352233 kubelet[757]: W1102 14:17:46.565296     757 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/3dedeeb54f374fa01ce72bc2b41c87fc231415be9be044642fbcda8c122ba0ff/crio-3f7e331cfbe6b091e6692d5baee4cc035411be27915fee94754d121b45f88a60 WatchSource:0}: Error finding container 3f7e331cfbe6b091e6692d5baee4cc035411be27915fee94754d121b45f88a60: Status 404 returned error can't find the container with id 3f7e331cfbe6b091e6692d5baee4cc035411be27915fee94754d121b45f88a60
	Nov 02 14:17:46 newest-cni-352233 kubelet[757]: E1102 14:17:46.584003     757 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-352233\" already exists" pod="kube-system/kube-scheduler-newest-cni-352233"
	Nov 02 14:17:50 newest-cni-352233 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 02 14:17:50 newest-cni-352233 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 02 14:17:50 newest-cni-352233 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
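The kubelet journal above ends with systemd stopping the kubelet as part of the pause step, and the describe-nodes section shows why the node reports NotReady after restart: no CNI configuration file exists in /etc/cni/net.d/ until the kindnet pod comes back up and rewrites it. Both symptoms can be checked directly from the host; a minimal sketch, assuming the kic node container is addressable by the profile name:

	# list CNI configs inside the node container (empty until kindnet re-creates one)
	docker exec newest-cni-352233 ls -l /etc/cni/net.d/
	# read the Ready condition message straight from the API
	kubectl --context newest-cni-352233 get node newest-cni-352233 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].message}'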
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-352233 -n newest-cni-352233
E1102 14:17:54.065937  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/no-preload-150469/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 14:17:54.072258  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/no-preload-150469/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 14:17:54.083578  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/no-preload-150469/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 14:17:54.105032  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/no-preload-150469/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 14:17:54.148152  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/no-preload-150469/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 14:17:54.230318  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/no-preload-150469/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-352233 -n newest-cni-352233: exit status 2 (511.654002ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
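status --format takes a Go template over minikube's status struct, so the "Running" above reflects only the APIServer field while the exit code aggregates every component (the journal above shows systemd stopping the kubelet). Printing several fields side by side makes the mismatch visible; a sketch that combines the .Host and .APIServer fields this report already queries with an assumed .Kubelet field:

	out/minikube-linux-arm64 status -p newest-cni-352233 \
	  --format='host={{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}}'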
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-352233 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
E1102 14:17:54.392134  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/no-preload-150469/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:280: non-running pods: coredns-66bc5c9577-g4hfq storage-provisioner dashboard-metrics-scraper-6ffb444bf9-54n8n kubernetes-dashboard-855c9754f9-6b9k6
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-352233 describe pod coredns-66bc5c9577-g4hfq storage-provisioner dashboard-metrics-scraper-6ffb444bf9-54n8n kubernetes-dashboard-855c9754f9-6b9k6
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-352233 describe pod coredns-66bc5c9577-g4hfq storage-provisioner dashboard-metrics-scraper-6ffb444bf9-54n8n kubernetes-dashboard-855c9754f9-6b9k6: exit status 1 (93.331385ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-g4hfq" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-54n8n" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-6b9k6" not found

** /stderr **
helpers_test.go:287: kubectl --context newest-cni-352233 describe pod coredns-66bc5c9577-g4hfq storage-provisioner dashboard-metrics-scraper-6ffb444bf9-54n8n kubernetes-dashboard-855c9754f9-6b9k6: exit status 1
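The NotFound errors are most likely a namespace mismatch rather than the pods having vanished: the get po -A call above did find them, but describe pod without -n searches only the default namespace, while these pods live in kube-system and kubernetes-dashboard (namespaces inferred here from the pod name prefixes). A sketch of the namespaced lookups:

	kubectl --context newest-cni-352233 -n kube-system describe pod \
	  coredns-66bc5c9577-g4hfq storage-provisioner
	kubectl --context newest-cni-352233 -n kubernetes-dashboard describe pod \
	  dashboard-metrics-scraper-6ffb444bf9-54n8n kubernetes-dashboard-855c9754f9-6b9k6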
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-352233
helpers_test.go:243: (dbg) docker inspect newest-cni-352233:

-- stdout --
	[
	    {
	        "Id": "3dedeeb54f374fa01ce72bc2b41c87fc231415be9be044642fbcda8c122ba0ff",
	        "Created": "2025-11-02T14:16:47.051560266Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 503347,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-02T14:17:29.922131765Z",
	            "FinishedAt": "2025-11-02T14:17:28.709868678Z"
	        },
	        "Image": "sha256:1b8004df0b408966a254b2ecd4551aa85aaac4627e7e9cb1cefc14dfe51ec273",
	        "ResolvConfPath": "/var/lib/docker/containers/3dedeeb54f374fa01ce72bc2b41c87fc231415be9be044642fbcda8c122ba0ff/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3dedeeb54f374fa01ce72bc2b41c87fc231415be9be044642fbcda8c122ba0ff/hostname",
	        "HostsPath": "/var/lib/docker/containers/3dedeeb54f374fa01ce72bc2b41c87fc231415be9be044642fbcda8c122ba0ff/hosts",
	        "LogPath": "/var/lib/docker/containers/3dedeeb54f374fa01ce72bc2b41c87fc231415be9be044642fbcda8c122ba0ff/3dedeeb54f374fa01ce72bc2b41c87fc231415be9be044642fbcda8c122ba0ff-json.log",
	        "Name": "/newest-cni-352233",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "newest-cni-352233:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-352233",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3dedeeb54f374fa01ce72bc2b41c87fc231415be9be044642fbcda8c122ba0ff",
	                "LowerDir": "/var/lib/docker/overlay2/6a465501cbe8e86cbcc859ba8574cb7d3d77365eeff8339b92edb281a1936040-init/diff:/var/lib/docker/overlay2/53058afe6acd7391f639f551e07f446f6a39a991b699aea18507cb21a1652e97/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6a465501cbe8e86cbcc859ba8574cb7d3d77365eeff8339b92edb281a1936040/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6a465501cbe8e86cbcc859ba8574cb7d3d77365eeff8339b92edb281a1936040/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6a465501cbe8e86cbcc859ba8574cb7d3d77365eeff8339b92edb281a1936040/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-352233",
	                "Source": "/var/lib/docker/volumes/newest-cni-352233/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-352233",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-352233",
	                "name.minikube.sigs.k8s.io": "newest-cni-352233",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "80eb9b71012d0b0c0a13f8c2a0b8e50f6b749c949da6d788503577c54c271f30",
	            "SandboxKey": "/var/run/docker/netns/80eb9b71012d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33466"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33467"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33470"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33468"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33469"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-352233": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fa:89:bf:93:c0:28",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f61df99b10f05d6b77aff7bd79b1aba98b765bd7b0b260056e12ed71f894662d",
	                    "EndpointID": "05eaeae73997db87455a7df74df6f5a4f70fdf7ce88475cb2913cdd41c761c2c",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-352233",
	                        "3dedeeb54f37"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
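Every host port in the inspect output is bound dynamically on 127.0.0.1, so the API server's forwarded port (8443/tcp -> 33469 in this run) changes between runs. When reproducing a failure by hand, the mapping can be pulled with a Go template instead of scraping the full JSON; a minimal sketch:

	# prints the host port mapped to the API server, e.g. 33469 for this run
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' newest-cni-352233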
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-352233 -n newest-cni-352233
E1102 14:17:54.714285  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/no-preload-150469/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-352233 -n newest-cni-352233: exit status 2 (482.215877ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-352233 logs -n 25
E1102 14:17:55.355531  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/no-preload-150469/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 14:17:55.974787  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/old-k8s-version-873713/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p newest-cni-352233 logs -n 25: (1.482268876s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p default-k8s-diff-port-786183 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-786183 │ jenkins │ v1.37.0 │ 02 Nov 25 14:14 UTC │ 02 Nov 25 14:15 UTC │
	│ addons  │ enable metrics-server -p embed-certs-955646 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-955646           │ jenkins │ v1.37.0 │ 02 Nov 25 14:15 UTC │                     │
	│ stop    │ -p embed-certs-955646 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-955646           │ jenkins │ v1.37.0 │ 02 Nov 25 14:15 UTC │ 02 Nov 25 14:15 UTC │
	│ addons  │ enable dashboard -p embed-certs-955646 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-955646           │ jenkins │ v1.37.0 │ 02 Nov 25 14:15 UTC │ 02 Nov 25 14:15 UTC │
	│ start   │ -p embed-certs-955646 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-955646           │ jenkins │ v1.37.0 │ 02 Nov 25 14:15 UTC │ 02 Nov 25 14:16 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-786183 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-786183 │ jenkins │ v1.37.0 │ 02 Nov 25 14:16 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-786183 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-786183 │ jenkins │ v1.37.0 │ 02 Nov 25 14:16 UTC │ 02 Nov 25 14:16 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-786183 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-786183 │ jenkins │ v1.37.0 │ 02 Nov 25 14:16 UTC │ 02 Nov 25 14:16 UTC │
	│ start   │ -p default-k8s-diff-port-786183 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-786183 │ jenkins │ v1.37.0 │ 02 Nov 25 14:16 UTC │ 02 Nov 25 14:17 UTC │
	│ image   │ embed-certs-955646 image list --format=json                                                                                                                                                                                                   │ embed-certs-955646           │ jenkins │ v1.37.0 │ 02 Nov 25 14:16 UTC │ 02 Nov 25 14:16 UTC │
	│ pause   │ -p embed-certs-955646 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-955646           │ jenkins │ v1.37.0 │ 02 Nov 25 14:16 UTC │                     │
	│ delete  │ -p embed-certs-955646                                                                                                                                                                                                                         │ embed-certs-955646           │ jenkins │ v1.37.0 │ 02 Nov 25 14:16 UTC │ 02 Nov 25 14:16 UTC │
	│ delete  │ -p embed-certs-955646                                                                                                                                                                                                                         │ embed-certs-955646           │ jenkins │ v1.37.0 │ 02 Nov 25 14:16 UTC │ 02 Nov 25 14:16 UTC │
	│ start   │ -p newest-cni-352233 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-352233            │ jenkins │ v1.37.0 │ 02 Nov 25 14:16 UTC │ 02 Nov 25 14:17 UTC │
	│ addons  │ enable metrics-server -p newest-cni-352233 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-352233            │ jenkins │ v1.37.0 │ 02 Nov 25 14:17 UTC │                     │
	│ stop    │ -p newest-cni-352233 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-352233            │ jenkins │ v1.37.0 │ 02 Nov 25 14:17 UTC │ 02 Nov 25 14:17 UTC │
	│ addons  │ enable dashboard -p newest-cni-352233 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-352233            │ jenkins │ v1.37.0 │ 02 Nov 25 14:17 UTC │ 02 Nov 25 14:17 UTC │
	│ start   │ -p newest-cni-352233 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-352233            │ jenkins │ v1.37.0 │ 02 Nov 25 14:17 UTC │ 02 Nov 25 14:17 UTC │
	│ image   │ default-k8s-diff-port-786183 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-786183 │ jenkins │ v1.37.0 │ 02 Nov 25 14:17 UTC │ 02 Nov 25 14:17 UTC │
	│ pause   │ -p default-k8s-diff-port-786183 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-786183 │ jenkins │ v1.37.0 │ 02 Nov 25 14:17 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-786183                                                                                                                                                                                                               │ default-k8s-diff-port-786183 │ jenkins │ v1.37.0 │ 02 Nov 25 14:17 UTC │ 02 Nov 25 14:17 UTC │
	│ delete  │ -p default-k8s-diff-port-786183                                                                                                                                                                                                               │ default-k8s-diff-port-786183 │ jenkins │ v1.37.0 │ 02 Nov 25 14:17 UTC │ 02 Nov 25 14:17 UTC │
	│ start   │ -p auto-143736 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-143736                  │ jenkins │ v1.37.0 │ 02 Nov 25 14:17 UTC │                     │
	│ image   │ newest-cni-352233 image list --format=json                                                                                                                                                                                                    │ newest-cni-352233            │ jenkins │ v1.37.0 │ 02 Nov 25 14:17 UTC │ 02 Nov 25 14:17 UTC │
	│ pause   │ -p newest-cni-352233 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-352233            │ jenkins │ v1.37.0 │ 02 Nov 25 14:17 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/02 14:17:40
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1102 14:17:40.715191  506023 out.go:360] Setting OutFile to fd 1 ...
	I1102 14:17:40.715311  506023 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 14:17:40.715317  506023 out.go:374] Setting ErrFile to fd 2...
	I1102 14:17:40.715322  506023 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 14:17:40.715662  506023 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-293314/.minikube/bin
	I1102 14:17:40.716133  506023 out.go:368] Setting JSON to false
	I1102 14:17:40.717065  506023 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10813,"bootTime":1762082248,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1102 14:17:40.717148  506023 start.go:143] virtualization:  
	I1102 14:17:40.721180  506023 out.go:179] * [auto-143736] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1102 14:17:40.725823  506023 out.go:179]   - MINIKUBE_LOCATION=21808
	I1102 14:17:40.725866  506023 notify.go:221] Checking for updates...
	I1102 14:17:40.732939  506023 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1102 14:17:40.736044  506023 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-293314/kubeconfig
	I1102 14:17:40.739304  506023 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-293314/.minikube
	I1102 14:17:40.742394  506023 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1102 14:17:40.745393  506023 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1102 14:17:40.748929  506023 config.go:182] Loaded profile config "newest-cni-352233": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 14:17:40.749036  506023 driver.go:422] Setting default libvirt URI to qemu:///system
	I1102 14:17:40.802710  506023 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1102 14:17:40.802852  506023 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 14:17:40.912373  506023 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-02 14:17:40.902468217 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1102 14:17:40.912503  506023 docker.go:319] overlay module found
	I1102 14:17:40.915737  506023 out.go:179] * Using the docker driver based on user configuration
	I1102 14:17:40.918721  506023 start.go:309] selected driver: docker
	I1102 14:17:40.918742  506023 start.go:930] validating driver "docker" against <nil>
	I1102 14:17:40.918758  506023 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1102 14:17:40.919495  506023 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 14:17:41.014705  506023 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-02 14:17:41.003601505 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1102 14:17:41.014877  506023 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1102 14:17:41.015123  506023 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1102 14:17:41.018211  506023 out.go:179] * Using Docker driver with root privileges
	I1102 14:17:41.021916  506023 cni.go:84] Creating CNI manager for ""
	I1102 14:17:41.021996  506023 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1102 14:17:41.022012  506023 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1102 14:17:41.022094  506023 start.go:353] cluster config:
	{Name:auto-143736 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-143736 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
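The cluster config dumped above is the same structure that gets persisted as JSON when the profile is saved (see the "Saving config" line below). A minimal way to inspect the stored settings after the run, assuming this job's MINIKUBE_HOME:

	# paths taken from this log; adjust for other environments
	CFG=/home/jenkins/minikube-integration/21808-293314/.minikube/profiles/auto-143736/config.json
	python3 -m json.tool "$CFG" | grep -E '"(Memory|CPUs|KubernetesVersion|ContainerRuntime)"'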
	I1102 14:17:41.025774  506023 out.go:179] * Starting "auto-143736" primary control-plane node in "auto-143736" cluster
	I1102 14:17:41.028568  506023 cache.go:124] Beginning downloading kic base image for docker with crio
	I1102 14:17:41.031437  506023 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1102 14:17:41.034191  506023 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1102 14:17:41.034247  506023 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21808-293314/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
	I1102 14:17:41.034260  506023 cache.go:59] Caching tarball of preloaded images
	I1102 14:17:41.034389  506023 preload.go:233] Found /home/jenkins/minikube-integration/21808-293314/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1102 14:17:41.034405  506023 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1102 14:17:41.034512  506023 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/auto-143736/config.json ...
	I1102 14:17:41.034536  506023 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/auto-143736/config.json: {Name:mkeb49beb903906fd811b6180bfceaf5d3d55462 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 14:17:41.034723  506023 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1102 14:17:41.069566  506023 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1102 14:17:41.069596  506023 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1102 14:17:41.069610  506023 cache.go:233] Successfully downloaded all kic artifacts
	I1102 14:17:41.069640  506023 start.go:360] acquireMachinesLock for auto-143736: {Name:mk583c10954ec76136345a56d1c6b54d3bd52999 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1102 14:17:41.069752  506023 start.go:364] duration metric: took 91.464µs to acquireMachinesLock for "auto-143736"
	I1102 14:17:41.069784  506023 start.go:93] Provisioning new machine with config: &{Name:auto-143736 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-143736 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1102 14:17:41.069860  506023 start.go:125] createHost starting for "" (driver="docker")
	I1102 14:17:39.590173  503184 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1102 14:17:39.590251  503184 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1102 14:17:39.701034  503184 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1102 14:17:39.701110  503184 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1102 14:17:39.778901  503184 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1102 14:17:39.778981  503184 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1102 14:17:39.810885  503184 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1102 14:17:39.810962  503184 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1102 14:17:39.876962  503184 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1102 14:17:39.877037  503184 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1102 14:17:39.947581  503184 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1102 14:17:39.947662  503184 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1102 14:17:40.006285  503184 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
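This single kubectl apply is the whole dashboard addon enable step. A quick post-apply check, assuming kubectl is pointed at the newest-cni-352233 cluster and the addon's usual kubernetes-dashboard namespace and deployment names:

	kubectl -n kubernetes-dashboard get deployments,services
	kubectl -n kubernetes-dashboard rollout status deployment/kubernetes-dashboard --timeout=120s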
	I1102 14:17:41.073214  506023 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1102 14:17:41.073451  506023 start.go:159] libmachine.API.Create for "auto-143736" (driver="docker")
	I1102 14:17:41.073498  506023 client.go:173] LocalClient.Create starting
	I1102 14:17:41.073591  506023 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem
	I1102 14:17:41.073631  506023 main.go:143] libmachine: Decoding PEM data...
	I1102 14:17:41.073649  506023 main.go:143] libmachine: Parsing certificate...
	I1102 14:17:41.073705  506023 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21808-293314/.minikube/certs/cert.pem
	I1102 14:17:41.073732  506023 main.go:143] libmachine: Decoding PEM data...
	I1102 14:17:41.073747  506023 main.go:143] libmachine: Parsing certificate...
	I1102 14:17:41.074112  506023 cli_runner.go:164] Run: docker network inspect auto-143736 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1102 14:17:41.106823  506023 cli_runner.go:211] docker network inspect auto-143736 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1102 14:17:41.106901  506023 network_create.go:284] running [docker network inspect auto-143736] to gather additional debugging logs...
	I1102 14:17:41.106927  506023 cli_runner.go:164] Run: docker network inspect auto-143736
	W1102 14:17:41.144892  506023 cli_runner.go:211] docker network inspect auto-143736 returned with exit code 1
	I1102 14:17:41.144922  506023 network_create.go:287] error running [docker network inspect auto-143736]: docker network inspect auto-143736: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-143736 not found
	I1102 14:17:41.144953  506023 network_create.go:289] output of [docker network inspect auto-143736]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-143736 not found
	
	** /stderr **
	I1102 14:17:41.145056  506023 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1102 14:17:41.181265  506023 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-ddf319108ac9 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:66:f7:2d:49:67:ff} reservation:<nil>}
	I1102 14:17:41.181640  506023 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-30b945568040 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:b6:b2:b0:cb:49:d7} reservation:<nil>}
	I1102 14:17:41.181893  506023 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-d23a3a2e266d IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:fa:42:95:8e:ae:52} reservation:<nil>}
	I1102 14:17:41.182291  506023 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400197f3b0}
	I1102 14:17:41.182308  506023 network_create.go:124] attempt to create docker network auto-143736 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1102 14:17:41.182384  506023 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-143736 auto-143736
	I1102 14:17:41.289462  506023 network_create.go:108] docker network auto-143736 192.168.76.0/24 created
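minikube walked the private 192.168.x.0/24 ranges above, skipped the three subnets held by other profiles, and created the bridge network on the first free one. The choice can be confirmed directly:

	# verify the subnet and gateway picked for this run
	docker network inspect auto-143736 \
		--format '{{range .IPAM.Config}}{{.Subnet}} via {{.Gateway}}{{end}}'
	# expected here: 192.168.76.0/24 via 192.168.76.1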
	I1102 14:17:41.289492  506023 kic.go:121] calculated static IP "192.168.76.2" for the "auto-143736" container
	I1102 14:17:41.289573  506023 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1102 14:17:41.313312  506023 cli_runner.go:164] Run: docker volume create auto-143736 --label name.minikube.sigs.k8s.io=auto-143736 --label created_by.minikube.sigs.k8s.io=true
	I1102 14:17:41.340414  506023 oci.go:103] Successfully created a docker volume auto-143736
	I1102 14:17:41.340504  506023 cli_runner.go:164] Run: docker run --rm --name auto-143736-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-143736 --entrypoint /usr/bin/test -v auto-143736:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1102 14:17:42.042349  506023 oci.go:107] Successfully prepared a docker volume auto-143736
	I1102 14:17:42.042392  506023 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1102 14:17:42.042412  506023 kic.go:194] Starting extracting preloaded images to volume ...
	I1102 14:17:42.042475  506023 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21808-293314/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-143736:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
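The preload tarball is unpacked straight into the profile's Docker volume (mounted at /var in the node container) so cri-o starts with a warm image store. A hedged spot-check of the extracted tree, using a throwaway busybox container:

	docker run --rm -v auto-143736:/var busybox ls /var
	# expect cri-o's storage tree (e.g. lib/containers) among the entries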
	I1102 14:17:47.347229  503184 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.094207751s)
	I1102 14:17:47.347293  503184 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (8.068380604s)
	I1102 14:17:47.347331  503184 api_server.go:52] waiting for apiserver process to appear ...
	I1102 14:17:47.347391  503184 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1102 14:17:47.347475  503184 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.997728527s)
	I1102 14:17:47.572189  503184 api_server.go:72] duration metric: took 8.878504905s to wait for apiserver process to appear ...
	I1102 14:17:47.572213  503184 api_server.go:88] waiting for apiserver healthz status ...
	I1102 14:17:47.572232  503184 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1102 14:17:47.573114  503184 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (7.566724314s)
	I1102 14:17:47.576225  503184 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-352233 addons enable metrics-server
	
	I1102 14:17:47.580809  503184 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1102 14:17:47.583890  503184 addons.go:515] duration metric: took 8.889838967s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1102 14:17:47.586112  503184 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1102 14:17:47.586135  503184 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
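The 500 above is expected this early in a restart: every check passes except the rbac/bootstrap-roles post-start hook, which flips to ok once the default RBAC roles are reconciled; the very next probe at 14:17:48 returns 200. The same verbose breakdown can be requested by hand:

	# manual probe; -k because the apiserver serves a cluster-internal cert
	curl -sk 'https://192.168.85.2:8443/healthz?verbose'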
	I1102 14:17:48.072353  503184 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1102 14:17:48.102885  503184 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1102 14:17:48.106404  503184 api_server.go:141] control plane version: v1.34.1
	I1102 14:17:48.106436  503184 api_server.go:131] duration metric: took 534.215796ms to wait for apiserver health ...
	I1102 14:17:48.106446  503184 system_pods.go:43] waiting for kube-system pods to appear ...
	I1102 14:17:48.122410  503184 system_pods.go:59] 8 kube-system pods found
	I1102 14:17:48.122449  503184 system_pods.go:61] "coredns-66bc5c9577-g4hfq" [249838ab-df11-4a0c-a2ef-a1b05a0e2660] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1102 14:17:48.122466  503184 system_pods.go:61] "etcd-newest-cni-352233" [d797a2fa-d3f0-4180-88c1-5417b262b322] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1102 14:17:48.122472  503184 system_pods.go:61] "kindnet-g4hrl" [380d63bc-7a9c-4abb-9747-04c37075e8b0] Running
	I1102 14:17:48.122481  503184 system_pods.go:61] "kube-apiserver-newest-cni-352233" [ea69c744-43cb-4464-9da8-0768bd8820b1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1102 14:17:48.122487  503184 system_pods.go:61] "kube-controller-manager-newest-cni-352233" [5e48eb79-6be2-4f01-99bc-be7c2f15d45a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1102 14:17:48.122492  503184 system_pods.go:61] "kube-proxy-vbc2x" [2cec75f2-36fd-49c7-8644-941b68023b1b] Running
	I1102 14:17:48.122499  503184 system_pods.go:61] "kube-scheduler-newest-cni-352233" [33bacc9d-8a05-403d-865f-ba237a6aa780] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1102 14:17:48.122505  503184 system_pods.go:61] "storage-provisioner" [c94e5e11-33b4-4d32-9bbc-fa8e510911a5] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1102 14:17:48.122511  503184 system_pods.go:74] duration metric: took 16.05902ms to wait for pod list to return data ...
	I1102 14:17:48.122521  503184 default_sa.go:34] waiting for default service account to be created ...
	I1102 14:17:48.130337  503184 default_sa.go:45] found service account: "default"
	I1102 14:17:48.130363  503184 default_sa.go:55] duration metric: took 7.836417ms for default service account to be created ...
	I1102 14:17:48.130378  503184 kubeadm.go:587] duration metric: took 9.436696096s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1102 14:17:48.130394  503184 node_conditions.go:102] verifying NodePressure condition ...
	I1102 14:17:48.138807  503184 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1102 14:17:48.138889  503184 node_conditions.go:123] node cpu capacity is 2
	I1102 14:17:48.138919  503184 node_conditions.go:105] duration metric: took 8.519279ms to run NodePressure ...
	I1102 14:17:48.138960  503184 start.go:242] waiting for startup goroutines ...
	I1102 14:17:48.138986  503184 start.go:247] waiting for cluster config update ...
	I1102 14:17:48.139012  503184 start.go:256] writing updated cluster config ...
	I1102 14:17:48.139400  503184 ssh_runner.go:195] Run: rm -f paused
	I1102 14:17:48.332906  503184 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1102 14:17:48.336765  503184 out.go:179] * Done! kubectl is now configured to use "newest-cni-352233" cluster and "default" namespace by default
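kubectl 1.33 against a 1.34 control plane is within kubectl's supported skew of one minor version in either direction, hence the informational note rather than a failure. The skew can be rechecked at any time:

	kubectl version --output=json   # compare clientVersion vs serverVersion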
	I1102 14:17:46.972207  506023 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21808-293314/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v auto-143736:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.929697046s)
	I1102 14:17:46.972234  506023 kic.go:203] duration metric: took 4.929819402s to extract preloaded images to volume ...
	W1102 14:17:46.972369  506023 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1102 14:17:46.972468  506023 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1102 14:17:47.097688  506023 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-143736 --name auto-143736 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-143736 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-143736 --network auto-143736 --ip 192.168.76.2 --volume auto-143736:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1102 14:17:47.508400  506023 cli_runner.go:164] Run: docker container inspect auto-143736 --format={{.State.Running}}
	I1102 14:17:47.530580  506023 cli_runner.go:164] Run: docker container inspect auto-143736 --format={{.State.Status}}
	I1102 14:17:47.554935  506023 cli_runner.go:164] Run: docker exec auto-143736 stat /var/lib/dpkg/alternatives/iptables
	I1102 14:17:47.627058  506023 oci.go:144] the created container "auto-143736" has a running status.
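The docker run above publishes SSH, the API server, and a few auxiliary ports on random loopback ports (the --publish=127.0.0.1:: flags). The mapping minikube resolves later in this log (33471 for SSH) can be read back with docker port:

	docker port auto-143736 22/tcp    # e.g. 127.0.0.1:33471
	docker port auto-143736 8443/tcp  # host endpoint for the Kubernetes API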
	I1102 14:17:47.627092  506023 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21808-293314/.minikube/machines/auto-143736/id_rsa...
	I1102 14:17:48.770100  506023 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21808-293314/.minikube/machines/auto-143736/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1102 14:17:48.796053  506023 cli_runner.go:164] Run: docker container inspect auto-143736 --format={{.State.Status}}
	I1102 14:17:48.826230  506023 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1102 14:17:48.826248  506023 kic_runner.go:114] Args: [docker exec --privileged auto-143736 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1102 14:17:48.890520  506023 cli_runner.go:164] Run: docker container inspect auto-143736 --format={{.State.Status}}
	I1102 14:17:48.914597  506023 machine.go:94] provisionDockerMachine start ...
	I1102 14:17:48.914710  506023 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-143736
	I1102 14:17:48.949155  506023 main.go:143] libmachine: Using SSH client type: native
	I1102 14:17:48.949482  506023 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33471 <nil> <nil>}
	I1102 14:17:48.949491  506023 main.go:143] libmachine: About to run SSH command:
	hostname
	I1102 14:17:49.186277  506023 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-143736
	
	I1102 14:17:49.186319  506023 ubuntu.go:182] provisioning hostname "auto-143736"
	I1102 14:17:49.186386  506023 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-143736
	I1102 14:17:49.217226  506023 main.go:143] libmachine: Using SSH client type: native
	I1102 14:17:49.217543  506023 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33471 <nil> <nil>}
	I1102 14:17:49.217647  506023 main.go:143] libmachine: About to run SSH command:
	sudo hostname auto-143736 && echo "auto-143736" | sudo tee /etc/hostname
	I1102 14:17:49.489508  506023 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-143736
	
	I1102 14:17:49.489584  506023 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-143736
	I1102 14:17:49.512627  506023 main.go:143] libmachine: Using SSH client type: native
	I1102 14:17:49.512932  506023 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33471 <nil> <nil>}
	I1102 14:17:49.512949  506023 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-143736' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-143736/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-143736' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1102 14:17:49.724092  506023 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1102 14:17:49.724120  506023 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21808-293314/.minikube CaCertPath:/home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21808-293314/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21808-293314/.minikube}
	I1102 14:17:49.724146  506023 ubuntu.go:190] setting up certificates
	I1102 14:17:49.724157  506023 provision.go:84] configureAuth start
	I1102 14:17:49.724215  506023 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-143736
	I1102 14:17:49.758899  506023 provision.go:143] copyHostCerts
	I1102 14:17:49.758970  506023 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-293314/.minikube/ca.pem, removing ...
	I1102 14:17:49.758984  506023 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-293314/.minikube/ca.pem
	I1102 14:17:49.759689  506023 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21808-293314/.minikube/ca.pem (1082 bytes)
	I1102 14:17:49.759807  506023 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-293314/.minikube/cert.pem, removing ...
	I1102 14:17:49.761102  506023 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-293314/.minikube/cert.pem
	I1102 14:17:49.761189  506023 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21808-293314/.minikube/cert.pem (1123 bytes)
	I1102 14:17:49.761292  506023 exec_runner.go:144] found /home/jenkins/minikube-integration/21808-293314/.minikube/key.pem, removing ...
	I1102 14:17:49.761304  506023 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21808-293314/.minikube/key.pem
	I1102 14:17:49.761333  506023 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21808-293314/.minikube/key.pem (1675 bytes)
	I1102 14:17:49.761399  506023 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21808-293314/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca-key.pem org=jenkins.auto-143736 san=[127.0.0.1 192.168.76.2 auto-143736 localhost minikube]
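The server certificate is minted with SANs covering the loopback address, the container's static IP, and the hostname aliases in the san=[...] list above. With OpenSSL 1.1.1 or newer, the SAN list can be read back from the generated file:

	openssl x509 -noout -ext subjectAltName \
		-in /home/jenkins/minikube-integration/21808-293314/.minikube/machines/server.pem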
	I1102 14:17:50.391506  506023 provision.go:177] copyRemoteCerts
	I1102 14:17:50.391619  506023 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1102 14:17:50.391679  506023 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-143736
	I1102 14:17:50.409443  506023 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33471 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/auto-143736/id_rsa Username:docker}
	I1102 14:17:50.524355  506023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1102 14:17:50.557330  506023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1102 14:17:50.595200  506023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1102 14:17:50.629445  506023 provision.go:87] duration metric: took 905.263416ms to configureAuth
	I1102 14:17:50.629472  506023 ubuntu.go:206] setting minikube options for container-runtime
	I1102 14:17:50.629668  506023 config.go:182] Loaded profile config "auto-143736": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 14:17:50.629784  506023 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-143736
	I1102 14:17:50.661847  506023 main.go:143] libmachine: Using SSH client type: native
	I1102 14:17:50.662180  506023 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef110] 0x3f18d0 <nil>  [] 0s} 127.0.0.1 33471 <nil> <nil>}
	I1102 14:17:50.662195  506023 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1102 14:17:50.999998  506023 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1102 14:17:51.000062  506023 machine.go:97] duration metric: took 2.085432854s to provisionDockerMachine
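The SSH step above drops CRIO_MINIKUBE_OPTIONS into /etc/sysconfig/crio.minikube and restarts cri-o; presumably the kicbase image wires that file into the crio unit as an environment file (an assumption, not shown in this log). A sketch of verifying it from inside the node:

	# via: minikube -p auto-143736 ssh
	cat /etc/sysconfig/crio.minikube
	systemctl is-active crio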
	I1102 14:17:51.000095  506023 client.go:176] duration metric: took 9.926583889s to LocalClient.Create
	I1102 14:17:51.000159  506023 start.go:167] duration metric: took 9.926709626s to libmachine.API.Create "auto-143736"
	I1102 14:17:51.000188  506023 start.go:293] postStartSetup for "auto-143736" (driver="docker")
	I1102 14:17:51.000229  506023 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1102 14:17:51.000325  506023 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1102 14:17:51.000419  506023 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-143736
	I1102 14:17:51.019834  506023 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33471 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/auto-143736/id_rsa Username:docker}
	I1102 14:17:51.135321  506023 ssh_runner.go:195] Run: cat /etc/os-release
	I1102 14:17:51.140346  506023 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1102 14:17:51.140379  506023 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1102 14:17:51.140392  506023 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-293314/.minikube/addons for local assets ...
	I1102 14:17:51.140449  506023 filesync.go:126] Scanning /home/jenkins/minikube-integration/21808-293314/.minikube/files for local assets ...
	I1102 14:17:51.140553  506023 filesync.go:149] local asset: /home/jenkins/minikube-integration/21808-293314/.minikube/files/etc/ssl/certs/2951742.pem -> 2951742.pem in /etc/ssl/certs
	I1102 14:17:51.140671  506023 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1102 14:17:51.153373  506023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/files/etc/ssl/certs/2951742.pem --> /etc/ssl/certs/2951742.pem (1708 bytes)
	I1102 14:17:51.186823  506023 start.go:296] duration metric: took 186.603344ms for postStartSetup
	I1102 14:17:51.187197  506023 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-143736
	I1102 14:17:51.208335  506023 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/auto-143736/config.json ...
	I1102 14:17:51.208609  506023 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1102 14:17:51.208658  506023 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-143736
	I1102 14:17:51.233323  506023 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33471 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/auto-143736/id_rsa Username:docker}
	I1102 14:17:51.339608  506023 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1102 14:17:51.344186  506023 start.go:128] duration metric: took 10.274311499s to createHost
	I1102 14:17:51.344212  506023 start.go:83] releasing machines lock for "auto-143736", held for 10.274445548s
	I1102 14:17:51.344282  506023 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-143736
	I1102 14:17:51.361590  506023 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/295174.pem (1338 bytes)
	W1102 14:17:51.361648  506023 certs.go:480] ignoring /home/jenkins/minikube-integration/21808-293314/.minikube/certs/295174_empty.pem, impossibly tiny 0 bytes
	I1102 14:17:51.361658  506023 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca-key.pem (1679 bytes)
	I1102 14:17:51.361685  506023 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/ca.pem (1082 bytes)
	I1102 14:17:51.361718  506023 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/cert.pem (1123 bytes)
	I1102 14:17:51.361743  506023 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/certs/key.pem (1675 bytes)
	I1102 14:17:51.361790  506023 certs.go:484] found cert: /home/jenkins/minikube-integration/21808-293314/.minikube/files/etc/ssl/certs/2951742.pem (1708 bytes)
	I1102 14:17:51.361860  506023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/files/etc/ssl/certs/2951742.pem --> /usr/share/ca-certificates/2951742.pem (1708 bytes)
	I1102 14:17:51.361919  506023 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-143736
	I1102 14:17:51.378865  506023 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33471 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/auto-143736/id_rsa Username:docker}
	I1102 14:17:51.493816  506023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1102 14:17:51.511813  506023 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21808-293314/.minikube/certs/295174.pem --> /usr/share/ca-certificates/295174.pem (1338 bytes)
	I1102 14:17:51.528518  506023 ssh_runner.go:195] Run: openssl version
	I1102 14:17:51.534944  506023 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2951742.pem && ln -fs /usr/share/ca-certificates/2951742.pem /etc/ssl/certs/2951742.pem"
	I1102 14:17:51.543022  506023 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2951742.pem
	I1102 14:17:51.546446  506023 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  2 13:20 /usr/share/ca-certificates/2951742.pem
	I1102 14:17:51.546501  506023 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2951742.pem
	I1102 14:17:51.587568  506023 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2951742.pem /etc/ssl/certs/3ec20f2e.0"
	I1102 14:17:51.596216  506023 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1102 14:17:51.604634  506023 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1102 14:17:51.609042  506023 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  2 13:13 /usr/share/ca-certificates/minikubeCA.pem
	I1102 14:17:51.609145  506023 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1102 14:17:51.655686  506023 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1102 14:17:51.664520  506023 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/295174.pem && ln -fs /usr/share/ca-certificates/295174.pem /etc/ssl/certs/295174.pem"
	I1102 14:17:51.678062  506023 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/295174.pem
	I1102 14:17:51.684669  506023 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  2 13:20 /usr/share/ca-certificates/295174.pem
	I1102 14:17:51.684813  506023 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/295174.pem
	I1102 14:17:51.743956  506023 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/295174.pem /etc/ssl/certs/51391683.0"
	I1102 14:17:51.753260  506023 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-certificates >/dev/null 2>&1 && sudo update-ca-certificates || true"
	I1102 14:17:51.756922  506023 ssh_runner.go:195] Run: /bin/sh -c "command -v update-ca-trust >/dev/null 2>&1 && sudo update-ca-trust extract || true"
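The copy/hash/symlink sequence above is the standard OpenSSL trust-store convention: the PEM lands in /usr/share/ca-certificates, and a symlink named after its subject hash is placed in /etc/ssl/certs so OpenSSL-linked clients can find it. A minimal sketch of the same steps by hand (paths taken from the log; the hash value is whatever openssl prints, e.g. b5213941 for minikubeCA.pem):

# copy the CA into the shared certificate directory and expose it under /etc/ssl/certs
sudo cp minikubeCA.pem /usr/share/ca-certificates/minikubeCA.pem
sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
# derive the OpenSSL subject hash and link <hash>.0 to the certificate
hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
# refresh the consolidated trust store with whichever tool the distro ships
command -v update-ca-certificates >/dev/null 2>&1 && sudo update-ca-certificates
command -v update-ca-trust >/dev/null 2>&1 && sudo update-ca-trust extract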
	I1102 14:17:51.760588  506023 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1102 14:17:51.760666  506023 ssh_runner.go:195] Run: cat /version.json
	I1102 14:17:51.861278  506023 ssh_runner.go:195] Run: systemctl --version
	I1102 14:17:51.870158  506023 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1102 14:17:51.918868  506023 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1102 14:17:51.923842  506023 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1102 14:17:51.923936  506023 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1102 14:17:51.974207  506023 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
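The find/mv pipeline above sidelines conflicting bridge and podman CNI configs by appending a .mk_disabled suffix rather than deleting them. The equivalent done by hand, using the two filenames the log itself reports as disabled:

sudo mv /etc/cni/net.d/87-podman-bridge.conflist /etc/cni/net.d/87-podman-bridge.conflist.mk_disabled
sudo mv /etc/cni/net.d/10-crio-bridge.conflist.disabled /etc/cni/net.d/10-crio-bridge.conflist.disabled.mk_disabled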
	I1102 14:17:51.974232  506023 start.go:496] detecting cgroup driver to use...
	I1102 14:17:51.974263  506023 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1102 14:17:51.974324  506023 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1102 14:17:51.997093  506023 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1102 14:17:52.015771  506023 docker.go:218] disabling cri-docker service (if available) ...
	I1102 14:17:52.015838  506023 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1102 14:17:52.037380  506023 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1102 14:17:52.058432  506023 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1102 14:17:52.218729  506023 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1102 14:17:52.380021  506023 docker.go:234] disabling docker service ...
	I1102 14:17:52.380082  506023 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1102 14:17:52.417095  506023 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1102 14:17:52.432859  506023 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1102 14:17:52.583155  506023 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1102 14:17:52.752791  506023 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1102 14:17:52.768651  506023 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1102 14:17:52.782699  506023 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1102 14:17:52.782771  506023 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 14:17:52.791713  506023 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1102 14:17:52.791780  506023 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 14:17:52.801857  506023 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 14:17:52.814716  506023 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 14:17:52.831123  506023 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1102 14:17:52.840978  506023 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 14:17:52.857395  506023 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 14:17:52.878661  506023 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1102 14:17:52.892528  506023 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1102 14:17:52.902857  506023 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1102 14:17:52.911521  506023 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1102 14:17:53.080273  506023 ssh_runner.go:195] Run: sudo systemctl restart crio
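The sed edits above all target the same drop-in, /etc/crio/crio.conf.d/02-crio.conf. Reconstructed from the expressions in the log (the TOML section headers are assumed, since the sed commands only rewrite individual keys), the file converges on roughly:

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]

[crio.image]
pause_image = "registry.k8s.io/pause:3.10.1"

The daemon-reload and crio restart immediately above apply the drop-in; the "not allowed with host net enabled" warnings later in the CRI-O log are CRI-O skipping that same default sysctl for host-network pods.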
	I1102 14:17:53.240746  506023 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1102 14:17:53.240927  506023 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1102 14:17:53.245400  506023 start.go:564] Will wait 60s for crictl version
	I1102 14:17:53.245502  506023 ssh_runner.go:195] Run: which crictl
	I1102 14:17:53.251225  506023 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1102 14:17:53.291654  506023 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1102 14:17:53.291809  506023 ssh_runner.go:195] Run: crio --version
	I1102 14:17:53.329354  506023 ssh_runner.go:195] Run: crio --version
	I1102 14:17:53.371231  506023 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
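The crictl version output a few lines up can be reproduced by hand against the same socket (a minimal sketch; the endpoint matches the /etc/crictl.yaml written earlier, so the flag is redundant once that file is in place):

sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
# Version:            0.1.0
# RuntimeName:        cri-o
# RuntimeVersion:     1.34.1
# RuntimeApiVersion:  v1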
	
	
	==> CRI-O <==
	Nov 02 14:17:46 newest-cni-352233 crio[643]: time="2025-11-02T14:17:46.429486978Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 14:17:46 newest-cni-352233 crio[643]: time="2025-11-02T14:17:46.474892502Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=75228de2-9e36-460a-9f61-8a6f7ae31f64 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 02 14:17:46 newest-cni-352233 crio[643]: time="2025-11-02T14:17:46.484717625Z" level=info msg="Running pod sandbox: kube-system/kindnet-g4hrl/POD" id=b33b4c09-6e32-4a49-8041-2b7955c45584 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 02 14:17:46 newest-cni-352233 crio[643]: time="2025-11-02T14:17:46.484795041Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 14:17:46 newest-cni-352233 crio[643]: time="2025-11-02T14:17:46.497188281Z" level=info msg="Ran pod sandbox e1fa8f5fb4b1cea176ee321c281d52ebda8ad208bb3aa0d83fd08aadfcbaa32b with infra container: kube-system/kube-proxy-vbc2x/POD" id=75228de2-9e36-460a-9f61-8a6f7ae31f64 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 02 14:17:46 newest-cni-352233 crio[643]: time="2025-11-02T14:17:46.498497522Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=58c4fc7a-ea5a-423b-8e67-07b424c446e2 name=/runtime.v1.ImageService/ImageStatus
	Nov 02 14:17:46 newest-cni-352233 crio[643]: time="2025-11-02T14:17:46.520359245Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=a2029648-cf2c-4ba7-a5cd-e708850537d0 name=/runtime.v1.ImageService/ImageStatus
	Nov 02 14:17:46 newest-cni-352233 crio[643]: time="2025-11-02T14:17:46.531099406Z" level=info msg="Creating container: kube-system/kube-proxy-vbc2x/kube-proxy" id=b0a570c5-7b9f-40fc-abf6-5db04548a961 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 14:17:46 newest-cni-352233 crio[643]: time="2025-11-02T14:17:46.531224929Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 14:17:46 newest-cni-352233 crio[643]: time="2025-11-02T14:17:46.532924951Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=b33b4c09-6e32-4a49-8041-2b7955c45584 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 02 14:17:46 newest-cni-352233 crio[643]: time="2025-11-02T14:17:46.567074224Z" level=info msg="Ran pod sandbox 3f7e331cfbe6b091e6692d5baee4cc035411be27915fee94754d121b45f88a60 with infra container: kube-system/kindnet-g4hrl/POD" id=b33b4c09-6e32-4a49-8041-2b7955c45584 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 02 14:17:46 newest-cni-352233 crio[643]: time="2025-11-02T14:17:46.569711577Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=e73c8038-0025-40f1-96e2-22e4feab9442 name=/runtime.v1.ImageService/ImageStatus
	Nov 02 14:17:46 newest-cni-352233 crio[643]: time="2025-11-02T14:17:46.579258921Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=95cc3912-9922-4b84-9f44-6c1cebd4b117 name=/runtime.v1.ImageService/ImageStatus
	Nov 02 14:17:46 newest-cni-352233 crio[643]: time="2025-11-02T14:17:46.581332682Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 14:17:46 newest-cni-352233 crio[643]: time="2025-11-02T14:17:46.584530542Z" level=info msg="Creating container: kube-system/kindnet-g4hrl/kindnet-cni" id=95acf843-fc55-4078-ba77-8b4affe4c510 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 14:17:46 newest-cni-352233 crio[643]: time="2025-11-02T14:17:46.589048539Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 14:17:46 newest-cni-352233 crio[643]: time="2025-11-02T14:17:46.592619015Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 14:17:46 newest-cni-352233 crio[643]: time="2025-11-02T14:17:46.638723648Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 14:17:46 newest-cni-352233 crio[643]: time="2025-11-02T14:17:46.639213778Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 02 14:17:46 newest-cni-352233 crio[643]: time="2025-11-02T14:17:46.659857667Z" level=info msg="Created container 0758ad546409c66b98af0fa14dc53e3893a7ce5029245e7776e9ae8c51ed9fde: kube-system/kube-proxy-vbc2x/kube-proxy" id=b0a570c5-7b9f-40fc-abf6-5db04548a961 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 14:17:46 newest-cni-352233 crio[643]: time="2025-11-02T14:17:46.660661959Z" level=info msg="Starting container: 0758ad546409c66b98af0fa14dc53e3893a7ce5029245e7776e9ae8c51ed9fde" id=4ab161b7-8b57-4080-8b9e-a19a0767cdb7 name=/runtime.v1.RuntimeService/StartContainer
	Nov 02 14:17:46 newest-cni-352233 crio[643]: time="2025-11-02T14:17:46.671540139Z" level=info msg="Started container" PID=1090 containerID=0758ad546409c66b98af0fa14dc53e3893a7ce5029245e7776e9ae8c51ed9fde description=kube-system/kube-proxy-vbc2x/kube-proxy id=4ab161b7-8b57-4080-8b9e-a19a0767cdb7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e1fa8f5fb4b1cea176ee321c281d52ebda8ad208bb3aa0d83fd08aadfcbaa32b
	Nov 02 14:17:46 newest-cni-352233 crio[643]: time="2025-11-02T14:17:46.716070438Z" level=info msg="Created container b09745ad56a666ec68ba8c088ae630cdadc73e5c7ee08f53db30186570e7ab63: kube-system/kindnet-g4hrl/kindnet-cni" id=95acf843-fc55-4078-ba77-8b4affe4c510 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 02 14:17:46 newest-cni-352233 crio[643]: time="2025-11-02T14:17:46.71686787Z" level=info msg="Starting container: b09745ad56a666ec68ba8c088ae630cdadc73e5c7ee08f53db30186570e7ab63" id=cd09921d-85b2-4d34-b5c8-9322a01e995a name=/runtime.v1.RuntimeService/StartContainer
	Nov 02 14:17:46 newest-cni-352233 crio[643]: time="2025-11-02T14:17:46.718756891Z" level=info msg="Started container" PID=1095 containerID=b09745ad56a666ec68ba8c088ae630cdadc73e5c7ee08f53db30186570e7ab63 description=kube-system/kindnet-g4hrl/kindnet-cni id=cd09921d-85b2-4d34-b5c8-9322a01e995a name=/runtime.v1.RuntimeService/StartContainer sandboxID=3f7e331cfbe6b091e6692d5baee4cc035411be27915fee94754d121b45f88a60
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	b09745ad56a66       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   9 seconds ago       Running             kindnet-cni               1                   3f7e331cfbe6b       kindnet-g4hrl                               kube-system
	0758ad546409c       05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9   9 seconds ago       Running             kube-proxy                1                   e1fa8f5fb4b1c       kube-proxy-vbc2x                            kube-system
	5f1604a94619f       7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a   17 seconds ago      Running             kube-controller-manager   1                   b23a1e074749f       kube-controller-manager-newest-cni-352233   kube-system
	e7f0e8be97d45       b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0   17 seconds ago      Running             kube-scheduler            1                   68492703c8525       kube-scheduler-newest-cni-352233            kube-system
	0604c0db8643e       43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196   17 seconds ago      Running             kube-apiserver            1                   6ad5a2daa9c70       kube-apiserver-newest-cni-352233            kube-system
	63723b5d8f835       a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e   17 seconds ago      Running             etcd                      1                   a3196d8dfa138       etcd-newest-cni-352233                      kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-352233
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=newest-cni-352233
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=170a9221ec214abbddb4c7cdac340516a92b239a
	                    minikube.k8s.io/name=newest-cni-352233
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_02T14_17_19_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 02 Nov 2025 14:17:15 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-352233
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 02 Nov 2025 14:17:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 02 Nov 2025 14:17:46 +0000   Sun, 02 Nov 2025 14:17:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 02 Nov 2025 14:17:46 +0000   Sun, 02 Nov 2025 14:17:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 02 Nov 2025 14:17:46 +0000   Sun, 02 Nov 2025 14:17:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sun, 02 Nov 2025 14:17:46 +0000   Sun, 02 Nov 2025 14:17:11 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-352233
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 ef38fbc8889a0e5f09e9dc0868f5cd19
	  System UUID:                73c25d57-fdc2-428f-850a-0ced46336189
	  Boot ID:                    4c303cf4-c0b6-4c0d-aa1a-d7509feb15e7
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-352233                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         38s
	  kube-system                 kindnet-g4hrl                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      33s
	  kube-system                 kube-apiserver-newest-cni-352233             250m (12%)    0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 kube-controller-manager-newest-cni-352233    200m (10%)    0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 kube-proxy-vbc2x                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-scheduler-newest-cni-352233             100m (5%)     0 (0%)      0 (0%)           0 (0%)         38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 30s                kube-proxy       
	  Normal   Starting                 8s                 kube-proxy       
	  Normal   NodeHasSufficientMemory  45s (x8 over 46s)  kubelet          Node newest-cni-352233 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    45s (x8 over 46s)  kubelet          Node newest-cni-352233 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     45s (x8 over 46s)  kubelet          Node newest-cni-352233 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    38s                kubelet          Node newest-cni-352233 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 38s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  38s                kubelet          Node newest-cni-352233 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     38s                kubelet          Node newest-cni-352233 status is now: NodeHasSufficientPID
	  Normal   Starting                 38s                kubelet          Starting kubelet.
	  Normal   RegisteredNode           34s                node-controller  Node newest-cni-352233 event: Registered Node newest-cni-352233 in Controller
	  Normal   Starting                 19s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 19s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  18s (x8 over 19s)  kubelet          Node newest-cni-352233 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    18s (x8 over 19s)  kubelet          Node newest-cni-352233 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     18s (x8 over 19s)  kubelet          Node newest-cni-352233 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           6s                 node-controller  Node newest-cni-352233 event: Registered Node newest-cni-352233 in Controller
	
	
	==> dmesg <==
	[Nov 2 13:57] overlayfs: idmapped layers are currently not supported
	[ +24.836033] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:58] overlayfs: idmapped layers are currently not supported
	[ +23.362553] overlayfs: idmapped layers are currently not supported
	[Nov 2 13:59] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:01] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:02] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:03] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:04] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:06] overlayfs: idmapped layers are currently not supported
	[ +50.469589] kauditd_printk_skb: 8 callbacks suppressed
	[Nov 2 14:07] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:08] overlayfs: idmapped layers are currently not supported
	[ +11.089512] overlayfs: idmapped layers are currently not supported
	[ +33.821233] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:09] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:10] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:11] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:13] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:14] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:15] overlayfs: idmapped layers are currently not supported
	[ +29.099512] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:16] overlayfs: idmapped layers are currently not supported
	[Nov 2 14:17] overlayfs: idmapped layers are currently not supported
	[ +27.045568] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [63723b5d8f835727bc7bf49c43e0299d139270a34c176299be3ee3d496669f3e] <==
	{"level":"warn","ts":"2025-11-02T14:17:42.697900Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:17:42.709142Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:17:42.742740Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:17:42.789769Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:17:42.824026Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:17:42.860404Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:17:42.886988Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:17:42.919727Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:17:42.971655Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51996","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:17:43.012088Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:17:43.055525Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:17:43.086921Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:17:43.137744Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52056","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:17:43.164702Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52072","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:17:43.218112Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:17:43.243081Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:17:43.311674Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52140","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:17:43.348461Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52158","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:17:43.386435Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:17:43.459850Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:17:43.491110Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:17:43.504164Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:17:43.558447Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-02T14:17:43.769072Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52260","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-02T14:17:46.569547Z","caller":"traceutil/trace.go:172","msg":"trace[1025710601] transaction","detail":"{read_only:false; number_of_response:0; response_revision:437; }","duration":"108.253401ms","start":"2025-11-02T14:17:46.461278Z","end":"2025-11-02T14:17:46.569531Z","steps":["trace[1025710601] 'process raft request'  (duration: 76.048338ms)","trace[1025710601] 'compare'  (duration: 32.015096ms)"],"step_count":2}
	
	
	==> kernel <==
	 14:17:56 up  3:00,  0 user,  load average: 4.39, 3.72, 3.11
	Linux newest-cni-352233 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b09745ad56a666ec68ba8c088ae630cdadc73e5c7ee08f53db30186570e7ab63] <==
	I1102 14:17:46.833881       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1102 14:17:46.834261       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1102 14:17:46.834428       1 main.go:148] setting mtu 1500 for CNI 
	I1102 14:17:46.834441       1 main.go:178] kindnetd IP family: "ipv4"
	I1102 14:17:46.834455       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-02T14:17:47Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1102 14:17:47.025068       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1102 14:17:47.027252       1 controller.go:381] "Waiting for informer caches to sync"
	I1102 14:17:47.027343       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1102 14:17:47.028246       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [0604c0db8643e3b16f14589b7b9f60583c1a4a8b0e341130d1a39881dde8b5f7] <==
	I1102 14:17:45.918430       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1102 14:17:45.968406       1 aggregator.go:171] initial CRD sync complete...
	I1102 14:17:45.968431       1 autoregister_controller.go:144] Starting autoregister controller
	I1102 14:17:45.968439       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1102 14:17:46.050721       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1102 14:17:46.053008       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1102 14:17:46.053027       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1102 14:17:46.053147       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1102 14:17:46.054572       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1102 14:17:46.061145       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1102 14:17:46.081466       1 cache.go:39] Caches are synced for autoregister controller
	I1102 14:17:46.122096       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1102 14:17:46.184799       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1102 14:17:46.245805       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1102 14:17:46.286355       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1102 14:17:46.911233       1 controller.go:667] quota admission added evaluator for: namespaces
	I1102 14:17:47.074285       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1102 14:17:47.223415       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1102 14:17:47.259738       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1102 14:17:47.496418       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.110.154.77"}
	I1102 14:17:47.559201       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.106.249.17"}
	I1102 14:17:50.490443       1 controller.go:667] quota admission added evaluator for: endpoints
	I1102 14:17:50.665854       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1102 14:17:50.728243       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1102 14:17:50.774262       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [5f1604a94619feed1ff5c1e5544c063d94fd2c34695705ec31d387e5efb1050c] <==
	I1102 14:17:50.433688       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"newest-cni-352233\" does not exist"
	I1102 14:17:50.442086       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1102 14:17:50.434024       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1102 14:17:50.442074       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1102 14:17:50.442079       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1102 14:17:50.452863       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1102 14:17:50.454524       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1102 14:17:50.454565       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1102 14:17:50.454581       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1102 14:17:50.454586       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1102 14:17:50.454591       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1102 14:17:50.462726       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1102 14:17:50.462805       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1102 14:17:50.474792       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1102 14:17:50.475455       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1102 14:17:50.475545       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1102 14:17:50.475561       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1102 14:17:50.475582       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1102 14:17:50.482931       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1102 14:17:50.501645       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1102 14:17:50.518955       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1102 14:17:50.530873       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1102 14:17:50.530981       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1102 14:17:50.531933       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1102 14:17:50.546605       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	
	
	==> kube-proxy [0758ad546409c66b98af0fa14dc53e3893a7ce5029245e7776e9ae8c51ed9fde] <==
	I1102 14:17:46.839524       1 server_linux.go:53] "Using iptables proxy"
	I1102 14:17:47.104579       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1102 14:17:47.305238       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1102 14:17:47.305276       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1102 14:17:47.305355       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1102 14:17:47.594216       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1102 14:17:47.594851       1 server_linux.go:132] "Using iptables Proxier"
	I1102 14:17:47.736871       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1102 14:17:47.742027       1 server.go:527] "Version info" version="v1.34.1"
	I1102 14:17:47.765830       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1102 14:17:47.767401       1 config.go:200] "Starting service config controller"
	I1102 14:17:47.791634       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1102 14:17:47.791655       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1102 14:17:47.768087       1 config.go:309] "Starting node config controller"
	I1102 14:17:47.791695       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1102 14:17:47.791701       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1102 14:17:47.767696       1 config.go:106] "Starting endpoint slice config controller"
	I1102 14:17:47.791708       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1102 14:17:47.791714       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1102 14:17:47.767710       1 config.go:403] "Starting serviceCIDR config controller"
	I1102 14:17:47.791768       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1102 14:17:47.791772       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [e7f0e8be97d4590df69fdb907f0a84b9f17b797f501da038a7715f96824ef2cb] <==
	I1102 14:17:41.277909       1 serving.go:386] Generated self-signed cert in-memory
	W1102 14:17:45.442113       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1102 14:17:45.442152       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1102 14:17:45.442162       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1102 14:17:45.442170       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1102 14:17:45.876343       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1102 14:17:45.878128       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1102 14:17:45.897059       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1102 14:17:45.897184       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1102 14:17:45.897200       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1102 14:17:45.897220       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1102 14:17:45.998687       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1102 14:17:46.039980       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1102 14:17:46.040094       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1102 14:17:46.040174       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1102 14:17:46.040244       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1102 14:17:46.076167       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	
	
	==> kubelet <==
	Nov 02 14:17:45 newest-cni-352233 kubelet[757]: I1102 14:17:45.502894     757 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-352233"
	Nov 02 14:17:45 newest-cni-352233 kubelet[757]: I1102 14:17:45.893851     757 apiserver.go:52] "Watching apiserver"
	Nov 02 14:17:46 newest-cni-352233 kubelet[757]: I1102 14:17:46.110311     757 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 02 14:17:46 newest-cni-352233 kubelet[757]: I1102 14:17:46.156224     757 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-352233"
	Nov 02 14:17:46 newest-cni-352233 kubelet[757]: I1102 14:17:46.156336     757 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-352233"
	Nov 02 14:17:46 newest-cni-352233 kubelet[757]: I1102 14:17:46.156379     757 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 02 14:17:46 newest-cni-352233 kubelet[757]: I1102 14:17:46.159004     757 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 02 14:17:46 newest-cni-352233 kubelet[757]: I1102 14:17:46.178856     757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/380d63bc-7a9c-4abb-9747-04c37075e8b0-lib-modules\") pod \"kindnet-g4hrl\" (UID: \"380d63bc-7a9c-4abb-9747-04c37075e8b0\") " pod="kube-system/kindnet-g4hrl"
	Nov 02 14:17:46 newest-cni-352233 kubelet[757]: I1102 14:17:46.178932     757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2cec75f2-36fd-49c7-8644-941b68023b1b-xtables-lock\") pod \"kube-proxy-vbc2x\" (UID: \"2cec75f2-36fd-49c7-8644-941b68023b1b\") " pod="kube-system/kube-proxy-vbc2x"
	Nov 02 14:17:46 newest-cni-352233 kubelet[757]: I1102 14:17:46.178970     757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/380d63bc-7a9c-4abb-9747-04c37075e8b0-cni-cfg\") pod \"kindnet-g4hrl\" (UID: \"380d63bc-7a9c-4abb-9747-04c37075e8b0\") " pod="kube-system/kindnet-g4hrl"
	Nov 02 14:17:46 newest-cni-352233 kubelet[757]: I1102 14:17:46.179023     757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2cec75f2-36fd-49c7-8644-941b68023b1b-lib-modules\") pod \"kube-proxy-vbc2x\" (UID: \"2cec75f2-36fd-49c7-8644-941b68023b1b\") " pod="kube-system/kube-proxy-vbc2x"
	Nov 02 14:17:46 newest-cni-352233 kubelet[757]: I1102 14:17:46.179048     757 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/380d63bc-7a9c-4abb-9747-04c37075e8b0-xtables-lock\") pod \"kindnet-g4hrl\" (UID: \"380d63bc-7a9c-4abb-9747-04c37075e8b0\") " pod="kube-system/kindnet-g4hrl"
	Nov 02 14:17:46 newest-cni-352233 kubelet[757]: E1102 14:17:46.199802     757 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-352233\" already exists" pod="kube-system/etcd-newest-cni-352233"
	Nov 02 14:17:46 newest-cni-352233 kubelet[757]: I1102 14:17:46.199988     757 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-352233"
	Nov 02 14:17:46 newest-cni-352233 kubelet[757]: E1102 14:17:46.301587     757 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-352233\" already exists" pod="kube-system/kube-apiserver-newest-cni-352233"
	Nov 02 14:17:46 newest-cni-352233 kubelet[757]: I1102 14:17:46.301629     757 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-352233"
	Nov 02 14:17:46 newest-cni-352233 kubelet[757]: I1102 14:17:46.302049     757 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 02 14:17:46 newest-cni-352233 kubelet[757]: E1102 14:17:46.451871     757 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-352233\" already exists" pod="kube-system/kube-controller-manager-newest-cni-352233"
	Nov 02 14:17:46 newest-cni-352233 kubelet[757]: I1102 14:17:46.451904     757 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-352233"
	Nov 02 14:17:46 newest-cni-352233 kubelet[757]: W1102 14:17:46.490190     757 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/3dedeeb54f374fa01ce72bc2b41c87fc231415be9be044642fbcda8c122ba0ff/crio-e1fa8f5fb4b1cea176ee321c281d52ebda8ad208bb3aa0d83fd08aadfcbaa32b WatchSource:0}: Error finding container e1fa8f5fb4b1cea176ee321c281d52ebda8ad208bb3aa0d83fd08aadfcbaa32b: Status 404 returned error can't find the container with id e1fa8f5fb4b1cea176ee321c281d52ebda8ad208bb3aa0d83fd08aadfcbaa32b
	Nov 02 14:17:46 newest-cni-352233 kubelet[757]: W1102 14:17:46.565296     757 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/3dedeeb54f374fa01ce72bc2b41c87fc231415be9be044642fbcda8c122ba0ff/crio-3f7e331cfbe6b091e6692d5baee4cc035411be27915fee94754d121b45f88a60 WatchSource:0}: Error finding container 3f7e331cfbe6b091e6692d5baee4cc035411be27915fee94754d121b45f88a60: Status 404 returned error can't find the container with id 3f7e331cfbe6b091e6692d5baee4cc035411be27915fee94754d121b45f88a60
	Nov 02 14:17:46 newest-cni-352233 kubelet[757]: E1102 14:17:46.584003     757 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-352233\" already exists" pod="kube-system/kube-scheduler-newest-cni-352233"
	Nov 02 14:17:50 newest-cni-352233 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 02 14:17:50 newest-cni-352233 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 02 14:17:50 newest-cni-352233 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-352233 -n newest-cni-352233
E1102 14:17:56.636764  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/no-preload-150469/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-352233 -n newest-cni-352233: exit status 2 (451.052347ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-352233 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-g4hfq storage-provisioner dashboard-metrics-scraper-6ffb444bf9-54n8n kubernetes-dashboard-855c9754f9-6b9k6
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-352233 describe pod coredns-66bc5c9577-g4hfq storage-provisioner dashboard-metrics-scraper-6ffb444bf9-54n8n kubernetes-dashboard-855c9754f9-6b9k6
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-352233 describe pod coredns-66bc5c9577-g4hfq storage-provisioner dashboard-metrics-scraper-6ffb444bf9-54n8n kubernetes-dashboard-855c9754f9-6b9k6: exit status 1 (112.385722ms)

** stderr **
	Error from server (NotFound): pods "coredns-66bc5c9577-g4hfq" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-54n8n" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-6b9k6" not found

** /stderr **
helpers_test.go:287: kubectl --context newest-cni-352233 describe pod coredns-66bc5c9577-g4hfq storage-provisioner dashboard-metrics-scraper-6ffb444bf9-54n8n kubernetes-dashboard-855c9754f9-6b9k6: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (7.67s)
E1102 14:24:05.434323  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/auto-143736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 14:24:05.440659  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/auto-143736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 14:24:05.451958  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/auto-143736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 14:24:05.473299  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/auto-143736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 14:24:05.514604  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/auto-143736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 14:24:05.596010  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/auto-143736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 14:24:05.757619  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/auto-143736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 14:24:06.079385  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/auto-143736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 14:24:06.721377  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/auto-143736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 14:24:08.003409  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/auto-143736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 14:24:10.565728  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/auto-143736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 14:24:15.687567  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/auto-143736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 14:24:25.187049  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/kindnet-143736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 14:24:25.193558  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/kindnet-143736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 14:24:25.205097  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/kindnet-143736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 14:24:25.226558  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/kindnet-143736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 14:24:25.268081  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/kindnet-143736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 14:24:25.349610  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/kindnet-143736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 14:24:25.511394  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/kindnet-143736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 14:24:25.833176  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/kindnet-143736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 14:24:25.929764  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/auto-143736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 14:24:26.474603  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/kindnet-143736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 14:24:27.756032  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/kindnet-143736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 14:24:30.317966  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/kindnet-143736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 14:24:35.440075  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/kindnet-143736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 14:24:45.682202  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/kindnet-143736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 14:24:46.411190  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/auto-143736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
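The retry intervals in the run above roughly double between attempts (about 6ms, 12ms, 21ms, 41ms, 81ms, 160ms, 320ms, 640ms, 1.3s, 2.6s, 5.1s, 10s), the signature of an exponential-backoff reload loop: the client keeps trying to re-read a client certificate whose profile directory has already been deleted. A minimal Go sketch of that retry shape (the loadCert helper, starting delay, and cap are illustrative, not the actual client-go cert_rotation code):

package main

import (
	"fmt"
	"os"
	"time"
)

// loadCert is a hypothetical stand-in for reloading a client certificate.
func loadCert(path string) error {
	_, err := os.ReadFile(path)
	return err
}

func main() {
	path := "/home/jenkins/.minikube/profiles/auto-143736/client.crt" // illustrative
	delay := 10 * time.Millisecond
	const maxDelay = 10 * time.Second
	for {
		if err := loadCert(path); err != nil {
			fmt.Fprintf(os.Stderr, "Loading client cert failed: %v\n", err)
		} else {
			fmt.Println("certificate reloaded")
			return
		}
		time.Sleep(delay)
		if delay < maxDelay {
			delay *= 2 // double the wait after every failure, up to the cap
		}
	}
}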

                                                
                                    

Test pass (259/327)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 10.55
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.19
9 TestDownloadOnly/v1.28.0/DeleteAll 0.39
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.24
12 TestDownloadOnly/v1.34.1/json-events 4.96
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.09
18 TestDownloadOnly/v1.34.1/DeleteAll 0.22
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.59
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
27 TestAddons/Setup 173.39
31 TestAddons/serial/GCPAuth/Namespaces 0.18
32 TestAddons/serial/GCPAuth/FakeCredentials 8.81
48 TestAddons/StoppedEnableDisable 12.54
49 TestCertOptions 38.34
50 TestCertExpiration 338.78
52 TestForceSystemdFlag 41.21
53 TestForceSystemdEnv 44.7
58 TestErrorSpam/setup 32
59 TestErrorSpam/start 0.77
60 TestErrorSpam/status 1.1
61 TestErrorSpam/pause 5.67
62 TestErrorSpam/unpause 5.46
63 TestErrorSpam/stop 1.52
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 80.81
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 29.45
70 TestFunctional/serial/KubeContext 0.07
71 TestFunctional/serial/KubectlGetPods 0.11
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.44
75 TestFunctional/serial/CacheCmd/cache/add_local 1.09
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.31
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.84
80 TestFunctional/serial/CacheCmd/cache/delete 0.12
81 TestFunctional/serial/MinikubeKubectlCmd 0.15
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
83 TestFunctional/serial/ExtraConfig 33.97
84 TestFunctional/serial/ComponentHealth 0.11
85 TestFunctional/serial/LogsCmd 1.51
86 TestFunctional/serial/LogsFileCmd 1.55
87 TestFunctional/serial/InvalidService 4.29
89 TestFunctional/parallel/ConfigCmd 0.52
90 TestFunctional/parallel/DashboardCmd 13.35
91 TestFunctional/parallel/DryRun 0.68
92 TestFunctional/parallel/InternationalLanguage 0.26
93 TestFunctional/parallel/StatusCmd 1.26
98 TestFunctional/parallel/AddonsCmd 0.2
99 TestFunctional/parallel/PersistentVolumeClaim 24.64
101 TestFunctional/parallel/SSHCmd 0.71
102 TestFunctional/parallel/CpCmd 2.09
104 TestFunctional/parallel/FileSync 0.34
105 TestFunctional/parallel/CertSync 1.92
109 TestFunctional/parallel/NodeLabels 0.09
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.75
113 TestFunctional/parallel/License 0.33
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.61
116 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
118 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.35
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.11
120 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
124 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
126 TestFunctional/parallel/ProfileCmd/profile_not_create 0.46
127 TestFunctional/parallel/ProfileCmd/profile_list 0.45
128 TestFunctional/parallel/ProfileCmd/profile_json_output 0.44
129 TestFunctional/parallel/MountCmd/any-port 7.98
130 TestFunctional/parallel/MountCmd/specific-port 2.01
131 TestFunctional/parallel/MountCmd/VerifyCleanup 1.57
132 TestFunctional/parallel/ServiceCmd/List 0.62
133 TestFunctional/parallel/ServiceCmd/JSONOutput 0.61
137 TestFunctional/parallel/Version/short 0.07
138 TestFunctional/parallel/Version/components 0.95
139 TestFunctional/parallel/ImageCommands/ImageListShort 0.27
140 TestFunctional/parallel/ImageCommands/ImageListTable 0.27
141 TestFunctional/parallel/ImageCommands/ImageListJson 0.28
142 TestFunctional/parallel/ImageCommands/ImageListYaml 0.32
143 TestFunctional/parallel/ImageCommands/ImageBuild 4.05
144 TestFunctional/parallel/ImageCommands/Setup 0.59
149 TestFunctional/parallel/ImageCommands/ImageRemove 0.52
152 TestFunctional/parallel/UpdateContextCmd/no_changes 0.18
153 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.18
154 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.15
155 TestFunctional/delete_echo-server_images 0.05
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.01
162 TestMultiControlPlane/serial/StartCluster 176.56
163 TestMultiControlPlane/serial/DeployApp 6.95
164 TestMultiControlPlane/serial/PingHostFromPods 1.6
165 TestMultiControlPlane/serial/AddWorkerNode 60.77
166 TestMultiControlPlane/serial/NodeLabels 0.11
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.07
168 TestMultiControlPlane/serial/CopyFile 19.93
169 TestMultiControlPlane/serial/StopSecondaryNode 12.83
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.78
171 TestMultiControlPlane/serial/RestartSecondaryNode 28.11
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.24
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 126.73
174 TestMultiControlPlane/serial/DeleteSecondaryNode 11.76
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.8
176 TestMultiControlPlane/serial/StopCluster 36.04
177 TestMultiControlPlane/serial/RestartCluster 94.08
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.76
179 TestMultiControlPlane/serial/AddSecondaryNode 81.57
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.04
185 TestJSONOutput/start/Command 79.62
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 5.85
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.24
210 TestKicCustomNetwork/create_custom_network 39.93
211 TestKicCustomNetwork/use_default_bridge_network 39.96
212 TestKicExistingNetwork 35.93
213 TestKicCustomSubnet 39.32
214 TestKicStaticIP 35.9
215 TestMainNoArgs 0.06
216 TestMinikubeProfile 73.5
219 TestMountStart/serial/StartWithMountFirst 7.13
220 TestMountStart/serial/VerifyMountFirst 0.27
221 TestMountStart/serial/StartWithMountSecond 9.81
222 TestMountStart/serial/VerifyMountSecond 0.27
223 TestMountStart/serial/DeleteFirst 1.73
224 TestMountStart/serial/VerifyMountPostDelete 0.27
225 TestMountStart/serial/Stop 1.29
226 TestMountStart/serial/RestartStopped 8.01
227 TestMountStart/serial/VerifyMountPostStop 0.27
230 TestMultiNode/serial/FreshStart2Nodes 141
231 TestMultiNode/serial/DeployApp2Nodes 4.82
232 TestMultiNode/serial/PingHostFrom2Pods 0.92
233 TestMultiNode/serial/AddNode 57.74
234 TestMultiNode/serial/MultiNodeLabels 0.1
235 TestMultiNode/serial/ProfileList 0.71
236 TestMultiNode/serial/CopyFile 10.45
237 TestMultiNode/serial/StopNode 2.53
238 TestMultiNode/serial/StartAfterStop 8.85
239 TestMultiNode/serial/RestartKeepsNodes 80.75
240 TestMultiNode/serial/DeleteNode 5.71
241 TestMultiNode/serial/StopMultiNode 23.97
242 TestMultiNode/serial/RestartMultiNode 51.31
243 TestMultiNode/serial/ValidateNameConflict 37.74
248 TestPreload 126.33
253 TestInsufficientStorage 13.95
254 TestRunningBinaryUpgrade 55.92
256 TestKubernetesUpgrade 215.37
257 TestMissingContainerUpgrade 114.05
259 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
260 TestNoKubernetes/serial/StartWithK8s 48.69
261 TestNoKubernetes/serial/StartWithStopK8s 30.37
262 TestNoKubernetes/serial/Start 10.81
263 TestNoKubernetes/serial/VerifyK8sNotRunning 0.29
264 TestNoKubernetes/serial/ProfileList 1
265 TestNoKubernetes/serial/Stop 1.29
266 TestNoKubernetes/serial/StartNoArgs 8.6
267 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.37
268 TestStoppedBinaryUpgrade/Setup 0.76
269 TestStoppedBinaryUpgrade/Upgrade 66.3
270 TestStoppedBinaryUpgrade/MinikubeLogs 1.66
279 TestPause/serial/Start 86.45
280 TestPause/serial/SecondStartNoReconfiguration 40.42
288 TestNetworkPlugins/group/false 3.85
294 TestStartStop/group/old-k8s-version/serial/FirstStart 61.77
295 TestStartStop/group/old-k8s-version/serial/DeployApp 9.42
297 TestStartStop/group/old-k8s-version/serial/Stop 12.08
298 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
299 TestStartStop/group/old-k8s-version/serial/SecondStart 53.21
300 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
301 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.11
302 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
305 TestStartStop/group/no-preload/serial/FirstStart 63.43
306 TestStartStop/group/no-preload/serial/DeployApp 10.3
308 TestStartStop/group/no-preload/serial/Stop 12.01
309 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.23
310 TestStartStop/group/no-preload/serial/SecondStart 55.29
312 TestStartStop/group/embed-certs/serial/FirstStart 86.41
313 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
314 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
315 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
318 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 83.3
319 TestStartStop/group/embed-certs/serial/DeployApp 8.38
321 TestStartStop/group/embed-certs/serial/Stop 12.04
322 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.18
323 TestStartStop/group/embed-certs/serial/SecondStart 54.83
324 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.35
326 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.04
327 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.03
328 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
329 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 56.68
330 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.15
331 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.32
334 TestStartStop/group/newest-cni/serial/FirstStart 44.58
335 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
336 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.18
337 TestStartStop/group/newest-cni/serial/DeployApp 0
339 TestStartStop/group/newest-cni/serial/Stop 1.36
340 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
341 TestStartStop/group/newest-cni/serial/SecondStart 19.76
342 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.35
344 TestNetworkPlugins/group/auto/Start 84.25
345 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
346 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
347 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.35
349 TestNetworkPlugins/group/kindnet/Start 85.38
350 TestNetworkPlugins/group/auto/KubeletFlags 0.31
351 TestNetworkPlugins/group/auto/NetCatPod 10.29
352 TestNetworkPlugins/group/auto/DNS 0.16
353 TestNetworkPlugins/group/auto/Localhost 0.14
354 TestNetworkPlugins/group/auto/HairPin 0.14
355 TestNetworkPlugins/group/kindnet/ControllerPod 6
356 TestNetworkPlugins/group/kindnet/KubeletFlags 0.35
357 TestNetworkPlugins/group/kindnet/NetCatPod 12.36
358 TestNetworkPlugins/group/calico/Start 71.24
359 TestNetworkPlugins/group/kindnet/DNS 0.18
360 TestNetworkPlugins/group/kindnet/Localhost 0.24
361 TestNetworkPlugins/group/kindnet/HairPin 0.25
362 TestNetworkPlugins/group/custom-flannel/Start 72.05
363 TestNetworkPlugins/group/calico/ControllerPod 6.01
364 TestNetworkPlugins/group/calico/KubeletFlags 0.47
365 TestNetworkPlugins/group/calico/NetCatPod 12.5
366 TestNetworkPlugins/group/calico/DNS 0.16
367 TestNetworkPlugins/group/calico/Localhost 0.14
368 TestNetworkPlugins/group/calico/HairPin 0.14
369 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.43
370 TestNetworkPlugins/group/custom-flannel/NetCatPod 13.37
371 TestNetworkPlugins/group/enable-default-cni/Start 85.16
372 TestNetworkPlugins/group/custom-flannel/DNS 0.21
373 TestNetworkPlugins/group/custom-flannel/Localhost 0.16
374 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
375 TestNetworkPlugins/group/flannel/Start 61.62
376 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.31
377 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.27
378 TestNetworkPlugins/group/flannel/ControllerPod 6.01
379 TestNetworkPlugins/group/flannel/KubeletFlags 0.42
380 TestNetworkPlugins/group/enable-default-cni/DNS 0.2
381 TestNetworkPlugins/group/enable-default-cni/Localhost 0.17
382 TestNetworkPlugins/group/flannel/NetCatPod 10.41
383 TestNetworkPlugins/group/enable-default-cni/HairPin 0.21
384 TestNetworkPlugins/group/flannel/DNS 0.21
385 TestNetworkPlugins/group/flannel/Localhost 0.15
386 TestNetworkPlugins/group/flannel/HairPin 0.19
387 TestNetworkPlugins/group/bridge/Start 85.13
388 TestNetworkPlugins/group/bridge/KubeletFlags 0.3
389 TestNetworkPlugins/group/bridge/NetCatPod 10.26
390 TestNetworkPlugins/group/bridge/DNS 0.15
391 TestNetworkPlugins/group/bridge/Localhost 0.12
392 TestNetworkPlugins/group/bridge/HairPin 0.13
x
+
TestDownloadOnly/v1.28.0/json-events (10.55s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-390798 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-390798 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (10.552196775s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (10.55s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1102 13:12:57.322379  295174 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1102 13:12:57.322458  295174 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21808-293314/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
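The preload-exists subtest is a pure filesystem check: it looks for the cached tarball under the minikube home directory and passes in 0s because the earlier json-events run left it in place. A minimal sketch of that kind of existence check, with the path layout copied from the log lines above (the helper name and hard-coded v18 prefix are lifted from the filename, not from minikube's API):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// preloadPath mirrors the cache layout seen in the log above.
func preloadPath(minikubeHome, k8sVersion string) string {
	name := fmt.Sprintf("preloaded-images-k8s-v18-%s-cri-o-overlay-arm64.tar.lz4", k8sVersion)
	return filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
}

func main() {
	p := preloadPath(os.Getenv("MINIKUBE_HOME"), "v1.28.0")
	if _, err := os.Stat(p); err != nil {
		fmt.Println("preload missing:", err)
		return
	}
	fmt.Println("found local preload:", p)
}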

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.19s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-390798
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-390798: exit status 85 (192.172353ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-390798 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-390798 │ jenkins │ v1.37.0 │ 02 Nov 25 13:12 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/02 13:12:46
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1102 13:12:46.811962  295179 out.go:360] Setting OutFile to fd 1 ...
	I1102 13:12:46.812098  295179 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:12:46.812110  295179 out.go:374] Setting ErrFile to fd 2...
	I1102 13:12:46.812116  295179 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:12:46.812487  295179 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-293314/.minikube/bin
	W1102 13:12:46.812664  295179 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21808-293314/.minikube/config/config.json: open /home/jenkins/minikube-integration/21808-293314/.minikube/config/config.json: no such file or directory
	I1102 13:12:46.813676  295179 out.go:368] Setting JSON to true
	I1102 13:12:46.814504  295179 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":6919,"bootTime":1762082248,"procs":158,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1102 13:12:46.814576  295179 start.go:143] virtualization:  
	I1102 13:12:46.818631  295179 out.go:99] [download-only-390798] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1102 13:12:46.818908  295179 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21808-293314/.minikube/cache/preloaded-tarball: no such file or directory
	I1102 13:12:46.818981  295179 notify.go:221] Checking for updates...
	I1102 13:12:46.821618  295179 out.go:171] MINIKUBE_LOCATION=21808
	I1102 13:12:46.824821  295179 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1102 13:12:46.827733  295179 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21808-293314/kubeconfig
	I1102 13:12:46.830846  295179 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-293314/.minikube
	I1102 13:12:46.833834  295179 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1102 13:12:46.839529  295179 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1102 13:12:46.839892  295179 driver.go:422] Setting default libvirt URI to qemu:///system
	I1102 13:12:46.869540  295179 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1102 13:12:46.869675  295179 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 13:12:46.927056  295179 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-11-02 13:12:46.9174344 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1102 13:12:46.927164  295179 docker.go:319] overlay module found
	I1102 13:12:46.930147  295179 out.go:99] Using the docker driver based on user configuration
	I1102 13:12:46.930186  295179 start.go:309] selected driver: docker
	I1102 13:12:46.930193  295179 start.go:930] validating driver "docker" against <nil>
	I1102 13:12:46.930314  295179 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 13:12:46.981684  295179 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-11-02 13:12:46.972803252 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1102 13:12:46.981848  295179 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1102 13:12:46.982119  295179 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1102 13:12:46.982279  295179 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1102 13:12:46.985398  295179 out.go:171] Using Docker driver with root privileges
	I1102 13:12:46.988321  295179 cni.go:84] Creating CNI manager for ""
	I1102 13:12:46.988386  295179 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1102 13:12:46.988399  295179 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1102 13:12:46.988486  295179 start.go:353] cluster config:
	{Name:download-only-390798 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-390798 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1102 13:12:46.991445  295179 out.go:99] Starting "download-only-390798" primary control-plane node in "download-only-390798" cluster
	I1102 13:12:46.991464  295179 cache.go:124] Beginning downloading kic base image for docker with crio
	I1102 13:12:46.994191  295179 out.go:99] Pulling base image v0.0.48-1760939008-21773 ...
	I1102 13:12:46.994225  295179 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1102 13:12:46.994393  295179 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1102 13:12:47.011101  295179 cache.go:153] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1102 13:12:47.011890  295179 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory
	I1102 13:12:47.011996  295179 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1102 13:12:47.055637  295179 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1102 13:12:47.055664  295179 cache.go:59] Caching tarball of preloaded images
	I1102 13:12:47.055827  295179 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1102 13:12:47.060309  295179 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1102 13:12:47.060337  295179 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1102 13:12:47.145196  295179 preload.go:290] Got checksum from GCS API "e092595ade89dbfc477bd4cd6b9c633b"
	I1102 13:12:47.145321  295179 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:e092595ade89dbfc477bd4cd6b9c633b -> /home/jenkins/minikube-integration/21808-293314/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1102 13:12:50.694182  295179 cache.go:62] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1102 13:12:50.694563  295179 profile.go:143] Saving config to /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/download-only-390798/config.json ...
	I1102 13:12:50.694597  295179 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/download-only-390798/config.json: {Name:mk7dcf3ddc53bae1c0f45e324afd85a74d7ca957 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1102 13:12:50.694801  295179 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1102 13:12:50.695704  295179 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/21808-293314/.minikube/cache/linux/arm64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-390798 host does not exist
	  To start a cluster, run: "minikube start -p download-only-390798"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.19s)
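The Last Start log above shows the preload being fetched with an md5 checksum obtained from the GCS API and appended to the URL as ?checksum=md5:.... A minimal sketch of that download-then-verify step, assuming the checksum is already known (an illustration of the pattern, not minikube's download.go):

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
)

// fetchAndVerify downloads url to dst and compares the stream's MD5
// against wantMD5 (hex-encoded), as the ?checksum=md5:... URL implies.
func fetchAndVerify(url, dst, wantMD5 string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	f, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer f.Close()

	h := md5.New()
	// Tee the body so the file and the hash see the same bytes.
	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
	}
	return nil
}

func main() {
	err := fetchAndVerify(
		"https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4",
		"/tmp/preload.tar.lz4",
		"e092595ade89dbfc477bd4cd6b9c633b", // checksum value from the log above
	)
	if err != nil {
		fmt.Println("download failed:", err)
		return
	}
	fmt.Println("preload downloaded and verified")
}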

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAll (0.39s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.39s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.24s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-390798
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.24s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/json-events (4.96s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-741875 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-741875 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.963824927s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (4.96s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1102 13:13:03.114896  295174 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1102 13:13:03.114933  295174 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21808-293314/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-741875
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-741875: exit status 85 (93.144782ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-390798 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-390798 │ jenkins │ v1.37.0 │ 02 Nov 25 13:12 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 02 Nov 25 13:12 UTC │ 02 Nov 25 13:12 UTC │
	│ delete  │ -p download-only-390798                                                                                                                                                   │ download-only-390798 │ jenkins │ v1.37.0 │ 02 Nov 25 13:12 UTC │ 02 Nov 25 13:12 UTC │
	│ start   │ -o=json --download-only -p download-only-741875 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-741875 │ jenkins │ v1.37.0 │ 02 Nov 25 13:12 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/02 13:12:58
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1102 13:12:58.211330  295383 out.go:360] Setting OutFile to fd 1 ...
	I1102 13:12:58.211480  295383 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:12:58.211519  295383 out.go:374] Setting ErrFile to fd 2...
	I1102 13:12:58.211530  295383 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:12:58.211809  295383 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-293314/.minikube/bin
	I1102 13:12:58.212229  295383 out.go:368] Setting JSON to true
	I1102 13:12:58.213097  295383 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":6931,"bootTime":1762082248,"procs":151,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1102 13:12:58.213170  295383 start.go:143] virtualization:  
	I1102 13:12:58.238932  295383 out.go:99] [download-only-741875] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1102 13:12:58.239950  295383 notify.go:221] Checking for updates...
	I1102 13:12:58.272497  295383 out.go:171] MINIKUBE_LOCATION=21808
	I1102 13:12:58.319536  295383 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1102 13:12:58.351934  295383 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21808-293314/kubeconfig
	I1102 13:12:58.385164  295383 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-293314/.minikube
	I1102 13:12:58.417433  295383 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1102 13:12:58.496134  295383 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1102 13:12:58.496468  295383 driver.go:422] Setting default libvirt URI to qemu:///system
	I1102 13:12:58.519460  295383 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1102 13:12:58.519579  295383 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 13:12:58.576971  295383 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-11-02 13:12:58.56793631 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1102 13:12:58.577086  295383 docker.go:319] overlay module found
	I1102 13:12:58.624344  295383 out.go:99] Using the docker driver based on user configuration
	I1102 13:12:58.624395  295383 start.go:309] selected driver: docker
	I1102 13:12:58.624403  295383 start.go:930] validating driver "docker" against <nil>
	I1102 13:12:58.624537  295383 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 13:12:58.685301  295383 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-11-02 13:12:58.675853713 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1102 13:12:58.685467  295383 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1102 13:12:58.685753  295383 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1102 13:12:58.685920  295383 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1102 13:12:58.731809  295383 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-741875 host does not exist
	  To start a cluster, run: "minikube start -p download-only-741875"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.09s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.22s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-741875
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestBinaryMirror (0.59s)

                                                
                                                
=== RUN   TestBinaryMirror
I1102 13:13:04.278693  295174 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-605864 --alsologtostderr --binary-mirror http://127.0.0.1:39709 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-605864" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-605864
--- PASS: TestBinaryMirror (0.59s)
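TestBinaryMirror points minikube at --binary-mirror http://127.0.0.1:39709 so kubectl, kubelet, and kubeadm are fetched from a local server instead of dl.k8s.io. One plausible shape for such a mirror is a plain HTTP file server exposing the same release/<version>/bin/<os>/<arch>/ path layout that dl.k8s.io uses; the directory name below is an assumption for illustration, not what the test itself serves:

package main

import (
	"log"
	"net/http"
)

func main() {
	// Serve ./mirror, assumed to contain files laid out like
	// ./mirror/release/v1.34.1/bin/linux/arm64/kubectl (plus their .sha256 files),
	// then run: minikube start --download-only --binary-mirror http://127.0.0.1:39709
	log.Fatal(http.ListenAndServe("127.0.0.1:39709", http.FileServer(http.Dir("./mirror"))))
}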

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-230560
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-230560: exit status 85 (79.38102ms)

                                                
                                                
-- stdout --
	* Profile "addons-230560" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-230560"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)
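Both PreSetup subtests pass because minikube signals "profile does not exist" with the distinct exit status 85 rather than succeeding or returning a generic 1. A minimal sketch of asserting on that exit code from Go (binary path and profile name copied from the command above; reading anything more into the value 85 than what this run returned would be an assumption):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "addons", "enable", "dashboard", "-p", "addons-230560")
	err := cmd.Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 85 {
		fmt.Println("got expected exit status 85 for a non-existing cluster")
		return
	}
	fmt.Println("unexpected result:", err)
}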

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-230560
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-230560: exit status 85 (76.928999ms)

                                                
                                                
-- stdout --
	* Profile "addons-230560" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-230560"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                    
x
+
TestAddons/Setup (173.39s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p addons-230560 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-arm64 start -p addons-230560 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m53.388716134s)
--- PASS: TestAddons/Setup (173.39s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.18s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-230560 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-230560 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/FakeCredentials (8.81s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-230560 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-230560 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [e44da318-9eb6-4f4c-971d-b08f91cec38e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [e44da318-9eb6-4f4c-971d-b08f91cec38e] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.003743624s
addons_test.go:694: (dbg) Run:  kubectl --context addons-230560 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-230560 describe sa gcp-auth-test
addons_test.go:720: (dbg) Run:  kubectl --context addons-230560 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:744: (dbg) Run:  kubectl --context addons-230560 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.81s)
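The FakeCredentials checks above exec into the busybox pod and print the environment variable and credentials file that the gcp-auth webhook injects. A minimal in-pod probe doing the same thing (the variable names and the /google-app-creds.json path come from the commands above; everything else is illustrative):

package main

import (
	"fmt"
	"os"
)

func main() {
	// The gcp-auth addon is expected to point this at the mounted creds file.
	credsPath := os.Getenv("GOOGLE_APPLICATION_CREDENTIALS")
	if credsPath == "" {
		fmt.Println("GOOGLE_APPLICATION_CREDENTIALS not set")
		os.Exit(1)
	}
	data, err := os.ReadFile(credsPath) // e.g. /google-app-creds.json
	if err != nil {
		fmt.Println("cannot read creds:", err)
		os.Exit(1)
	}
	fmt.Printf("project=%s, creds bytes=%d\n", os.Getenv("GOOGLE_CLOUD_PROJECT"), len(data))
}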

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (12.54s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-230560
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-230560: (12.248801012s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-230560
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-230560
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-230560
--- PASS: TestAddons/StoppedEnableDisable (12.54s)

                                                
                                    
x
+
TestCertOptions (38.34s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-935084 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-935084 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (35.46320218s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-935084 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-935084 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-935084 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-935084" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-935084
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-935084: (2.149595514s)
--- PASS: TestCertOptions (38.34s)

TestCertExpiration (338.78s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-114321 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-114321 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (42.912464715s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-114321 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-114321 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (1m52.630139469s)
helpers_test.go:175: Cleaning up "cert-expiration-114321" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-114321
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-114321: (3.238001273s)
--- PASS: TestCertExpiration (338.78s)

TestForceSystemdFlag (41.21s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-518329 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-518329 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (38.350557496s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-518329 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-518329" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-518329
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-518329: (2.532434463s)
--- PASS: TestForceSystemdFlag (41.21s)

TestForceSystemdEnv (44.7s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-263133 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-263133 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (41.722264662s)
helpers_test.go:175: Cleaning up "force-systemd-env-263133" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-263133
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-263133: (2.980474309s)
--- PASS: TestForceSystemdEnv (44.70s)

TestErrorSpam/setup (32s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-942344 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-942344 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-942344 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-942344 --driver=docker  --container-runtime=crio: (32.001811314s)
--- PASS: TestErrorSpam/setup (32.00s)

TestErrorSpam/start (0.77s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-942344 --log_dir /tmp/nospam-942344 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-942344 --log_dir /tmp/nospam-942344 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-942344 --log_dir /tmp/nospam-942344 start --dry-run
--- PASS: TestErrorSpam/start (0.77s)

TestErrorSpam/status (1.1s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-942344 --log_dir /tmp/nospam-942344 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-942344 --log_dir /tmp/nospam-942344 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-942344 --log_dir /tmp/nospam-942344 status
--- PASS: TestErrorSpam/status (1.10s)

TestErrorSpam/pause (5.67s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-942344 --log_dir /tmp/nospam-942344 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-942344 --log_dir /tmp/nospam-942344 pause: exit status 80 (2.360064249s)

                                                
                                                
-- stdout --
	* Pausing node nospam-942344 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T13:19:55Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-942344 --log_dir /tmp/nospam-942344 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-942344 --log_dir /tmp/nospam-942344 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-942344 --log_dir /tmp/nospam-942344 pause: exit status 80 (1.648280166s)

                                                
                                                
-- stdout --
	* Pausing node nospam-942344 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T13:19:57Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-942344 --log_dir /tmp/nospam-942344 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-942344 --log_dir /tmp/nospam-942344 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-942344 --log_dir /tmp/nospam-942344 pause: exit status 80 (1.658790276s)

                                                
                                                
-- stdout --
	* Pausing node nospam-942344 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T13:19:59Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-942344 --log_dir /tmp/nospam-942344 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (5.67s)

TestErrorSpam/unpause (5.46s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-942344 --log_dir /tmp/nospam-942344 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-942344 --log_dir /tmp/nospam-942344 unpause: exit status 80 (2.193781611s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-942344 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T13:20:01Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-942344 --log_dir /tmp/nospam-942344 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-942344 --log_dir /tmp/nospam-942344 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-942344 --log_dir /tmp/nospam-942344 unpause: exit status 80 (1.722597314s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-942344 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T13:20:03Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-942344 --log_dir /tmp/nospam-942344 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-942344 --log_dir /tmp/nospam-942344 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-942344 --log_dir /tmp/nospam-942344 unpause: exit status 80 (1.546279161s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-942344 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-02T13:20:04Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-942344 --log_dir /tmp/nospam-942344 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.46s)

TestErrorSpam/stop (1.52s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-942344 --log_dir /tmp/nospam-942344 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-942344 --log_dir /tmp/nospam-942344 stop: (1.315456393s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-942344 --log_dir /tmp/nospam-942344 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-942344 --log_dir /tmp/nospam-942344 stop
--- PASS: TestErrorSpam/stop (1.52s)

TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21808-293314/.minikube/files/etc/test/nested/copy/295174/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (80.81s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-082350 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1102 13:20:59.515134  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/addons-230560/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 13:20:59.523007  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/addons-230560/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 13:20:59.534775  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/addons-230560/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 13:20:59.556235  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/addons-230560/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 13:20:59.597750  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/addons-230560/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 13:20:59.679305  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/addons-230560/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 13:20:59.840913  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/addons-230560/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 13:21:00.162527  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/addons-230560/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 13:21:00.804123  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/addons-230560/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 13:21:02.085809  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/addons-230560/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 13:21:04.647360  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/addons-230560/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 13:21:09.769348  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/addons-230560/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 13:21:20.010831  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/addons-230560/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-082350 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m20.805665736s)
--- PASS: TestFunctional/serial/StartWithProxy (80.81s)

TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (29.45s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1102 13:21:31.168126  295174 config.go:182] Loaded profile config "functional-082350": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-082350 --alsologtostderr -v=8
E1102 13:21:40.492345  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/addons-230560/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-082350 --alsologtostderr -v=8: (29.451522766s)
functional_test.go:678: soft start took 29.452058247s for "functional-082350" cluster.
I1102 13:22:00.620010  295174 config.go:182] Loaded profile config "functional-082350": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (29.45s)

TestFunctional/serial/KubeContext (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

TestFunctional/serial/KubectlGetPods (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-082350 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.11s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.44s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-082350 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-082350 cache add registry.k8s.io/pause:3.1: (1.126827839s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-082350 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-082350 cache add registry.k8s.io/pause:3.3: (1.165455852s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-082350 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-082350 cache add registry.k8s.io/pause:latest: (1.14279392s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.44s)

TestFunctional/serial/CacheCmd/cache/add_local (1.09s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-082350 /tmp/TestFunctionalserialCacheCmdcacheadd_local3876361858/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-082350 cache add minikube-local-cache-test:functional-082350
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-082350 cache delete minikube-local-cache-test:functional-082350
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-082350
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.09s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-082350 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.84s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-082350 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-082350 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-082350 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (303.785936ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-082350 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-082350 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.84s)

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.15s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-082350 kubectl -- --context functional-082350 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.15s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-082350 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

TestFunctional/serial/ExtraConfig (33.97s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-082350 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1102 13:22:21.453700  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/addons-230560/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-082350 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (33.970887891s)
functional_test.go:776: restart took 33.970987691s for "functional-082350" cluster.
I1102 13:22:41.956185  295174 config.go:182] Loaded profile config "functional-082350": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (33.97s)

TestFunctional/serial/ComponentHealth (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-082350 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)

TestFunctional/serial/LogsCmd (1.51s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-082350 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-082350 logs: (1.510362325s)
--- PASS: TestFunctional/serial/LogsCmd (1.51s)

TestFunctional/serial/LogsFileCmd (1.55s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-082350 logs --file /tmp/TestFunctionalserialLogsFileCmd2851436570/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-082350 logs --file /tmp/TestFunctionalserialLogsFileCmd2851436570/001/logs.txt: (1.542784039s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.55s)

TestFunctional/serial/InvalidService (4.29s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-082350 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-082350
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-082350: exit status 115 (379.533171ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:30711 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-082350 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.29s)

TestFunctional/parallel/ConfigCmd (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-082350 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-082350 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-082350 config get cpus: exit status 14 (89.764519ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-082350 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-082350 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-082350 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-082350 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-082350 config get cpus: exit status 14 (92.103198ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.52s)

TestFunctional/parallel/DashboardCmd (13.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-082350 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-082350 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 321761: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.35s)

TestFunctional/parallel/DryRun (0.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-082350 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-082350 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (307.449223ms)

                                                
                                                
-- stdout --
	* [functional-082350] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21808
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21808-293314/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-293314/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1102 13:33:17.597937  321172 out.go:360] Setting OutFile to fd 1 ...
	I1102 13:33:17.598101  321172 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:33:17.598113  321172 out.go:374] Setting ErrFile to fd 2...
	I1102 13:33:17.598119  321172 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:33:17.598367  321172 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-293314/.minikube/bin
	I1102 13:33:17.598800  321172 out.go:368] Setting JSON to false
	I1102 13:33:17.599877  321172 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8150,"bootTime":1762082248,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1102 13:33:17.599968  321172 start.go:143] virtualization:  
	I1102 13:33:17.603191  321172 out.go:179] * [functional-082350] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1102 13:33:17.607307  321172 out.go:179]   - MINIKUBE_LOCATION=21808
	I1102 13:33:17.607366  321172 notify.go:221] Checking for updates...
	I1102 13:33:17.613697  321172 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1102 13:33:17.616560  321172 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-293314/kubeconfig
	I1102 13:33:17.619454  321172 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-293314/.minikube
	I1102 13:33:17.622362  321172 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1102 13:33:17.627610  321172 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1102 13:33:17.633219  321172 config.go:182] Loaded profile config "functional-082350": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:33:17.633775  321172 driver.go:422] Setting default libvirt URI to qemu:///system
	I1102 13:33:17.696708  321172 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1102 13:33:17.696839  321172 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 13:33:17.794504  321172 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-02 13:33:17.784257435 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1102 13:33:17.794790  321172 docker.go:319] overlay module found
	I1102 13:33:17.799813  321172 out.go:179] * Using the docker driver based on existing profile
	I1102 13:33:17.802670  321172 start.go:309] selected driver: docker
	I1102 13:33:17.802685  321172 start.go:930] validating driver "docker" against &{Name:functional-082350 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-082350 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1102 13:33:17.802797  321172 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1102 13:33:17.806400  321172 out.go:203] 
	W1102 13:33:17.809219  321172 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1102 13:33:17.812020  321172 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-082350 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.68s)

TestFunctional/parallel/InternationalLanguage (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-082350 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-082350 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (258.00379ms)

                                                
                                                
-- stdout --
	* [functional-082350] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21808
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21808-293314/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-293314/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1102 13:33:17.319986  321100 out.go:360] Setting OutFile to fd 1 ...
	I1102 13:33:17.320196  321100 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:33:17.320205  321100 out.go:374] Setting ErrFile to fd 2...
	I1102 13:33:17.320210  321100 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:33:17.320607  321100 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-293314/.minikube/bin
	I1102 13:33:17.320987  321100 out.go:368] Setting JSON to false
	I1102 13:33:17.322051  321100 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":8150,"bootTime":1762082248,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1102 13:33:17.322122  321100 start.go:143] virtualization:  
	I1102 13:33:17.326573  321100 out.go:179] * [functional-082350] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1102 13:33:17.329716  321100 out.go:179]   - MINIKUBE_LOCATION=21808
	I1102 13:33:17.329767  321100 notify.go:221] Checking for updates...
	I1102 13:33:17.337176  321100 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1102 13:33:17.340193  321100 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-293314/kubeconfig
	I1102 13:33:17.343706  321100 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-293314/.minikube
	I1102 13:33:17.348570  321100 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1102 13:33:17.352077  321100 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1102 13:33:17.355540  321100 config.go:182] Loaded profile config "functional-082350": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:33:17.356126  321100 driver.go:422] Setting default libvirt URI to qemu:///system
	I1102 13:33:17.415008  321100 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1102 13:33:17.415167  321100 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 13:33:17.488553  321100 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-02 13:33:17.477303119 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1102 13:33:17.488703  321100 docker.go:319] overlay module found
	I1102 13:33:17.492469  321100 out.go:179] * Using the docker driver based on the existing profile
	I1102 13:33:17.495915  321100 start.go:309] selected driver: docker
	I1102 13:33:17.495938  321100 start.go:930] validating driver "docker" against &{Name:functional-082350 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-082350 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1102 13:33:17.496044  321100 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1102 13:33:17.499603  321100 out.go:203] 
	W1102 13:33:17.502401  321100 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1102 13:33:17.505330  321100 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.26s)

TestFunctional/parallel/StatusCmd (1.26s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-082350 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-082350 status -f host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-082350 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.26s)
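
The -f flag exercised at functional_test.go:875 is a Go template over minikube's status struct. A minimal sketch of the same call (the field names come from the logged command; the sample output line is illustrative of a healthy profile, not captured from this run):

out/minikube-linux-arm64 -p functional-082350 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
# Typical output shape when everything is up:
#   host:Running,kubelet:Running,apiserver:Running,kubeconfig:Configured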

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.2s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-082350 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-082350 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.20s)

TestFunctional/parallel/PersistentVolumeClaim (24.64s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [f7fa5507-74d1-40ec-b17e-a609a9ace2a8] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004122083s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-082350 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-082350 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-082350 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-082350 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [eea31f76-7339-4e8e-bc08-4ba1f351077b] Pending
helpers_test.go:352: "sp-pod" [eea31f76-7339-4e8e-bc08-4ba1f351077b] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [eea31f76-7339-4e8e-bc08-4ba1f351077b] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.00337583s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-082350 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-082350 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-082350 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [d6d740d5-6179-4ebf-b512-c08c948a23e4] Pending
helpers_test.go:352: "sp-pod" [d6d740d5-6179-4ebf-b512-c08c948a23e4] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.003553227s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-082350 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (24.64s)
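
The PVC test above applies testdata/storage-provisioner/pvc.yaml and pod.yaml, writes /tmp/mount/foo, deletes and recreates the pod, and confirms the file survives. The manifests themselves are not reproduced in the log; the following is a minimal sketch of such a claim/pod pair ("myclaim", "sp-pod", the "myfrontend" container, and the /tmp/mount path appear in the log, while the image and storage size are assumptions, not the contents of the real testdata files):

kubectl --context functional-082350 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 500Mi
---
apiVersion: v1
kind: Pod
metadata:
  name: sp-pod
  labels:
    test: storage-provisioner
spec:
  containers:
  - name: myfrontend
    image: nginx
    volumeMounts:
    - mountPath: /tmp/mount
      name: mypd
  volumes:
  - name: mypd
    persistentVolumeClaim:
      claimName: myclaim
EOF
# Data under /tmp/mount survives pod deletion because it lives on the
# PVC-backed volume, which is what the exec/delete/exec sequence verifies.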

                                                
                                    
TestFunctional/parallel/SSHCmd (0.71s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-082350 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-082350 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.71s)

TestFunctional/parallel/CpCmd (2.09s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-082350 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-082350 ssh -n functional-082350 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-082350 cp functional-082350:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd672931488/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-082350 ssh -n functional-082350 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-082350 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-082350 ssh -n functional-082350 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.09s)

TestFunctional/parallel/FileSync (0.34s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/295174/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-082350 ssh "sudo cat /etc/test/nested/copy/295174/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.34s)

TestFunctional/parallel/CertSync (1.92s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/295174.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-082350 ssh "sudo cat /etc/ssl/certs/295174.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/295174.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-082350 ssh "sudo cat /usr/share/ca-certificates/295174.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-082350 ssh "sudo cat /etc/ssl/certs/51391683.0"
2025/11/02 13:33:31 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:2004: Checking for existence of /etc/ssl/certs/2951742.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-082350 ssh "sudo cat /etc/ssl/certs/2951742.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/2951742.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-082350 ssh "sudo cat /usr/share/ca-certificates/2951742.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-082350 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.92s)
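
The numeric names checked above (51391683.0, 3ec20f2e.0) are OpenSSL subject-hash file names, so each should resolve to one of the synced .pem files. A sketch of confirming that pairing from inside the VM (assuming, as the test layout suggests, that 51391683.0 corresponds to 295174.pem):

out/minikube-linux-arm64 -p functional-082350 ssh "openssl x509 -noout -hash -in /etc/ssl/certs/295174.pem"
# Should print the hash the link file is named after, e.g. 51391683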

                                                
                                    
TestFunctional/parallel/NodeLabels (0.09s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-082350 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)
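
The go-template above iterates the labels of the first node and prints only the keys. On a single-node minikube cluster the output typically includes the standard kubernetes.io labels (illustrative, not captured from this run):

kubectl --context functional-082350 get nodes --output=go-template \
  --template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
# e.g.: kubernetes.io/arch kubernetes.io/hostname kubernetes.io/os ...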

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.75s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-082350 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-082350 ssh "sudo systemctl is-active docker": exit status 1 (352.670232ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-082350 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-082350 ssh "sudo systemctl is-active containerd": exit status 1 (393.999085ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.75s)
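
The "Process exited with status 3" lines above are expected: systemctl is-active exits 0 only for an active unit and 3 for an inactive one, so on a crio cluster the docker and containerd probes must fail. A quick sketch of the distinction:

out/minikube-linux-arm64 -p functional-082350 ssh "sudo systemctl is-active crio"    # prints "active", exit 0
out/minikube-linux-arm64 -p functional-082350 ssh "sudo systemctl is-active docker"  # prints "inactive", exit 3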

                                                
                                    
TestFunctional/parallel/License (0.33s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.33s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.61s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-082350 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-082350 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-082350 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 317686: os: process already finished
helpers_test.go:519: unable to terminate pid 317526: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-082350 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.61s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-082350 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.35s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-082350 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [d79b327f-bf2a-414e-98d3-92856b07682c] Pending
helpers_test.go:352: "nginx-svc" [d79b327f-bf2a-414e-98d3-92856b07682c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [d79b327f-bf2a-414e-98d3-92856b07682c] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.004166835s
I1102 13:22:58.891733  295174 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.35s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-082350 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.101.69.142 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
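
Taken together, the tunnel subtests verify that while "minikube tunnel" runs, the nginx-svc LoadBalancer service gets an ingress IP that is directly reachable from the host (10.101.69.142 comes from this run and will differ elsewhere). A sketch of the same flow:

out/minikube-linux-arm64 -p functional-082350 tunnel --alsologtostderr &
kubectl --context functional-082350 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
curl -s http://10.101.69.142 >/dev/null && echo "tunnel is working"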

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-082350 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)

TestFunctional/parallel/ProfileCmd/profile_list (0.45s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "381.828626ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "67.08435ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.45s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.44s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "367.633826ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "71.323767ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.44s)

TestFunctional/parallel/MountCmd/any-port (7.98s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-082350 /tmp/TestFunctionalparallelMountCmdany-port2544191393/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1762090384450304129" to /tmp/TestFunctionalparallelMountCmdany-port2544191393/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1762090384450304129" to /tmp/TestFunctionalparallelMountCmdany-port2544191393/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1762090384450304129" to /tmp/TestFunctionalparallelMountCmdany-port2544191393/001/test-1762090384450304129
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-082350 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-082350 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (367.248894ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1102 13:33:04.817804  295174 retry.go:31] will retry after 547.231881ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-082350 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-082350 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov  2 13:33 created-by-test
-rw-r--r-- 1 docker docker 24 Nov  2 13:33 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov  2 13:33 test-1762090384450304129
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-082350 ssh cat /mount-9p/test-1762090384450304129
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-082350 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [446a10c7-8a8e-4624-a36a-196664fbc75a] Pending
helpers_test.go:352: "busybox-mount" [446a10c7-8a8e-4624-a36a-196664fbc75a] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [446a10c7-8a8e-4624-a36a-196664fbc75a] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [446a10c7-8a8e-4624-a36a-196664fbc75a] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003366912s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-082350 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-082350 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-082350 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-082350 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-082350 /tmp/TestFunctionalparallelMountCmdany-port2544191393/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.98s)
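
The any-port test mounts a host temp directory into the guest at /mount-9p over the 9p protocol; the initial findmnt failure above is only the mount not being ready yet, hence the retry. A sketch of the round trip (the temp directory path is per-run):

out/minikube-linux-arm64 mount -p functional-082350 /tmp/somedir:/mount-9p &
# Once the mount settles, findmnt reports the 9p filesystem type:
out/minikube-linux-arm64 -p functional-082350 ssh "findmnt -T /mount-9p | grep 9p"
# Files written on either side are visible on the other:
out/minikube-linux-arm64 -p functional-082350 ssh "ls -la /mount-9p"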

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.01s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-082350 /tmp/TestFunctionalparallelMountCmdspecific-port3449581695/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-082350 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-082350 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (335.789535ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1102 13:33:12.764701  295174 retry.go:31] will retry after 588.064516ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-082350 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-082350 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-082350 /tmp/TestFunctionalparallelMountCmdspecific-port3449581695/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-082350 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-082350 ssh "sudo umount -f /mount-9p": exit status 1 (299.039551ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-082350 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-082350 /tmp/TestFunctionalparallelMountCmdspecific-port3449581695/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.01s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.57s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-082350 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3150257547/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-082350 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3150257547/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-082350 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3150257547/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-082350 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-082350 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-082350 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-082350 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-082350 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3150257547/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-082350 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3150257547/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-082350 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3150257547/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.57s)

TestFunctional/parallel/ServiceCmd/List (0.62s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-082350 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.62s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.61s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-082350 service list -o json
functional_test.go:1504: Took "608.501873ms" to run "out/minikube-linux-arm64 -p functional-082350 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.61s)

TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-082350 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (0.95s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-082350 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.95s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-082350 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-082350 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-082350 image ls --format short --alsologtostderr:
I1102 13:33:33.516855  323773 out.go:360] Setting OutFile to fd 1 ...
I1102 13:33:33.517113  323773 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1102 13:33:33.517143  323773 out.go:374] Setting ErrFile to fd 2...
I1102 13:33:33.517163  323773 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1102 13:33:33.517459  323773 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-293314/.minikube/bin
I1102 13:33:33.518238  323773 config.go:182] Loaded profile config "functional-082350": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1102 13:33:33.518407  323773 config.go:182] Loaded profile config "functional-082350": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1102 13:33:33.518951  323773 cli_runner.go:164] Run: docker container inspect functional-082350 --format={{.State.Status}}
I1102 13:33:33.536341  323773 ssh_runner.go:195] Run: systemctl --version
I1102 13:33:33.536390  323773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-082350
I1102 13:33:33.556585  323773 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/functional-082350/id_rsa Username:docker}
I1102 13:33:33.673571  323773 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-082350 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-082350 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/library/nginx                 │ alpine             │ cbad6347cca28 │ 54.8MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ a1894772a478e │ 206MB  │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ 43911e833d64d │ 84.8MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ d7b100cd9a77b │ 520kB  │
│ registry.k8s.io/pause                   │ latest             │ 8cb2091f603e7 │ 246kB  │
│ docker.io/library/nginx                 │ latest             │ 46fabdd7f288c │ 176MB  │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ 05baa95f5142d │ 75.9MB │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 1611cd07b61d5 │ 3.77MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ ba04bb24b9575 │ 29MB   │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 138784d87c9c5 │ 73.2MB │
│ registry.k8s.io/pause                   │ 3.1                │ 8057e0500773a │ 529kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 3d18732f8686c │ 487kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ b1a8c6f707935 │ 111MB  │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ 7eb2c6ff0c5a7 │ 72.6MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ b5f57ec6b9867 │ 51.6MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-082350 image ls --format table --alsologtostderr:
I1102 13:33:34.397511  324016 out.go:360] Setting OutFile to fd 1 ...
I1102 13:33:34.397627  324016 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1102 13:33:34.397637  324016 out.go:374] Setting ErrFile to fd 2...
I1102 13:33:34.397643  324016 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1102 13:33:34.397997  324016 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-293314/.minikube/bin
I1102 13:33:34.399003  324016 config.go:182] Loaded profile config "functional-082350": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1102 13:33:34.399132  324016 config.go:182] Loaded profile config "functional-082350": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1102 13:33:34.399603  324016 cli_runner.go:164] Run: docker container inspect functional-082350 --format={{.State.Status}}
I1102 13:33:34.422132  324016 ssh_runner.go:195] Run: systemctl --version
I1102 13:33:34.422181  324016 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-082350
I1102 13:33:34.443695  324016 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/functional-082350/id_rsa Username:docker}
I1102 13:33:34.553171  324016 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-082350 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-082350 image ls --format json --alsologtostderr:
[{"id":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"111333938"},{"id":"cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1","repoDigests":["docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90","docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54837949"},{"id":"43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196","repoDigests":["registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902","registry.k8s.io/kube-apiserver@sha256:ffe89a0fe39dd71bb6eee7066c95512bd4a8365cb6df23eaf60e70209fe79645"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"84753391"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"46fabdd7f288c91a57f5d5fe12a02a41fbe855142469fcd50cbe885229064797","repoDigests":["docker.io/library/nginx@sha256:89a1bafe028b2980994d974115ee7268ef851a6eb7c9cb9626d8035b08ba4424","docker.io/library/nginx@sha256:f547e3d0d5d02f7009737b284abc87d808e4252b42dceea361811e9fc606287f"],"repoTags":["docker.io/library/nginx:latest"],"size":"176006680"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9","repoDigests":["registry.k8s.io/kube-proxy@sha256:90d560a712188ee40c7d03b070c8f2cbcb3097081e62306bc7e68e438cceb9a6","registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"75938711"},{"id":"b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500","registry.k8s.io/kube-scheduler@sha256:d69ae11adb4233d440c302583adee9e3a37cf3626484476fe18ec821953e951e"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"51592017"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"519884"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"73195387"},{"id":"7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cfde42343962da346857f","registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"72629077"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e","repoDigests":["registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"205987068"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-082350 image ls --format json --alsologtostderr:
I1102 13:33:34.115266  323924 out.go:360] Setting OutFile to fd 1 ...
I1102 13:33:34.115423  323924 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1102 13:33:34.115444  323924 out.go:374] Setting ErrFile to fd 2...
I1102 13:33:34.115470  323924 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1102 13:33:34.115749  323924 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-293314/.minikube/bin
I1102 13:33:34.116428  323924 config.go:182] Loaded profile config "functional-082350": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1102 13:33:34.116588  323924 config.go:182] Loaded profile config "functional-082350": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1102 13:33:34.117103  323924 cli_runner.go:164] Run: docker container inspect functional-082350 --format={{.State.Status}}
I1102 13:33:34.134388  323924 ssh_runner.go:195] Run: systemctl --version
I1102 13:33:34.134454  323924 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-082350
I1102 13:33:34.177079  323924 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/functional-082350/id_rsa Username:docker}
I1102 13:33:34.285626  323924 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)
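
The JSON view carries the same records as the table and yaml views. With jq on the host (an assumption; jq is not part of the test), tags and sizes can be pulled straight out of it:

out/minikube-linux-arm64 -p functional-082350 image ls --format json \
  | jq -r '.[] | "\(.repoTags | join(",")) \(.size)"'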

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-082350 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-082350 image ls --format yaml --alsologtostderr:
- id: a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e
repoDigests:
- registry.k8s.io/etcd@sha256:5db83f9e7ee85732a647f5cf5fbdf85652afa8561b66c99f20756080ebd82ea5
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "205987068"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: 46fabdd7f288c91a57f5d5fe12a02a41fbe855142469fcd50cbe885229064797
repoDigests:
- docker.io/library/nginx@sha256:89a1bafe028b2980994d974115ee7268ef851a6eb7c9cb9626d8035b08ba4424
- docker.io/library/nginx@sha256:f547e3d0d5d02f7009737b284abc87d808e4252b42dceea361811e9fc606287f
repoTags:
- docker.io/library/nginx:latest
size: "176006680"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: 05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9
repoDigests:
- registry.k8s.io/kube-proxy@sha256:90d560a712188ee40c7d03b070c8f2cbcb3097081e62306bc7e68e438cceb9a6
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "75938711"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1
repoDigests:
- docker.io/library/nginx@sha256:7391b3732e7f7ccd23ff1d02fbeadcde496f374d7460ad8e79260f8f6d2c9f90
- docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14
repoTags:
- docker.io/library/nginx:alpine
size: "54837949"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1276f2ef2e44c06f37d7c3cccaa3f0100d5f4e939e5cfde42343962da346857f
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "72629077"
- id: b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
- registry.k8s.io/kube-scheduler@sha256:d69ae11adb4233d440c302583adee9e3a37cf3626484476fe18ec821953e951e
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "51592017"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: 43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
- registry.k8s.io/kube-apiserver@sha256:ffe89a0fe39dd71bb6eee7066c95512bd4a8365cb6df23eaf60e70209fe79645
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "84753391"
- id: 138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "73195387"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-082350 image ls --format yaml --alsologtostderr:
I1102 13:33:33.824155  323852 out.go:360] Setting OutFile to fd 1 ...
I1102 13:33:33.824498  323852 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1102 13:33:33.824513  323852 out.go:374] Setting ErrFile to fd 2...
I1102 13:33:33.824519  323852 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1102 13:33:33.824842  323852 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-293314/.minikube/bin
I1102 13:33:33.825571  323852 config.go:182] Loaded profile config "functional-082350": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1102 13:33:33.825691  323852 config.go:182] Loaded profile config "functional-082350": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1102 13:33:33.826423  323852 cli_runner.go:164] Run: docker container inspect functional-082350 --format={{.State.Status}}
I1102 13:33:33.852635  323852 ssh_runner.go:195] Run: systemctl --version
I1102 13:33:33.852700  323852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-082350
I1102 13:33:33.873311  323852 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/functional-082350/id_rsa Username:docker}
I1102 13:33:33.991981  323852 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.32s)
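
The YAML above is the output of minikube image ls --format yaml; the stderr trace shows it is assembled by SSHing into the node and reading CRI-O's image store through crictl. A minimal sketch for reproducing both views by hand, assuming the functional-082350 profile is still up:

    # same command the test drives
    out/minikube-linux-arm64 -p functional-082350 image ls --format yaml
    # the underlying query on the node, as seen in the trace above
    out/minikube-linux-arm64 -p functional-082350 ssh -- sudo crictl images --output json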

TestFunctional/parallel/ImageCommands/ImageBuild (4.05s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-082350 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-082350 ssh pgrep buildkitd: exit status 1 (387.400345ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-082350 image build -t localhost/my-image:functional-082350 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-082350 image build -t localhost/my-image:functional-082350 testdata/build --alsologtostderr: (3.434748608s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-082350 image build -t localhost/my-image:functional-082350 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 7b268c2fcea
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-082350
--> 13a96351e6e
Successfully tagged localhost/my-image:functional-082350
13a96351e6e314653c297e0d6b33b5a6f9c41143b412296d892c1e6e3b52bba3
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-082350 image build -t localhost/my-image:functional-082350 testdata/build --alsologtostderr:
I1102 13:33:34.418840  324021 out.go:360] Setting OutFile to fd 1 ...
I1102 13:33:34.420065  324021 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1102 13:33:34.420080  324021 out.go:374] Setting ErrFile to fd 2...
I1102 13:33:34.420086  324021 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1102 13:33:34.420354  324021 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-293314/.minikube/bin
I1102 13:33:34.421075  324021 config.go:182] Loaded profile config "functional-082350": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1102 13:33:34.421631  324021 config.go:182] Loaded profile config "functional-082350": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1102 13:33:34.422246  324021 cli_runner.go:164] Run: docker container inspect functional-082350 --format={{.State.Status}}
I1102 13:33:34.440355  324021 ssh_runner.go:195] Run: systemctl --version
I1102 13:33:34.440417  324021 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-082350
I1102 13:33:34.467027  324021 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/functional-082350/id_rsa Username:docker}
I1102 13:33:34.584119  324021 build_images.go:162] Building image from path: /tmp/build.2383596117.tar
I1102 13:33:34.584208  324021 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1102 13:33:34.592304  324021 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2383596117.tar
I1102 13:33:34.596292  324021 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2383596117.tar: stat -c "%s %y" /var/lib/minikube/build/build.2383596117.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2383596117.tar': No such file or directory
I1102 13:33:34.596333  324021 ssh_runner.go:362] scp /tmp/build.2383596117.tar --> /var/lib/minikube/build/build.2383596117.tar (3072 bytes)
I1102 13:33:34.627485  324021 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2383596117
I1102 13:33:34.636769  324021 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2383596117 -xf /var/lib/minikube/build/build.2383596117.tar
I1102 13:33:34.646763  324021 crio.go:315] Building image: /var/lib/minikube/build/build.2383596117
I1102 13:33:34.646838  324021 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-082350 /var/lib/minikube/build/build.2383596117 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1102 13:33:37.763773  324021 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-082350 /var/lib/minikube/build/build.2383596117 --cgroup-manager=cgroupfs: (3.116906182s)
I1102 13:33:37.763859  324021 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2383596117
I1102 13:33:37.771661  324021 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2383596117.tar
I1102 13:33:37.780187  324021 build_images.go:218] Built localhost/my-image:functional-082350 from /tmp/build.2383596117.tar
I1102 13:33:37.780225  324021 build_images.go:134] succeeded building to: functional-082350
I1102 13:33:37.780230  324021 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-082350 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.05s)
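
The STEP 1/3..3/3 lines mirror a three-instruction Dockerfile under testdata/build. Because the runtime is CRI-O there is no Docker daemon to build against: the trace shows minikube tarring the context, copying it to /var/lib/minikube/build on the node, and running sudo podman build there. A sketch that rebuilds an equivalent context from scratch; the Dockerfile body is inferred from the STEP lines and may differ from the checked-in testdata:

    mkdir -p /tmp/build-demo && cd /tmp/build-demo
    printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > Dockerfile
    echo demo > content.txt    # any file satisfies the ADD step
    out/minikube-linux-arm64 -p functional-082350 image build -t localhost/my-image:functional-082350 .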

TestFunctional/parallel/ImageCommands/Setup (0.59s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-082350
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.59s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-082350 image rm kicbase/echo-server:functional-082350 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-082350 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)
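
ImageRemove only asserts that image rm exits cleanly and that a follow-up image ls no longer shows the tag. A hand-run equivalent, assuming the tag created in the Setup step is still present:

    out/minikube-linux-arm64 -p functional-082350 image rm kicbase/echo-server:functional-082350
    # grep should now find nothing
    out/minikube-linux-arm64 -p functional-082350 image ls | grep echo-server:functional-082350 || echo removed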

TestFunctional/parallel/UpdateContextCmd/no_changes (0.18s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-082350 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.18s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-082350 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-082350 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)
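
All three variants drive the same update-context command, which rewrites the profile's API-server endpoint in kubeconfig when the container's IP or port has changed; the no_minikube_cluster and no_clusters cases exercise it against kubeconfigs with missing entries. A quick before/after check (the jsonpath query is illustrative, not part of the test):

    out/minikube-linux-arm64 -p functional-082350 update-context --alsologtostderr -v=2
    kubectl config view -o jsonpath='{.clusters[?(@.name=="functional-082350")].cluster.server}'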

TestFunctional/delete_echo-server_images (0.05s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-082350
--- PASS: TestFunctional/delete_echo-server_images (0.05s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-082350
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-082350
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (176.56s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-367047 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1102 13:35:59.510901  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/addons-230560/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-367047 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (2m55.666797164s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-367047 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (176.56s)

TestMultiControlPlane/serial/DeployApp (6.95s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-367047 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-367047 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-367047 kubectl -- rollout status deployment/busybox: (4.191262541s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-367047 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-367047 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-367047 kubectl -- exec busybox-7b57f96db7-l4b7j -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-367047 kubectl -- exec busybox-7b57f96db7-sf72s -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-367047 kubectl -- exec busybox-7b57f96db7-wb4s9 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-367047 kubectl -- exec busybox-7b57f96db7-l4b7j -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-367047 kubectl -- exec busybox-7b57f96db7-sf72s -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-367047 kubectl -- exec busybox-7b57f96db7-wb4s9 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-367047 kubectl -- exec busybox-7b57f96db7-l4b7j -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-367047 kubectl -- exec busybox-7b57f96db7-sf72s -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-367047 kubectl -- exec busybox-7b57f96db7-wb4s9 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.95s)
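
DeployApp applies testdata/ha/ha-pod-dns-test.yaml, which produces the three busybox replicas seen above, waits for the rollout, then runs nslookup in every pod so in-cluster DNS is exercised from more than one node. The same loop by hand, with plain kubectl standing in for minikube kubectl --:

    kubectl --context ha-367047 rollout status deployment/busybox
    for pod in $(kubectl --context ha-367047 get pods -o jsonpath='{.items[*].metadata.name}'); do
      kubectl --context ha-367047 exec "$pod" -- nslookup kubernetes.default.svc.cluster.local
    done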

TestMultiControlPlane/serial/PingHostFromPods (1.6s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-367047 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-367047 kubectl -- exec busybox-7b57f96db7-l4b7j -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-367047 kubectl -- exec busybox-7b57f96db7-l4b7j -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-367047 kubectl -- exec busybox-7b57f96db7-sf72s -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-367047 kubectl -- exec busybox-7b57f96db7-sf72s -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-367047 kubectl -- exec busybox-7b57f96db7-wb4s9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-367047 kubectl -- exec busybox-7b57f96db7-wb4s9 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.60s)
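
The shell pipeline leans on busybox nslookup's fixed layout: line 5 of the output carries the answer record, and the third space-separated field is the resolved address, which the follow-up ping -c 1 then targets (192.168.49.1 in this run). A standalone sketch using a pod name from the run above:

    HOST_IP=$(kubectl --context ha-367047 exec busybox-7b57f96db7-l4b7j -- \
      sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
    kubectl --context ha-367047 exec busybox-7b57f96db7-l4b7j -- ping -c 1 "$HOST_IP"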

TestMultiControlPlane/serial/AddWorkerNode (60.77s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-367047 node add --alsologtostderr -v 5
E1102 13:37:22.584299  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/addons-230560/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-367047 node add --alsologtostderr -v 5: (59.728180109s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-367047 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-367047 status --alsologtostderr -v 5: (1.043171104s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (60.77s)

TestMultiControlPlane/serial/NodeLabels (0.11s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-367047 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.07s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.065515262s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.07s)

TestMultiControlPlane/serial/CopyFile (19.93s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-367047 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-367047 status --output json --alsologtostderr -v 5: (1.026554423s)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-367047 cp testdata/cp-test.txt ha-367047:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-367047 ssh -n ha-367047 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-367047 cp ha-367047:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2508166875/001/cp-test_ha-367047.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-367047 ssh -n ha-367047 "sudo cat /home/docker/cp-test.txt"
E1102 13:37:50.129089  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/functional-082350/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 13:37:50.135651  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/functional-082350/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 13:37:50.147016  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/functional-082350/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 13:37:50.168603  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/functional-082350/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 13:37:50.210877  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/functional-082350/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-367047 cp ha-367047:/home/docker/cp-test.txt ha-367047-m02:/home/docker/cp-test_ha-367047_ha-367047-m02.txt
E1102 13:37:50.292630  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/functional-082350/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 13:37:50.454055  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/functional-082350/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-367047 ssh -n ha-367047 "sudo cat /home/docker/cp-test.txt"
E1102 13:37:50.776241  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/functional-082350/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-367047 ssh -n ha-367047-m02 "sudo cat /home/docker/cp-test_ha-367047_ha-367047-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-367047 cp ha-367047:/home/docker/cp-test.txt ha-367047-m03:/home/docker/cp-test_ha-367047_ha-367047-m03.txt
E1102 13:37:51.417705  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/functional-082350/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-367047 ssh -n ha-367047 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-367047 ssh -n ha-367047-m03 "sudo cat /home/docker/cp-test_ha-367047_ha-367047-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-367047 cp ha-367047:/home/docker/cp-test.txt ha-367047-m04:/home/docker/cp-test_ha-367047_ha-367047-m04.txt
E1102 13:37:52.700058  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/functional-082350/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-367047 ssh -n ha-367047 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-367047 ssh -n ha-367047-m04 "sudo cat /home/docker/cp-test_ha-367047_ha-367047-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-367047 cp testdata/cp-test.txt ha-367047-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-367047 ssh -n ha-367047-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-367047 cp ha-367047-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2508166875/001/cp-test_ha-367047-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-367047 ssh -n ha-367047-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-367047 cp ha-367047-m02:/home/docker/cp-test.txt ha-367047:/home/docker/cp-test_ha-367047-m02_ha-367047.txt
E1102 13:37:55.261375  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/functional-082350/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-367047 ssh -n ha-367047-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-367047 ssh -n ha-367047 "sudo cat /home/docker/cp-test_ha-367047-m02_ha-367047.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-367047 cp ha-367047-m02:/home/docker/cp-test.txt ha-367047-m03:/home/docker/cp-test_ha-367047-m02_ha-367047-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-367047 ssh -n ha-367047-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-367047 ssh -n ha-367047-m03 "sudo cat /home/docker/cp-test_ha-367047-m02_ha-367047-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-367047 cp ha-367047-m02:/home/docker/cp-test.txt ha-367047-m04:/home/docker/cp-test_ha-367047-m02_ha-367047-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-367047 ssh -n ha-367047-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-367047 ssh -n ha-367047-m04 "sudo cat /home/docker/cp-test_ha-367047-m02_ha-367047-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-367047 cp testdata/cp-test.txt ha-367047-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-367047 ssh -n ha-367047-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-367047 cp ha-367047-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2508166875/001/cp-test_ha-367047-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-367047 ssh -n ha-367047-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-367047 cp ha-367047-m03:/home/docker/cp-test.txt ha-367047:/home/docker/cp-test_ha-367047-m03_ha-367047.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-367047 ssh -n ha-367047-m03 "sudo cat /home/docker/cp-test.txt"
E1102 13:38:00.382872  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/functional-082350/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-367047 ssh -n ha-367047 "sudo cat /home/docker/cp-test_ha-367047-m03_ha-367047.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-367047 cp ha-367047-m03:/home/docker/cp-test.txt ha-367047-m02:/home/docker/cp-test_ha-367047-m03_ha-367047-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-367047 ssh -n ha-367047-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-367047 ssh -n ha-367047-m02 "sudo cat /home/docker/cp-test_ha-367047-m03_ha-367047-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-367047 cp ha-367047-m03:/home/docker/cp-test.txt ha-367047-m04:/home/docker/cp-test_ha-367047-m03_ha-367047-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-367047 ssh -n ha-367047-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-367047 ssh -n ha-367047-m04 "sudo cat /home/docker/cp-test_ha-367047-m03_ha-367047-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-367047 cp testdata/cp-test.txt ha-367047-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-367047 ssh -n ha-367047-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-367047 cp ha-367047-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2508166875/001/cp-test_ha-367047-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-367047 ssh -n ha-367047-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-367047 cp ha-367047-m04:/home/docker/cp-test.txt ha-367047:/home/docker/cp-test_ha-367047-m04_ha-367047.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-367047 ssh -n ha-367047-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-367047 ssh -n ha-367047 "sudo cat /home/docker/cp-test_ha-367047-m04_ha-367047.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-367047 cp ha-367047-m04:/home/docker/cp-test.txt ha-367047-m02:/home/docker/cp-test_ha-367047-m04_ha-367047-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-367047 ssh -n ha-367047-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-367047 ssh -n ha-367047-m02 "sudo cat /home/docker/cp-test_ha-367047-m04_ha-367047-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-367047 cp ha-367047-m04:/home/docker/cp-test.txt ha-367047-m03:/home/docker/cp-test_ha-367047-m04_ha-367047-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-367047 ssh -n ha-367047-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-367047 ssh -n ha-367047-m03 "sudo cat /home/docker/cp-test_ha-367047-m04_ha-367047-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.93s)
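
The CopyFile matrix works because minikube cp accepts node-qualified paths (node:/path) on either side, so a single command copies host-to-node or node-to-node, and ssh -n <node> cat verifies the bytes landed. A minimal round trip in the same pattern as the run above:

    out/minikube-linux-arm64 -p ha-367047 cp testdata/cp-test.txt ha-367047-m02:/home/docker/cp-test.txt
    out/minikube-linux-arm64 -p ha-367047 ssh -n ha-367047-m02 "sudo cat /home/docker/cp-test.txt"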

TestMultiControlPlane/serial/StopSecondaryNode (12.83s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-367047 node stop m02 --alsologtostderr -v 5
E1102 13:38:10.624834  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/functional-082350/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-367047 node stop m02 --alsologtostderr -v 5: (12.040601362s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-367047 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-367047 status --alsologtostderr -v 5: exit status 7 (793.755759ms)

-- stdout --
	ha-367047
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-367047-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-367047-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-367047-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1102 13:38:19.950830  339027 out.go:360] Setting OutFile to fd 1 ...
	I1102 13:38:19.951014  339027 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:38:19.951027  339027 out.go:374] Setting ErrFile to fd 2...
	I1102 13:38:19.951033  339027 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:38:19.951348  339027 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-293314/.minikube/bin
	I1102 13:38:19.951604  339027 out.go:368] Setting JSON to false
	I1102 13:38:19.951655  339027 mustload.go:66] Loading cluster: ha-367047
	I1102 13:38:19.951724  339027 notify.go:221] Checking for updates...
	I1102 13:38:19.952193  339027 config.go:182] Loaded profile config "ha-367047": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:38:19.952212  339027 status.go:174] checking status of ha-367047 ...
	I1102 13:38:19.953117  339027 cli_runner.go:164] Run: docker container inspect ha-367047 --format={{.State.Status}}
	I1102 13:38:19.976418  339027 status.go:371] ha-367047 host status = "Running" (err=<nil>)
	I1102 13:38:19.976449  339027 host.go:66] Checking if "ha-367047" exists ...
	I1102 13:38:19.977267  339027 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-367047
	I1102 13:38:20.009703  339027 host.go:66] Checking if "ha-367047" exists ...
	I1102 13:38:20.010036  339027 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1102 13:38:20.010098  339027 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-367047
	I1102 13:38:20.032809  339027 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/ha-367047/id_rsa Username:docker}
	I1102 13:38:20.145533  339027 ssh_runner.go:195] Run: systemctl --version
	I1102 13:38:20.153151  339027 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1102 13:38:20.166571  339027 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 13:38:20.243179  339027 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2025-11-02 13:38:20.231661747 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1102 13:38:20.243763  339027 kubeconfig.go:125] found "ha-367047" server: "https://192.168.49.254:8443"
	I1102 13:38:20.243804  339027 api_server.go:166] Checking apiserver status ...
	I1102 13:38:20.243853  339027 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1102 13:38:20.256297  339027 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1271/cgroup
	I1102 13:38:20.265116  339027 api_server.go:182] apiserver freezer: "4:freezer:/docker/944d48ecb51fd031e6bf37f073d14a8e89837be946460b4fa7999f3c9926c303/crio/crio-472e32ed35e448b49a61b2da4a65d840fd36e0d52b4e57904d0eb6a5ffdd4bdb"
	I1102 13:38:20.265194  339027 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/944d48ecb51fd031e6bf37f073d14a8e89837be946460b4fa7999f3c9926c303/crio/crio-472e32ed35e448b49a61b2da4a65d840fd36e0d52b4e57904d0eb6a5ffdd4bdb/freezer.state
	I1102 13:38:20.272843  339027 api_server.go:204] freezer state: "THAWED"
	I1102 13:38:20.272879  339027 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1102 13:38:20.281423  339027 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1102 13:38:20.281451  339027 status.go:463] ha-367047 apiserver status = Running (err=<nil>)
	I1102 13:38:20.281463  339027 status.go:176] ha-367047 status: &{Name:ha-367047 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1102 13:38:20.281480  339027 status.go:174] checking status of ha-367047-m02 ...
	I1102 13:38:20.281829  339027 cli_runner.go:164] Run: docker container inspect ha-367047-m02 --format={{.State.Status}}
	I1102 13:38:20.299843  339027 status.go:371] ha-367047-m02 host status = "Stopped" (err=<nil>)
	I1102 13:38:20.299867  339027 status.go:384] host is not running, skipping remaining checks
	I1102 13:38:20.299873  339027 status.go:176] ha-367047-m02 status: &{Name:ha-367047-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1102 13:38:20.299893  339027 status.go:174] checking status of ha-367047-m03 ...
	I1102 13:38:20.300201  339027 cli_runner.go:164] Run: docker container inspect ha-367047-m03 --format={{.State.Status}}
	I1102 13:38:20.316902  339027 status.go:371] ha-367047-m03 host status = "Running" (err=<nil>)
	I1102 13:38:20.316931  339027 host.go:66] Checking if "ha-367047-m03" exists ...
	I1102 13:38:20.317240  339027 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-367047-m03
	I1102 13:38:20.334501  339027 host.go:66] Checking if "ha-367047-m03" exists ...
	I1102 13:38:20.334859  339027 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1102 13:38:20.334909  339027 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-367047-m03
	I1102 13:38:20.353694  339027 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/ha-367047-m03/id_rsa Username:docker}
	I1102 13:38:20.455975  339027 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1102 13:38:20.469168  339027 kubeconfig.go:125] found "ha-367047" server: "https://192.168.49.254:8443"
	I1102 13:38:20.469195  339027 api_server.go:166] Checking apiserver status ...
	I1102 13:38:20.469234  339027 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1102 13:38:20.480285  339027 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1210/cgroup
	I1102 13:38:20.488421  339027 api_server.go:182] apiserver freezer: "4:freezer:/docker/2c51f6808da7e0bdf272b86523e4ecdf7291706f8c73ffc859e1f8c65b6c0ddd/crio/crio-0186770acaac67cc36015b618db9a1a16156d5bb25bfe610f2dcaa85ef0f5bab"
	I1102 13:38:20.488493  339027 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/2c51f6808da7e0bdf272b86523e4ecdf7291706f8c73ffc859e1f8c65b6c0ddd/crio/crio-0186770acaac67cc36015b618db9a1a16156d5bb25bfe610f2dcaa85ef0f5bab/freezer.state
	I1102 13:38:20.499222  339027 api_server.go:204] freezer state: "THAWED"
	I1102 13:38:20.499258  339027 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1102 13:38:20.508084  339027 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1102 13:38:20.508112  339027 status.go:463] ha-367047-m03 apiserver status = Running (err=<nil>)
	I1102 13:38:20.508123  339027 status.go:176] ha-367047-m03 status: &{Name:ha-367047-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1102 13:38:20.508140  339027 status.go:174] checking status of ha-367047-m04 ...
	I1102 13:38:20.508456  339027 cli_runner.go:164] Run: docker container inspect ha-367047-m04 --format={{.State.Status}}
	I1102 13:38:20.525188  339027 status.go:371] ha-367047-m04 host status = "Running" (err=<nil>)
	I1102 13:38:20.525215  339027 host.go:66] Checking if "ha-367047-m04" exists ...
	I1102 13:38:20.525515  339027 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-367047-m04
	I1102 13:38:20.542017  339027 host.go:66] Checking if "ha-367047-m04" exists ...
	I1102 13:38:20.542360  339027 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1102 13:38:20.542401  339027 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-367047-m04
	I1102 13:38:20.564640  339027 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/ha-367047-m04/id_rsa Username:docker}
	I1102 13:38:20.672679  339027 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1102 13:38:20.685210  339027 status.go:176] ha-367047-m04 status: &{Name:ha-367047-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.83s)
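
With m02 stopped, status reports the surviving nodes as Running but exits 7, which the test accepts as the expected signal for a degraded cluster rather than a failure. Checking the exit code by hand:

    out/minikube-linux-arm64 -p ha-367047 node stop m02 --alsologtostderr -v 5
    out/minikube-linux-arm64 -p ha-367047 status --alsologtostderr -v 5
    echo "status exit code: $?"    # 7 in this scenario, matching the run above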

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.78s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.78s)

TestMultiControlPlane/serial/RestartSecondaryNode (28.11s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-367047 node start m02 --alsologtostderr -v 5
E1102 13:38:31.106464  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/functional-082350/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-367047 node start m02 --alsologtostderr -v 5: (26.82617577s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-367047 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-367047 status --alsologtostderr -v 5: (1.147944131s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (28.11s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.24s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.240928166s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.24s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (126.73s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-367047 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-367047 stop --alsologtostderr -v 5
E1102 13:39:12.068470  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/functional-082350/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-367047 stop --alsologtostderr -v 5: (27.489006669s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-367047 start --wait true --alsologtostderr -v 5
E1102 13:40:33.989903  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/functional-082350/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-367047 start --wait true --alsologtostderr -v 5: (1m39.066596671s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-367047 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (126.73s)

TestMultiControlPlane/serial/DeleteSecondaryNode (11.76s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-367047 node delete m03 --alsologtostderr -v 5
E1102 13:40:59.511053  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/addons-230560/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-367047 node delete m03 --alsologtostderr -v 5: (10.805498284s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-367047 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.76s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.8s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.80s)

TestMultiControlPlane/serial/StopCluster (36.04s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-367047 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-367047 stop --alsologtostderr -v 5: (35.927560152s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-367047 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-367047 status --alsologtostderr -v 5: exit status 7 (111.825171ms)

-- stdout --
	ha-367047
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-367047-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-367047-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1102 13:41:46.098794  351025 out.go:360] Setting OutFile to fd 1 ...
	I1102 13:41:46.098992  351025 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:41:46.099004  351025 out.go:374] Setting ErrFile to fd 2...
	I1102 13:41:46.099009  351025 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:41:46.099282  351025 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-293314/.minikube/bin
	I1102 13:41:46.099471  351025 out.go:368] Setting JSON to false
	I1102 13:41:46.099501  351025 mustload.go:66] Loading cluster: ha-367047
	I1102 13:41:46.099544  351025 notify.go:221] Checking for updates...
	I1102 13:41:46.099913  351025 config.go:182] Loaded profile config "ha-367047": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:41:46.099942  351025 status.go:174] checking status of ha-367047 ...
	I1102 13:41:46.100779  351025 cli_runner.go:164] Run: docker container inspect ha-367047 --format={{.State.Status}}
	I1102 13:41:46.119007  351025 status.go:371] ha-367047 host status = "Stopped" (err=<nil>)
	I1102 13:41:46.119030  351025 status.go:384] host is not running, skipping remaining checks
	I1102 13:41:46.119037  351025 status.go:176] ha-367047 status: &{Name:ha-367047 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1102 13:41:46.119085  351025 status.go:174] checking status of ha-367047-m02 ...
	I1102 13:41:46.119391  351025 cli_runner.go:164] Run: docker container inspect ha-367047-m02 --format={{.State.Status}}
	I1102 13:41:46.142362  351025 status.go:371] ha-367047-m02 host status = "Stopped" (err=<nil>)
	I1102 13:41:46.142405  351025 status.go:384] host is not running, skipping remaining checks
	I1102 13:41:46.142422  351025 status.go:176] ha-367047-m02 status: &{Name:ha-367047-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1102 13:41:46.142441  351025 status.go:174] checking status of ha-367047-m04 ...
	I1102 13:41:46.142748  351025 cli_runner.go:164] Run: docker container inspect ha-367047-m04 --format={{.State.Status}}
	I1102 13:41:46.158977  351025 status.go:371] ha-367047-m04 host status = "Stopped" (err=<nil>)
	I1102 13:41:46.159000  351025 status.go:384] host is not running, skipping remaining checks
	I1102 13:41:46.159007  351025 status.go:176] ha-367047-m04 status: &{Name:ha-367047-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.04s)

TestMultiControlPlane/serial/RestartCluster (94.08s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-367047 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1102 13:42:50.128715  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/functional-082350/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 13:43:17.831987  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/functional-082350/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-367047 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m33.080576109s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-367047 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (94.08s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.76s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.76s)

TestMultiControlPlane/serial/AddSecondaryNode (81.57s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-367047 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-367047 node add --control-plane --alsologtostderr -v 5: (1m20.436220845s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-367047 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-367047 status --alsologtostderr -v 5: (1.131248664s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (81.57s)
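
The node add plus status pair above is the entire flow for growing the HA control plane. A minimal sketch of running it by hand, assuming a minikube binary on PATH in place of the out/minikube-linux-arm64 under test:

    # add another control-plane node to the running HA cluster, then re-check status
    minikube -p ha-367047 node add --control-plane --alsologtostderr -v 5
    minikube -p ha-367047 status --alsologtostderr -v 5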

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.04s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.042938644s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.04s)

TestJSONOutput/start/Command (79.62s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-430911 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E1102 13:45:59.519163  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/addons-230560/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-430911 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m19.614812376s)
--- PASS: TestJSONOutput/start/Command (79.62s)
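
With --output=json, start emits one CloudEvent per line instead of human-readable steps. A minimal sketch of the invocation under test (minikube again standing in for the built binary):

    # machine-readable start: every progress step arrives as a JSON CloudEvent
    minikube start -p json-output-430911 --output=json --user=testUser \
      --memory=3072 --wait=true --driver=docker --container-runtime=crio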

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.85s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-430911 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-430911 --output=json --user=testUser: (5.851872009s)
--- PASS: TestJSONOutput/stop/Command (5.85s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.24s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-995209 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-995209 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (97.954999ms)

-- stdout --
	{"specversion":"1.0","id":"c1d954b4-6246-4405-858e-59e42b24e393","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-995209] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"1c6dacd3-1add-444e-bc4a-ba2b2c37201e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21808"}}
	{"specversion":"1.0","id":"3c4cb94b-5f29-47e2-a3e0-2ca428801d87","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"cff9bf58-47d7-4484-a82e-904223df457d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21808-293314/kubeconfig"}}
	{"specversion":"1.0","id":"32aa3946-255a-42b9-a6d2-0864b7c03db5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-293314/.minikube"}}
	{"specversion":"1.0","id":"58b525c2-2305-4109-b8b7-dedf66675b58","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"69c6fdc9-ea20-4dc4-8105-fa92461c09d9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"43c6dbc5-0bfc-49ee-a97d-e086a4582149","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-995209" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-995209
--- PASS: TestErrorJSONOutput (0.24s)
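
Every stdout line above is a CloudEvent, so the failure can be extracted mechanically. A sketch, assuming jq is installed (jq is not part of the test itself):

    # surface only the error event from the JSON stream
    minikube start -p json-output-error-995209 --output=json --driver=fail \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'
    # expected: The driver 'fail' is not supported on linux/arm64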

TestKicCustomNetwork/create_custom_network (39.93s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-808212 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-808212 --network=: (37.708717781s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-808212" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-808212
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-808212: (2.193293784s)
--- PASS: TestKicCustomNetwork/create_custom_network (39.93s)
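
Passing an empty --network= lets minikube create and label a Docker network of its own for the profile, which the docker network ls check then confirms. A minimal sketch:

    # start with a minikube-managed network, then confirm it exists
    minikube start -p docker-network-808212 --network=
    docker network ls --format '{{.Name}}'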

TestKicCustomNetwork/use_default_bridge_network (39.96s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-481582 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-481582 --network=bridge: (37.85674055s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-481582" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-481582
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-481582: (2.075169175s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (39.96s)

TestKicExistingNetwork (35.93s)

=== RUN   TestKicExistingNetwork
I1102 13:47:46.233155  295174 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1102 13:47:46.251508  295174 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1102 13:47:46.252497  295174 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1102 13:47:46.252535  295174 cli_runner.go:164] Run: docker network inspect existing-network
W1102 13:47:46.267942  295174 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1102 13:47:46.267975  295174 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1102 13:47:46.267993  295174 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1102 13:47:46.268115  295174 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1102 13:47:46.286473  295174 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-ddf319108ac9 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:66:f7:2d:49:67:ff} reservation:<nil>}
I1102 13:47:46.288970  295174 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40016f50d0}
I1102 13:47:46.289007  295174 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1102 13:47:46.289059  295174 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1102 13:47:46.355563  295174 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-592088 --network=existing-network
E1102 13:47:50.129608  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/functional-082350/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-592088 --network=existing-network: (33.671408482s)
helpers_test.go:175: Cleaning up "existing-network-592088" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-592088
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-592088: (2.103457654s)
I1102 13:48:22.146434  295174 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (35.93s)
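
The log shows minikube skipping the occupied 192.168.49.0/24 subnet and creating existing-network on 192.168.58.0/24 itself before attaching a cluster to it. A simplified sketch of the same flow (the test's extra -o options and labels omitted):

    # pre-create a bridge network, then attach a fresh cluster to it
    docker network create --driver=bridge --subnet=192.168.58.0/24 \
      --gateway=192.168.58.1 existing-network
    minikube start -p existing-network-592088 --network=existing-network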

TestKicCustomSubnet (39.32s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-138730 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-138730 --subnet=192.168.60.0/24: (37.104793351s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-138730 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-138730" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-138730
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-138730: (2.187928407s)
--- PASS: TestKicCustomSubnet (39.32s)
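
A minimal sketch of the subnet pinning exercised here, verified straight from Docker's IPAM config:

    # pin the cluster network to a chosen subnet, then read it back
    minikube start -p custom-subnet-138730 --subnet=192.168.60.0/24
    docker network inspect custom-subnet-138730 \
      --format '{{(index .IPAM.Config 0).Subnet}}'   # expected: 192.168.60.0/24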

TestKicStaticIP (35.9s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-191432 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-191432 --static-ip=192.168.200.200: (33.562544153s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-191432 ip
helpers_test.go:175: Cleaning up "static-ip-191432" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-191432
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-191432: (2.177116555s)
--- PASS: TestKicStaticIP (35.90s)
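
A minimal sketch of the static-IP flow this test drives:

    # request a fixed node IP, then confirm it was honored
    minikube start -p static-ip-191432 --static-ip=192.168.200.200
    minikube -p static-ip-191432 ip   # expected: 192.168.200.200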

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (73.5s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-667638 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-667638 --driver=docker  --container-runtime=crio: (32.315159899s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-670157 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-670157 --driver=docker  --container-runtime=crio: (35.583967355s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-667638
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-670157
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-670157" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-670157
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-670157: (2.095071235s)
helpers_test.go:175: Cleaning up "first-667638" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-667638
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-667638: (2.032700781s)
--- PASS: TestMinikubeProfile (73.50s)
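
The test flips the active profile back and forth and lists both as JSON each time. A minimal sketch of one round:

    # make first-667638 the active profile, then dump all profiles as JSON
    minikube profile first-667638
    minikube profile list -ojson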

TestMountStart/serial/StartWithMountFirst (7.13s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-407885 --memory=3072 --mount-string /tmp/TestMountStartserial3783784486/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-407885 --memory=3072 --mount-string /tmp/TestMountStartserial3783784486/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.127522832s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.13s)
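
The start line above carries the full 9p mount configuration. A sketch with a hypothetical host path (/tmp/src) in place of the test's temp dir:

    # mount a host directory into the guest at start, then verify over ssh
    minikube start -p mount-start-1-407885 --memory=3072 \
      --mount-string /tmp/src:/minikube-host \
      --mount-uid 0 --mount-gid 0 --mount-msize 6543 --mount-port 46464 \
      --no-kubernetes --driver=docker --container-runtime=crio
    minikube -p mount-start-1-407885 ssh -- ls /minikube-host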

TestMountStart/serial/VerifyMountFirst (0.27s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-407885 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

TestMountStart/serial/StartWithMountSecond (9.81s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-410215 --memory=3072 --mount-string /tmp/TestMountStartserial3783784486/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
E1102 13:50:59.510774  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/addons-230560/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-410215 --memory=3072 --mount-string /tmp/TestMountStartserial3783784486/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.810929083s)
--- PASS: TestMountStart/serial/StartWithMountSecond (9.81s)

TestMountStart/serial/VerifyMountSecond (0.27s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-410215 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

TestMountStart/serial/DeleteFirst (1.73s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-407885 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-407885 --alsologtostderr -v=5: (1.725818155s)
--- PASS: TestMountStart/serial/DeleteFirst (1.73s)

TestMountStart/serial/VerifyMountPostDelete (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-410215 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

TestMountStart/serial/Stop (1.29s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-410215
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-410215: (1.289238728s)
--- PASS: TestMountStart/serial/Stop (1.29s)

TestMountStart/serial/RestartStopped (8.01s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-410215
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-410215: (7.009459925s)
--- PASS: TestMountStart/serial/RestartStopped (8.01s)

TestMountStart/serial/VerifyMountPostStop (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-410215 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

TestMultiNode/serial/FreshStart2Nodes (141s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-731545 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1102 13:52:50.129390  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/functional-082350/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-731545 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (2m20.452124725s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-731545 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (141.00s)

TestMultiNode/serial/DeployApp2Nodes (4.82s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-731545 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-731545 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-731545 -- rollout status deployment/busybox: (3.070245218s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-731545 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-731545 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-731545 -- exec busybox-7b57f96db7-9q7gv -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-731545 -- exec busybox-7b57f96db7-lz6bf -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-731545 -- exec busybox-7b57f96db7-9q7gv -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-731545 -- exec busybox-7b57f96db7-lz6bf -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-731545 -- exec busybox-7b57f96db7-9q7gv -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-731545 -- exec busybox-7b57f96db7-lz6bf -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.82s)
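
The DNS checks reduce to exec-ing nslookup in each busybox pod for progressively longer names. A sketch, with <busybox-pod> as a placeholder for one of the names returned by the jsonpath query above:

    # verify in-cluster DNS from a pod (short name, then fully qualified)
    kubectl get pods -o jsonpath='{.items[*].metadata.name}'
    kubectl exec <busybox-pod> -- nslookup kubernetes.default
    kubectl exec <busybox-pod> -- nslookup kubernetes.default.svc.cluster.local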

TestMultiNode/serial/PingHostFrom2Pods (0.92s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-731545 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-731545 -- exec busybox-7b57f96db7-9q7gv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-731545 -- exec busybox-7b57f96db7-9q7gv -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-731545 -- exec busybox-7b57f96db7-lz6bf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-731545 -- exec busybox-7b57f96db7-lz6bf -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.92s)
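
The pipeline in this test scrapes the resolved address of host.minikube.internal out of nslookup's output (fifth line, third space-separated field) and then pings it from the same pod. A sketch with the same placeholder pod name:

    # resolve the host gateway inside the pod, then ping the address once
    HOST_IP=$(kubectl exec <busybox-pod> -- sh -c \
      "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
    kubectl exec <busybox-pod> -- ping -c 1 "$HOST_IP"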

TestMultiNode/serial/AddNode (57.74s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-731545 -v=5 --alsologtostderr
E1102 13:54:02.586832  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/addons-230560/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 13:54:13.194169  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/functional-082350/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-731545 -v=5 --alsologtostderr: (57.030531682s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-731545 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (57.74s)

TestMultiNode/serial/MultiNodeLabels (0.1s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-731545 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.10s)

TestMultiNode/serial/ProfileList (0.71s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.71s)

TestMultiNode/serial/CopyFile (10.45s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-731545 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-731545 cp testdata/cp-test.txt multinode-731545:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-731545 ssh -n multinode-731545 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-731545 cp multinode-731545:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2354599737/001/cp-test_multinode-731545.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-731545 ssh -n multinode-731545 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-731545 cp multinode-731545:/home/docker/cp-test.txt multinode-731545-m02:/home/docker/cp-test_multinode-731545_multinode-731545-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-731545 ssh -n multinode-731545 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-731545 ssh -n multinode-731545-m02 "sudo cat /home/docker/cp-test_multinode-731545_multinode-731545-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-731545 cp multinode-731545:/home/docker/cp-test.txt multinode-731545-m03:/home/docker/cp-test_multinode-731545_multinode-731545-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-731545 ssh -n multinode-731545 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-731545 ssh -n multinode-731545-m03 "sudo cat /home/docker/cp-test_multinode-731545_multinode-731545-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-731545 cp testdata/cp-test.txt multinode-731545-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-731545 ssh -n multinode-731545-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-731545 cp multinode-731545-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2354599737/001/cp-test_multinode-731545-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-731545 ssh -n multinode-731545-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-731545 cp multinode-731545-m02:/home/docker/cp-test.txt multinode-731545:/home/docker/cp-test_multinode-731545-m02_multinode-731545.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-731545 ssh -n multinode-731545-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-731545 ssh -n multinode-731545 "sudo cat /home/docker/cp-test_multinode-731545-m02_multinode-731545.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-731545 cp multinode-731545-m02:/home/docker/cp-test.txt multinode-731545-m03:/home/docker/cp-test_multinode-731545-m02_multinode-731545-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-731545 ssh -n multinode-731545-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-731545 ssh -n multinode-731545-m03 "sudo cat /home/docker/cp-test_multinode-731545-m02_multinode-731545-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-731545 cp testdata/cp-test.txt multinode-731545-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-731545 ssh -n multinode-731545-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-731545 cp multinode-731545-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2354599737/001/cp-test_multinode-731545-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-731545 ssh -n multinode-731545-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-731545 cp multinode-731545-m03:/home/docker/cp-test.txt multinode-731545:/home/docker/cp-test_multinode-731545-m03_multinode-731545.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-731545 ssh -n multinode-731545-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-731545 ssh -n multinode-731545 "sudo cat /home/docker/cp-test_multinode-731545-m03_multinode-731545.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-731545 cp multinode-731545-m03:/home/docker/cp-test.txt multinode-731545-m02:/home/docker/cp-test_multinode-731545-m03_multinode-731545-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-731545 ssh -n multinode-731545-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-731545 ssh -n multinode-731545-m02 "sudo cat /home/docker/cp-test_multinode-731545-m03_multinode-731545-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.45s)
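
Every combination above reduces to three minikube cp shapes plus an ssh readback. A minimal sketch:

    # host -> node, node -> host, and node -> node copies, then verify
    minikube -p multinode-731545 cp testdata/cp-test.txt multinode-731545:/home/docker/cp-test.txt
    minikube -p multinode-731545 cp multinode-731545:/home/docker/cp-test.txt ./cp-test.txt
    minikube -p multinode-731545 cp multinode-731545:/home/docker/cp-test.txt multinode-731545-m02:/home/docker/cp-test.txt
    minikube -p multinode-731545 ssh -n multinode-731545-m02 "sudo cat /home/docker/cp-test.txt"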

TestMultiNode/serial/StopNode (2.53s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-731545 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-731545 node stop m03: (1.312924679s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-731545 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-731545 status: exit status 7 (543.395155ms)

-- stdout --
	multinode-731545
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-731545-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-731545-m03
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-731545 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-731545 status --alsologtostderr: exit status 7 (668.835675ms)

-- stdout --
	multinode-731545
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-731545-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-731545-m03
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
** stderr ** 
	I1102 13:54:59.499171  401914 out.go:360] Setting OutFile to fd 1 ...
	I1102 13:54:59.499310  401914 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:54:59.499321  401914 out.go:374] Setting ErrFile to fd 2...
	I1102 13:54:59.499340  401914 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:54:59.499610  401914 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-293314/.minikube/bin
	I1102 13:54:59.499829  401914 out.go:368] Setting JSON to false
	I1102 13:54:59.499874  401914 mustload.go:66] Loading cluster: multinode-731545
	I1102 13:54:59.499950  401914 notify.go:221] Checking for updates...
	I1102 13:54:59.500308  401914 config.go:182] Loaded profile config "multinode-731545": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:54:59.500328  401914 status.go:174] checking status of multinode-731545 ...
	I1102 13:54:59.501169  401914 cli_runner.go:164] Run: docker container inspect multinode-731545 --format={{.State.Status}}
	I1102 13:54:59.520798  401914 status.go:371] multinode-731545 host status = "Running" (err=<nil>)
	I1102 13:54:59.520824  401914 host.go:66] Checking if "multinode-731545" exists ...
	I1102 13:54:59.521114  401914 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-731545
	I1102 13:54:59.555881  401914 host.go:66] Checking if "multinode-731545" exists ...
	I1102 13:54:59.556199  401914 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1102 13:54:59.556255  401914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-731545
	I1102 13:54:59.578099  401914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33273 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/multinode-731545/id_rsa Username:docker}
	I1102 13:54:59.680320  401914 ssh_runner.go:195] Run: systemctl --version
	I1102 13:54:59.686851  401914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1102 13:54:59.700675  401914 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 13:54:59.785799  401914 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-02 13:54:59.767382352 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1102 13:54:59.786338  401914 kubeconfig.go:125] found "multinode-731545" server: "https://192.168.67.2:8443"
	I1102 13:54:59.786372  401914 api_server.go:166] Checking apiserver status ...
	I1102 13:54:59.786430  401914 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1102 13:54:59.798206  401914 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1274/cgroup
	I1102 13:54:59.806580  401914 api_server.go:182] apiserver freezer: "4:freezer:/docker/bca02ad63527862e6a2cc44a1d54511f00d7ae807d3d8776428572b3bbc08d4b/crio/crio-cd9391a4ab6ec1beb867b77ac93a3025e3c2d632bdeade84b4bea9991cf37f75"
	I1102 13:54:59.806682  401914 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/bca02ad63527862e6a2cc44a1d54511f00d7ae807d3d8776428572b3bbc08d4b/crio/crio-cd9391a4ab6ec1beb867b77ac93a3025e3c2d632bdeade84b4bea9991cf37f75/freezer.state
	I1102 13:54:59.814376  401914 api_server.go:204] freezer state: "THAWED"
	I1102 13:54:59.814403  401914 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1102 13:54:59.822990  401914 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1102 13:54:59.823017  401914 status.go:463] multinode-731545 apiserver status = Running (err=<nil>)
	I1102 13:54:59.823029  401914 status.go:176] multinode-731545 status: &{Name:multinode-731545 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1102 13:54:59.823046  401914 status.go:174] checking status of multinode-731545-m02 ...
	I1102 13:54:59.823353  401914 cli_runner.go:164] Run: docker container inspect multinode-731545-m02 --format={{.State.Status}}
	I1102 13:54:59.845002  401914 status.go:371] multinode-731545-m02 host status = "Running" (err=<nil>)
	I1102 13:54:59.845029  401914 host.go:66] Checking if "multinode-731545-m02" exists ...
	I1102 13:54:59.845383  401914 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-731545-m02
	I1102 13:54:59.862912  401914 host.go:66] Checking if "multinode-731545-m02" exists ...
	I1102 13:54:59.863223  401914 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1102 13:54:59.863268  401914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-731545-m02
	I1102 13:54:59.880893  401914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33278 SSHKeyPath:/home/jenkins/minikube-integration/21808-293314/.minikube/machines/multinode-731545-m02/id_rsa Username:docker}
	I1102 13:54:59.984141  401914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1102 13:55:00.003359  401914 status.go:176] multinode-731545-m02 status: &{Name:multinode-731545-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1102 13:55:00.003405  401914 status.go:174] checking status of multinode-731545-m03 ...
	I1102 13:55:00.003731  401914 cli_runner.go:164] Run: docker container inspect multinode-731545-m03 --format={{.State.Status}}
	I1102 13:55:00.110328  401914 status.go:371] multinode-731545-m03 host status = "Stopped" (err=<nil>)
	I1102 13:55:00.110353  401914 status.go:384] host is not running, skipping remaining checks
	I1102 13:55:00.110361  401914 status.go:176] multinode-731545-m03 status: &{Name:multinode-731545-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.53s)
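
Note that status itself exits 7 once any node is down, which is exactly what the Non-zero exit assertions above rely on. A sketch of leaning on that in a script:

    # stop one worker; status reports the degraded node and exits non-zero (7)
    minikube -p multinode-731545 node stop m03
    minikube -p multinode-731545 status || echo "cluster degraded (status exited $?)"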

TestMultiNode/serial/StartAfterStop (8.85s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-731545 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-731545 node start m03 -v=5 --alsologtostderr: (8.052380914s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-731545 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.85s)

TestMultiNode/serial/RestartKeepsNodes (80.75s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-731545
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-731545
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-731545: (25.100840701s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-731545 --wait=true -v=5 --alsologtostderr
E1102 13:55:59.510835  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/addons-230560/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-731545 --wait=true -v=5 --alsologtostderr: (55.532456789s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-731545
--- PASS: TestMultiNode/serial/RestartKeepsNodes (80.75s)

TestMultiNode/serial/DeleteNode (5.71s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-731545 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-731545 node delete m03: (5.02423846s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-731545 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.71s)
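
The readiness assertion used after node removal (and again after the restarts below) is a single go-template over node conditions. A sketch with the quoting flattened:

    # print one True/False line per node, taken from the Ready condition
    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'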

TestMultiNode/serial/StopMultiNode (23.97s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-731545 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-731545 stop: (23.786703878s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-731545 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-731545 status: exit status 7 (88.659951ms)

-- stdout --
	multinode-731545
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-731545-m02
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-731545 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-731545 status --alsologtostderr: exit status 7 (95.842624ms)

-- stdout --
	multinode-731545
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-731545-m02
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
** stderr ** 
	I1102 13:56:59.361554  409833 out.go:360] Setting OutFile to fd 1 ...
	I1102 13:56:59.361664  409833 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:56:59.361674  409833 out.go:374] Setting ErrFile to fd 2...
	I1102 13:56:59.361678  409833 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 13:56:59.361925  409833 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-293314/.minikube/bin
	I1102 13:56:59.362101  409833 out.go:368] Setting JSON to false
	I1102 13:56:59.362133  409833 mustload.go:66] Loading cluster: multinode-731545
	I1102 13:56:59.362162  409833 notify.go:221] Checking for updates...
	I1102 13:56:59.362505  409833 config.go:182] Loaded profile config "multinode-731545": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 13:56:59.362515  409833 status.go:174] checking status of multinode-731545 ...
	I1102 13:56:59.363063  409833 cli_runner.go:164] Run: docker container inspect multinode-731545 --format={{.State.Status}}
	I1102 13:56:59.381709  409833 status.go:371] multinode-731545 host status = "Stopped" (err=<nil>)
	I1102 13:56:59.381731  409833 status.go:384] host is not running, skipping remaining checks
	I1102 13:56:59.381738  409833 status.go:176] multinode-731545 status: &{Name:multinode-731545 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1102 13:56:59.381769  409833 status.go:174] checking status of multinode-731545-m02 ...
	I1102 13:56:59.382069  409833 cli_runner.go:164] Run: docker container inspect multinode-731545-m02 --format={{.State.Status}}
	I1102 13:56:59.408197  409833 status.go:371] multinode-731545-m02 host status = "Stopped" (err=<nil>)
	I1102 13:56:59.408226  409833 status.go:384] host is not running, skipping remaining checks
	I1102 13:56:59.408234  409833 status.go:176] multinode-731545-m02 status: &{Name:multinode-731545-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.97s)
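Exit status 7 from "minikube status" is the expected result here: the status exit code appears to be a bitmask in which the host, kubelet, and apiserver each contribute a bit, so 7 means all three are down after a stop. A sketch of tolerating that in a script:

	# Sketch: query a stopped profile without aborting a `set -e` script;
	# exit code 7 (host, kubelet, and apiserver all down) is the normal post-stop result
	out/minikube-linux-arm64 -p multinode-731545 status || echo "status exited with $?"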

TestMultiNode/serial/RestartMultiNode (51.31s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-731545 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-731545 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (50.614454495s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-731545 status --alsologtostderr
E1102 13:57:50.129188  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/functional-082350/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (51.31s)

TestMultiNode/serial/ValidateNameConflict (37.74s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-731545
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-731545-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-731545-m02 --driver=docker  --container-runtime=crio: exit status 14 (95.584259ms)

-- stdout --
	* [multinode-731545-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21808
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21808-293314/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-293314/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-731545-m02' is duplicated with machine name 'multinode-731545-m02' in profile 'multinode-731545'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-731545-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-731545-m03 --driver=docker  --container-runtime=crio: (35.152759093s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-731545
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-731545: exit status 80 (349.120735ms)

-- stdout --
	* Adding node m03 to cluster multinode-731545 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-731545-m03 already exists in multinode-731545-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-731545-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-731545-m03: (2.087081912s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (37.74s)
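The conflict the test provokes exists because minikube derives node machine names from the profile name (multinode-731545-m02, -m03, ...), so a new profile can collide with an existing machine. One way to see which names are already taken (a sketch; the .valid[].Name path assumes minikube's current "profile list" JSON shape):

	# Sketch: enumerate machine/profile names already in use before creating a profile
	out/minikube-linux-arm64 profile list --output=json | jq -r '.valid[].Name'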

TestPreload (126.33s)
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-997485 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-997485 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (1m0.867085923s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-997485 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-arm64 -p test-preload-997485 image pull gcr.io/k8s-minikube/busybox: (2.534015111s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-997485
preload_test.go:57: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-997485: (5.955651326s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-997485 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:65: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-997485 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (54.274768267s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-997485 image list
helpers_test.go:175: Cleaning up "test-preload-997485" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-997485
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-997485: (2.457891361s)
--- PASS: TestPreload (126.33s)
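The sequence above is the preload contract in miniature: with --preload=false the busybox image is pulled into the node directly, and after a stop/start cycle it must still be present in the crio image store. Reproduced by hand (a sketch; "preload-demo" is a placeholder profile name):

	# Sketch: verify that a manually pulled image survives a stop/start cycle
	minikube start -p preload-demo --driver=docker --container-runtime=crio --preload=false
	minikube -p preload-demo image pull gcr.io/k8s-minikube/busybox
	minikube stop -p preload-demo
	minikube start -p preload-demo
	minikube -p preload-demo image list | grep busybox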

TestInsufficientStorage (13.95s)
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-343181 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-343181 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (11.367154674s)

-- stdout --
	{"specversion":"1.0","id":"5a18d874-e0a1-4b3e-b701-65b485752e4f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-343181] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f80d0c67-214d-4ff9-a44e-9ce3fe07dfa0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21808"}}
	{"specversion":"1.0","id":"0e83e665-4d5c-40bd-baea-2d9b2e8fc0e4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"0e6a1860-2fd0-4118-af24-bc65590dc954","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21808-293314/kubeconfig"}}
	{"specversion":"1.0","id":"1891bd30-eb37-4940-98f6-8ab3b44639d4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-293314/.minikube"}}
	{"specversion":"1.0","id":"eace0139-971f-447e-b670-1cba796b0a06","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"74a78a5f-cd52-4083-bdd2-fa07b52f9fc2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"70de20fd-486f-424c-b8f0-8e2cb1b4e76c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"ca44c288-6672-4c1e-936f-67efaa9b1723","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"78c921ad-8133-4320-a3a4-8adbed98ea38","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"a97b8bd9-638c-4755-aa15-83dffb28c141","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"475eea16-8db1-4356-8c25-f3b490a9a562","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-343181\" primary control-plane node in \"insufficient-storage-343181\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"f724de0a-72ff-4bc2-8a7a-a41be73b2e9f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1760939008-21773 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"37ad46d5-f817-4197-ba0d-a32b3aefde6b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"1fccb7d9-54bd-4026-aa37-c7b2221a9910","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-343181 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-343181 --output=json --layout=cluster: exit status 7 (308.116309ms)

-- stdout --
	{"Name":"insufficient-storage-343181","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-343181","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1102 14:01:31.825633  426072 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-343181" does not appear in /home/jenkins/minikube-integration/21808-293314/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-343181 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-343181 --output=json --layout=cluster: exit status 7 (303.428135ms)

-- stdout --
	{"Name":"insufficient-storage-343181","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-343181","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1102 14:01:32.133721  426138 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-343181" does not appear in /home/jenkins/minikube-integration/21808-293314/kubeconfig
	E1102 14:01:32.143507  426138 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/insufficient-storage-343181/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-343181" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-343181
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-343181: (1.974884069s)
--- PASS: TestInsufficientStorage (13.95s)
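With --output=json, every progress step and error is emitted as one CloudEvents JSON object per line, so the RSRC_DOCKER_STORAGE failure above is machine-readable. A sketch for pulling the error message out of the stream ("insufficient-demo" is a placeholder profile):

	# Sketch: extract the error event's message from minikube's JSON event stream
	minikube start -p insufficient-demo --output=json \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'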

TestRunningBinaryUpgrade (55.92s)
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.942035865 start -p running-upgrade-794461 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.942035865 start -p running-upgrade-794461 --memory=3072 --vm-driver=docker  --container-runtime=crio: (34.732208741s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-794461 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-794461 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (18.404368324s)
helpers_test.go:175: Cleaning up "running-upgrade-794461" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-794461
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-794461: (1.989208964s)
--- PASS: TestRunningBinaryUpgrade (55.92s)

TestKubernetesUpgrade (215.37s)
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-271267 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-271267 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (49.655077586s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-271267
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-271267: (1.488288822s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-271267 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-271267 status --format={{.Host}}: exit status 7 (99.522745ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-271267 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-271267 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (2m10.259816491s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-271267 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-271267 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-271267 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (118.244898ms)

-- stdout --
	* [kubernetes-upgrade-271267] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21808
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21808-293314/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-293314/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-271267
	    minikube start -p kubernetes-upgrade-271267 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2712672 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-271267 --kubernetes-version=v1.34.1
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-271267 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-271267 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (31.114778317s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-271267" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-271267
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-271267: (2.503965573s)
--- PASS: TestKubernetesUpgrade (215.37s)
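Note the asymmetry the test exercises: upgrades (v1.28.0 to v1.34.1) restart the existing cluster in place, while the attempted downgrade is refused up front with K8S_DOWNGRADE_UNSUPPORTED. The only downgrade path is the one minikube itself prints above:

	# Recreate at the older version (taken from the suggestion in the error output)
	minikube delete -p kubernetes-upgrade-271267
	minikube start -p kubernetes-upgrade-271267 --kubernetes-version=v1.28.0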

TestMissingContainerUpgrade (114.05s)
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.291224776 start -p missing-upgrade-835871 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.291224776 start -p missing-upgrade-835871 --memory=3072 --driver=docker  --container-runtime=crio: (1m4.909673232s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-835871
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-835871
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-835871 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1102 14:02:50.129217  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/functional-082350/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-835871 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (45.538808387s)
helpers_test.go:175: Cleaning up "missing-upgrade-835871" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-835871
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-835871: (1.987585698s)
--- PASS: TestMissingContainerUpgrade (114.05s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-878391 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-878391 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (112.902818ms)

-- stdout --
	* [NoKubernetes-878391] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21808
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21808-293314/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-293314/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)
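The flag validation fires before any cluster work begins: --no-kubernetes combined with an explicit --kubernetes-version exits 14 (MK_USAGE). When the version comes from a globally pinned config value rather than the command line, the fix minikube suggests is:

	# From the error text above: clear the global pin, then retry without a version
	minikube config unset kubernetes-version
	minikube start -p NoKubernetes-878391 --no-kubernetes --driver=docker --container-runtime=crio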

TestNoKubernetes/serial/StartWithK8s (48.69s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-878391 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-878391 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (48.208654694s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-878391 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (48.69s)

TestNoKubernetes/serial/StartWithStopK8s (30.37s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-878391 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-878391 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (28.078513072s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-878391 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-878391 status -o json: exit status 2 (322.342651ms)

-- stdout --
	{"Name":"NoKubernetes-878391","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-878391
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-878391: (1.96712427s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (30.37s)

TestNoKubernetes/serial/Start (10.81s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-878391 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-878391 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (10.80585808s)
--- PASS: TestNoKubernetes/serial/Start (10.81s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-878391 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-878391 "sudo systemctl is-active --quiet service kubelet": exit status 1 (286.581757ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)
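The "exit status 1 / ssh: Process exited with status 3" pair is the pass condition: systemctl is-active exits 0 only for an active unit, and 3 is its usual "inactive" result, which proves the kubelet is not running. Checked by hand it looks like this (a sketch):

	# Sketch: assert kubelet is inactive inside the node; non-zero is the expected outcome
	minikube ssh -p NoKubernetes-878391 "sudo systemctl is-active --quiet service kubelet" \
	  || echo "kubelet inactive, as expected"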

TestNoKubernetes/serial/ProfileList (1s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.00s)

TestNoKubernetes/serial/Stop (1.29s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-878391
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-878391: (1.287087505s)
--- PASS: TestNoKubernetes/serial/Stop (1.29s)

TestNoKubernetes/serial/StartNoArgs (8.6s)
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-878391 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-878391 --driver=docker  --container-runtime=crio: (8.599681762s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.60s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.37s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-878391 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-878391 "sudo systemctl is-active --quiet service kubelet": exit status 1 (374.612955ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.37s)

TestStoppedBinaryUpgrade/Setup (0.76s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.76s)

TestStoppedBinaryUpgrade/Upgrade (66.3s)
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.21377286 start -p stopped-upgrade-027883 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.21377286 start -p stopped-upgrade-027883 --memory=3072 --vm-driver=docker  --container-runtime=crio: (38.416450842s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.21377286 -p stopped-upgrade-027883 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.21377286 -p stopped-upgrade-027883 stop: (1.293884678s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-027883 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-027883 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (26.59137057s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (66.30s)
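The shape of the upgrade test: an old release binary (the /tmp copy downloaded by the suite) creates and then stops the cluster, and the binary under test must restart it in place. As a sketch with placeholder binary path and profile name:

	# Sketch: old binary provisions and stops; new binary must adopt the cluster
	/tmp/minikube-v1.32.0 start -p upgrade-demo --vm-driver=docker --container-runtime=crio
	/tmp/minikube-v1.32.0 -p upgrade-demo stop
	out/minikube-linux-arm64 start -p upgrade-demo --driver=docker --container-runtime=crio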

TestStoppedBinaryUpgrade/MinikubeLogs (1.66s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-027883
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-027883: (1.658168241s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.66s)

TestPause/serial/Start (86.45s)
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-061518 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
E1102 14:05:59.510696  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/addons-230560/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-061518 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m26.451697785s)
--- PASS: TestPause/serial/Start (86.45s)

TestPause/serial/SecondStartNoReconfiguration (40.42s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-061518 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-061518 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (40.386408262s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (40.42s)

TestNetworkPlugins/group/false (3.85s)
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-143736 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-143736 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (197.874375ms)

-- stdout --
	* [false-143736] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21808
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21808-293314/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-293314/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1102 14:07:37.866343  461820 out.go:360] Setting OutFile to fd 1 ...
	I1102 14:07:37.866460  461820 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 14:07:37.866529  461820 out.go:374] Setting ErrFile to fd 2...
	I1102 14:07:37.866542  461820 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1102 14:07:37.866898  461820 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21808-293314/.minikube/bin
	I1102 14:07:37.867393  461820 out.go:368] Setting JSON to false
	I1102 14:07:37.868308  461820 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10210,"bootTime":1762082248,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1102 14:07:37.868382  461820 start.go:143] virtualization:  
	I1102 14:07:37.871966  461820 out.go:179] * [false-143736] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1102 14:07:37.875850  461820 out.go:179]   - MINIKUBE_LOCATION=21808
	I1102 14:07:37.875961  461820 notify.go:221] Checking for updates...
	I1102 14:07:37.881697  461820 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1102 14:07:37.884582  461820 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21808-293314/kubeconfig
	I1102 14:07:37.887451  461820 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21808-293314/.minikube
	I1102 14:07:37.890344  461820 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1102 14:07:37.893381  461820 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1102 14:07:37.896848  461820 config.go:182] Loaded profile config "pause-061518": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1102 14:07:37.896961  461820 driver.go:422] Setting default libvirt URI to qemu:///system
	I1102 14:07:37.927947  461820 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1102 14:07:37.928073  461820 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1102 14:07:37.986102  461820 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-02 14:07:37.975412553 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1102 14:07:37.986209  461820 docker.go:319] overlay module found
	I1102 14:07:37.989650  461820 out.go:179] * Using the docker driver based on user configuration
	I1102 14:07:37.992529  461820 start.go:309] selected driver: docker
	I1102 14:07:37.992552  461820 start.go:930] validating driver "docker" against <nil>
	I1102 14:07:37.992567  461820 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1102 14:07:37.996147  461820 out.go:203] 
	W1102 14:07:37.999044  461820 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1102 14:07:38.001911  461820 out.go:203] 

** /stderr **
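The exit-14 here is pure flag validation: the crio runtime needs a CNI plugin, so --cni=false is rejected before any container is created. Any concrete CNI choice passes this gate (a sketch):

	# Sketch: crio with an explicit CNI is accepted where --cni=false is not
	minikube start -p false-143736 --cni=bridge --driver=docker --container-runtime=crio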
net_test.go:88: 
----------------------- debugLogs start: false-143736 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-143736

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-143736

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-143736

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-143736

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-143736

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-143736

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-143736

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-143736

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-143736

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-143736

>>> host: /etc/nsswitch.conf:
* Profile "false-143736" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-143736"

>>> host: /etc/hosts:
* Profile "false-143736" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-143736"

>>> host: /etc/resolv.conf:
* Profile "false-143736" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-143736"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-143736

>>> host: crictl pods:
* Profile "false-143736" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-143736"

>>> host: crictl containers:
* Profile "false-143736" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-143736"

>>> k8s: describe netcat deployment:
error: context "false-143736" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-143736" does not exist

>>> k8s: netcat logs:
error: context "false-143736" does not exist

>>> k8s: describe coredns deployment:
error: context "false-143736" does not exist

>>> k8s: describe coredns pods:
error: context "false-143736" does not exist

>>> k8s: coredns logs:
error: context "false-143736" does not exist

>>> k8s: describe api server pod(s):
error: context "false-143736" does not exist

>>> k8s: api server logs:
error: context "false-143736" does not exist

>>> host: /etc/cni:
* Profile "false-143736" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-143736"

>>> host: ip a s:
* Profile "false-143736" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-143736"

>>> host: ip r s:
* Profile "false-143736" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-143736"

>>> host: iptables-save:
* Profile "false-143736" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-143736"

>>> host: iptables table nat:
* Profile "false-143736" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-143736"

>>> k8s: describe kube-proxy daemon set:
error: context "false-143736" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-143736" does not exist

>>> k8s: kube-proxy logs:
error: context "false-143736" does not exist

>>> host: kubelet daemon status:
* Profile "false-143736" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-143736"

>>> host: kubelet daemon config:
* Profile "false-143736" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-143736"

>>> k8s: kubelet logs:
* Profile "false-143736" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-143736"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-143736" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-143736"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-143736" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-143736"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21808-293314/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 02 Nov 2025 14:07:13 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-061518
contexts:
- context:
    cluster: pause-061518
    extensions:
    - extension:
        last-update: Sun, 02 Nov 2025 14:07:13 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-061518
  name: pause-061518
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-061518
  user:
    client-certificate: /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/pause-061518/client.crt
    client-key: /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/pause-061518/client.key
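
Note: the config dump above shows why every probe of the deleted false-143736 profile fails: only the pause-061518 entry survives in the shared kubeconfig and current-context is empty. A minimal sketch of inspecting and re-selecting the surviving context from a shell (context name taken from the dump above):

kubectl config get-contexts
kubectl config use-context pause-061518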

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-143736

>>> host: docker daemon status:
* Profile "false-143736" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-143736"

>>> host: docker daemon config:
* Profile "false-143736" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-143736"

>>> host: /etc/docker/daemon.json:
* Profile "false-143736" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-143736"

>>> host: docker system info:
* Profile "false-143736" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-143736"

>>> host: cri-docker daemon status:
* Profile "false-143736" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-143736"

>>> host: cri-docker daemon config:
* Profile "false-143736" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-143736"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-143736" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-143736"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-143736" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-143736"

>>> host: cri-dockerd version:
* Profile "false-143736" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-143736"

>>> host: containerd daemon status:
* Profile "false-143736" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-143736"

>>> host: containerd daemon config:
* Profile "false-143736" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-143736"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-143736" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-143736"

>>> host: /etc/containerd/config.toml:
* Profile "false-143736" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-143736"

>>> host: containerd config dump:
* Profile "false-143736" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-143736"

>>> host: crio daemon status:
* Profile "false-143736" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-143736"

>>> host: crio daemon config:
* Profile "false-143736" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-143736"

>>> host: /etc/crio:
* Profile "false-143736" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-143736"

>>> host: crio config:
* Profile "false-143736" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-143736"

----------------------- debugLogs end: false-143736 [took: 3.499497595s] --------------------------------
helpers_test.go:175: Cleaning up "false-143736" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-143736
--- PASS: TestNetworkPlugins/group/false (3.85s)

TestStartStop/group/old-k8s-version/serial/FirstStart (61.77s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-873713 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-873713 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (1m1.769333965s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (61.77s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.42s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-873713 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [96331336-0f9f-4a4a-aecf-1aac5a7191da] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [96331336-0f9f-4a4a-aecf-1aac5a7191da] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003857258s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-873713 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.42s)
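
Note: testdata/busybox.yaml itself is not reproduced in this report. A hypothetical equivalent of the DeployApp step, inferred from the integration-test=busybox label the test waits on and the busybox image listed later under VerifyKubernetesImages, would be:

kubectl --context old-k8s-version-873713 create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  labels:
    integration-test: busybox
spec:
  containers:
  - name: busybox
    # image name is an assumption based on the VerifyKubernetesImages output below
    image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
    command: ["sleep", "3600"]
EOF
kubectl --context old-k8s-version-873713 exec busybox -- /bin/sh -c "ulimit -n"

The final exec mirrors the logged step: it checks the open-file-descriptor limit inside the container.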

TestStartStop/group/old-k8s-version/serial/Stop (12.08s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-873713 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-873713 --alsologtostderr -v=3: (12.078373349s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.08s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-873713 -n old-k8s-version-873713
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-873713 -n old-k8s-version-873713: exit status 7 (74.11917ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-873713 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)
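
Note: exit status 7 from "minikube status" signals a stopped host rather than a command failure, which is why the test records it as "may be ok" before enabling the dashboard addon. Checking the code by hand:

out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-873713 -n old-k8s-version-873713; echo "exit=$?"
# expected: "Stopped" on stdout and exit=7 for a cleanly stopped profile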

TestStartStop/group/old-k8s-version/serial/SecondStart (53.21s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-873713 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
E1102 14:10:42.588567  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/addons-230560/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 14:10:53.196052  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/functional-082350/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 14:10:59.511115  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/addons-230560/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-873713 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (52.804593211s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-873713 -n old-k8s-version-873713
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (53.21s)
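
Note: the interleaved cert_rotation errors appear to be harmless noise: the test binary's kubeconfig still references client certificates of the addons-230560 and functional-082350 profiles, which were deleted earlier in the run. A hypothetical manual cleanup (entry names taken from the error paths; the harness itself simply tolerates the messages):

kubectl config delete-context addons-230560   # hypothetical: removes the stale context entry
kubectl config delete-user addons-230560      # hypothetical: removes the user entry pointing at the deleted client.crt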

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-7nd7h" [176cf84b-bc2d-4f64-9bd0-b6375d4daaa5] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004203088s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-7nd7h" [176cf84b-bc2d-4f64-9bd0-b6375d4daaa5] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003355041s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-873713 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-873713 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)
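
Note: "image list --format=json" emits a JSON array of image records, which is what the test scans for non-minikube images. Assuming jq is installed and the records expose a repoTags field (as current minikube releases do), the tags can be pulled out with:

out/minikube-linux-arm64 -p old-k8s-version-873713 image list --format=json | jq -r '.[].repoTags[]'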

TestStartStop/group/no-preload/serial/FirstStart (63.43s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-150469 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1102 14:12:50.129013  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/functional-082350/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-150469 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m3.429527728s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (63.43s)

TestStartStop/group/no-preload/serial/DeployApp (10.3s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-150469 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [e212db07-4bb6-4dba-8d5f-2fd867c01398] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [e212db07-4bb6-4dba-8d5f-2fd867c01398] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.004063821s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-150469 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.30s)

TestStartStop/group/no-preload/serial/Stop (12.01s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-150469 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-150469 --alsologtostderr -v=3: (12.00790514s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.01s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-150469 -n no-preload-150469
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-150469 -n no-preload-150469: exit status 7 (90.022272ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-150469 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/no-preload/serial/SecondStart (55.29s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-150469 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-150469 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (54.760482679s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-150469 -n no-preload-150469
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (55.29s)

TestStartStop/group/embed-certs/serial/FirstStart (86.41s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-955646 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-955646 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m26.405436737s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (86.41s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-px4fq" [557a2e1b-92a0-46f3-8447-2e65294b752c] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003926184s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-px4fq" [557a2e1b-92a0-46f3-8447-2e65294b752c] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004117717s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-150469 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-150469 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (83.3s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-786183 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-786183 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m23.297748375s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (83.30s)

TestStartStop/group/embed-certs/serial/DeployApp (8.38s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-955646 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [a6c0c8bf-d346-459c-bd70-f9b18f1f6a71] Pending
helpers_test.go:352: "busybox" [a6c0c8bf-d346-459c-bd70-f9b18f1f6a71] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [a6c0c8bf-d346-459c-bd70-f9b18f1f6a71] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.004209732s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-955646 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.38s)

TestStartStop/group/embed-certs/serial/Stop (12.04s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-955646 --alsologtostderr -v=3
E1102 14:15:12.104889  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/old-k8s-version-873713/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 14:15:12.111252  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/old-k8s-version-873713/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 14:15:12.123132  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/old-k8s-version-873713/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 14:15:12.144660  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/old-k8s-version-873713/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 14:15:12.186042  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/old-k8s-version-873713/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 14:15:12.267627  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/old-k8s-version-873713/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 14:15:12.429115  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/old-k8s-version-873713/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 14:15:12.750450  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/old-k8s-version-873713/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 14:15:13.397521  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/old-k8s-version-873713/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 14:15:14.679842  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/old-k8s-version-873713/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 14:15:17.242028  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/old-k8s-version-873713/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 14:15:22.363787  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/old-k8s-version-873713/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-955646 --alsologtostderr -v=3: (12.041300547s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.04s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-955646 -n embed-certs-955646
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-955646 -n embed-certs-955646: exit status 7 (71.246104ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-955646 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/embed-certs/serial/SecondStart (54.83s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-955646 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1102 14:15:32.605871  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/old-k8s-version-873713/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 14:15:53.087183  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/old-k8s-version-873713/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-955646 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (54.46441288s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-955646 -n embed-certs-955646
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (54.83s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.35s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-786183 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [948bb4bd-a717-4efb-ab1a-c2f257304113] Pending
E1102 14:15:59.511457  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/addons-230560/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [948bb4bd-a717-4efb-ab1a-c2f257304113] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [948bb4bd-a717-4efb-ab1a-c2f257304113] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.004188734s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-786183 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.35s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.04s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-786183 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-786183 --alsologtostderr -v=3: (12.043747979s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.04s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.03s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-hp5zz" [d15ea483-7871-4164-8ff8-b05f21829b23] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.024660889s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.03s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-786183 -n default-k8s-diff-port-786183
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-786183 -n default-k8s-diff-port-786183: exit status 7 (74.103598ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-786183 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (56.68s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-786183 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-786183 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (56.12370901s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-786183 -n default-k8s-diff-port-786183
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (56.68s)
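
Note: this profile pins the API server to port 8444 via --apiserver-port=8444. A quick check that the restarted cluster really serves on that port, using only the context name from the log:

kubectl --context default-k8s-diff-port-786183 cluster-info
# the reported control-plane URL should end in :8444, matching the flag above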

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.15s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-hp5zz" [d15ea483-7871-4164-8ff8-b05f21829b23] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005066307s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-955646 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.15s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.32s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-955646 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.32s)

TestStartStop/group/newest-cni/serial/FirstStart (44.58s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-352233 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-352233 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (44.577611254s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (44.58s)
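
Note: the start line passes --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16, so the node's pod CIDR should be carved from that range. A sketch of verifying it once the cluster is up:

kubectl --context newest-cni-352233 get nodes -o jsonpath='{.items[0].spec.podCIDR}'
# expected: a subnet inside 10.42.0.0/16, e.g. 10.42.0.0/24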

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-b2vjd" [a6ab03d5-ff18-456d-9305-69166308109a] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004250973s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.18s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-b2vjd" [a6ab03d5-ff18-456d-9305-69166308109a] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.016780319s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-786183 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.18s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/Stop (1.36s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-352233 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-352233 --alsologtostderr -v=3: (1.362047387s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.36s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-352233 -n newest-cni-352233
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-352233 -n newest-cni-352233: exit status 7 (73.436339ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-352233 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/newest-cni/serial/SecondStart (19.76s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-352233 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-352233 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (18.891902146s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-352233 -n newest-cni-352233
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (19.76s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.35s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-786183 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.35s)

TestNetworkPlugins/group/auto/Start (84.25s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-143736 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-143736 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m24.249702297s)
--- PASS: TestNetworkPlugins/group/auto/Start (84.25s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.35s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-352233 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.35s)

TestNetworkPlugins/group/kindnet/Start (85.38s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-143736 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E1102 14:18:04.320032  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/no-preload-150469/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 14:18:14.561551  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/no-preload-150469/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 14:18:35.042813  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/no-preload-150469/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-143736 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m25.381968799s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (85.38s)

TestNetworkPlugins/group/auto/KubeletFlags (0.31s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-143736 "pgrep -a kubelet"
I1102 14:19:05.176489  295174 config.go:182] Loaded profile config "auto-143736": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

TestNetworkPlugins/group/auto/NetCatPod (10.29s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-143736 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-ph4bd" [2bc9c42e-8396-4cfe-b073-0fdcf6d8b3fd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-ph4bd" [2bc9c42e-8396-4cfe-b073-0fdcf6d8b3fd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.003531272s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.29s)

TestNetworkPlugins/group/auto/DNS (0.16s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-143736 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.16s)

TestNetworkPlugins/group/auto/Localhost (0.14s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-143736 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

TestNetworkPlugins/group/auto/HairPin (0.14s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-143736 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)
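
Note: the DNS, Localhost, and HairPin steps above exercise three distinct CNI paths: cluster DNS resolution, a pod dialing itself over localhost, and a pod reaching itself back through its own Service name (hairpin NAT). The trio can be reproduced against the same deployment with the commands taken verbatim from the logged steps:

kubectl --context auto-143736 exec deployment/netcat -- nslookup kubernetes.default
kubectl --context auto-143736 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
kubectl --context auto-143736 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"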

TestNetworkPlugins/group/kindnet/ControllerPod (6s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-ttq75" [9a819557-c84f-43c5-a8db-e22fd78e5d31] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003535333s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.00s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-143736 "pgrep -a kubelet"
I1102 14:19:31.544327  295174 config.go:182] Loaded profile config "kindnet-143736": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.35s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (12.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-143736 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-lpb2w" [19f0acd1-5b97-42d9-abcd-cf6bd827744f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-lpb2w" [19f0acd1-5b97-42d9-abcd-cf6bd827744f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.004528934s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.36s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (71.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-143736 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-143736 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m11.237642697s)
--- PASS: TestNetworkPlugins/group/calico/Start (71.24s)
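Every Start in this group follows the same shape: one fresh profile per plugin, selected with --cni. Built-in values include auto, bridge, calico, cilium, flannel and kindnet, and a path to a CNI manifest is also accepted (the custom-flannel run below passes testdata/kube-flannel.yaml that way). The calico invocation, reduced to the flags that matter:

minikube start -p calico-143736 --memory=3072 --cni=calico --driver=docker --container-runtime=crio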

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-143736 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-143736 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.24s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-143736 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.25s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (72.05s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-143736 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
E1102 14:20:12.105257  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/old-k8s-version-873713/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 14:20:37.928890  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/no-preload-150469/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 14:20:39.816342  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/old-k8s-version-873713/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-143736 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m12.054790142s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (72.05s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-hbc78" [65efad73-b3fe-4a63-9b5f-a38f3dec1c1e] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-hbc78" [65efad73-b3fe-4a63-9b5f-a38f3dec1c1e] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.00387013s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.47s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-143736 "pgrep -a kubelet"
I1102 14:20:55.862299  295174 config.go:182] Loaded profile config "calico-143736": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.47s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (12.5s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-143736 replace --force -f testdata/netcat-deployment.yaml
I1102 14:20:56.358185  295174 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-fjn8n" [40ec7bc3-4e29-425e-a9f7-8b51ab6fa720] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1102 14:20:58.781973  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/default-k8s-diff-port-786183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 14:20:58.788279  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/default-k8s-diff-port-786183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 14:20:58.799647  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/default-k8s-diff-port-786183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 14:20:58.821044  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/default-k8s-diff-port-786183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 14:20:58.862475  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/default-k8s-diff-port-786183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 14:20:58.943796  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/default-k8s-diff-port-786183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 14:20:59.105707  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/default-k8s-diff-port-786183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 14:20:59.427293  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/default-k8s-diff-port-786183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 14:20:59.510692  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/addons-230560/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 14:21:00.069561  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/default-k8s-diff-port-786183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 14:21:01.350888  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/default-k8s-diff-port-786183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-fjn8n" [40ec7bc3-4e29-425e-a9f7-8b51ab6fa720] Running
E1102 14:21:03.912439  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/default-k8s-diff-port-786183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.004219764s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.50s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-143736 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-143736 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-143736 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-143736 "pgrep -a kubelet"
I1102 14:21:21.647087  295174 config.go:182] Loaded profile config "custom-flannel-143736": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.43s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (13.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-143736 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-kmnzg" [20e66f5e-6165-44f7-9a89-561599011dea] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-kmnzg" [20e66f5e-6165-44f7-9a89-561599011dea] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 13.003825702s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (13.37s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (85.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-143736 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-143736 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m25.157447209s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (85.16s)
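--enable-default-cni is the legacy spelling for the built-in bridge CNI; current minikube documents it as deprecated in favor of --cni=bridge, so this run should be equivalent to:

minikube start -p enable-default-cni-143736 --memory=3072 --cni=bridge --driver=docker --container-runtime=crio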

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-143736 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-143736 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-143736 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (61.62s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-143736 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E1102 14:22:20.718758  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/default-k8s-diff-port-786183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 14:22:50.129737  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/functional-082350/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1102 14:22:54.066257  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/no-preload-150469/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-143736 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (1m1.616796158s)
--- PASS: TestNetworkPlugins/group/flannel/Start (61.62s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-143736 "pgrep -a kubelet"
I1102 14:22:58.078260  295174 config.go:182] Loaded profile config "enable-default-cni-143736": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-143736 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-wspj8" [38796917-d3c2-40a1-ab49-5597ed5622b0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-wspj8" [38796917-d3c2-40a1-ab49-5597ed5622b0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.004007807s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.27s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-6xnx9" [0827b4a2-eb70-4808-a459-b8871d802cef] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.008682979s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-143736 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.42s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-143736 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-143736 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
I1102 14:23:09.710059  295174 config.go:182] Loaded profile config "flannel-143736": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (10.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-143736 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-kptqf" [8809dd97-3bcf-40a6-9f40-1fe949409b76] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-kptqf" [8809dd97-3bcf-40a6-9f40-1fe949409b76] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.004044668s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.41s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-143736 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.21s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-143736 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-143736 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-143736 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.19s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (85.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-143736 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E1102 14:23:42.640963  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/default-k8s-diff-port-786183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-143736 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m25.134357038s)
--- PASS: TestNetworkPlugins/group/bridge/Start (85.13s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-143736 "pgrep -a kubelet"
I1102 14:24:57.226215  295174 config.go:182] Loaded profile config "bridge-143736": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (10.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-143736 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-qtqzd" [0e86ce96-d579-4629-a037-109753bb5f9a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-qtqzd" [0e86ce96-d579-4629-a037-109753bb5f9a] Running
E1102 14:25:06.163563  295174 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/kindnet-143736/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.003182282s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.26s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-143736 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-143736 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-143736 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                    

Test skip (31/327)

TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0.45s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-513487 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-513487" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-513487
--- SKIP: TestDownloadOnlyKic (0.45s)

                                                
                                    
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestISOImage (0s)

                                                
                                                
=== RUN   TestISOImage
iso_test.go:35: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-720030" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-720030
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                    
TestNetworkPlugins/group/kubenet (3.63s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-143736 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-143736

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-143736

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-143736

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-143736

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-143736

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-143736

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-143736

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-143736

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-143736

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-143736

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-143736" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-143736"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-143736" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-143736"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-143736" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-143736"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-143736

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-143736" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-143736"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-143736" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-143736"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-143736" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-143736" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-143736" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-143736" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-143736" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-143736" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-143736" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-143736" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-143736" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-143736"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-143736" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-143736"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-143736" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-143736"

>>> host: iptables-save:
* Profile "kubenet-143736" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-143736"

>>> host: iptables table nat:
* Profile "kubenet-143736" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-143736"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-143736" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-143736" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-143736" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-143736" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-143736"

>>> host: kubelet daemon config:
* Profile "kubenet-143736" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-143736"

>>> k8s: kubelet logs:
* Profile "kubenet-143736" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-143736"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-143736" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-143736"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-143736" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-143736"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21808-293314/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 02 Nov 2025 14:07:13 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-061518
contexts:
- context:
    cluster: pause-061518
    extensions:
    - extension:
        last-update: Sun, 02 Nov 2025 14:07:13 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-061518
  name: pause-061518
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-061518
  user:
    client-certificate: /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/pause-061518/client.crt
    client-key: /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/pause-061518/client.key
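
Note: the kubectl config dump above is why every probe in this section fails: current-context is empty and the only entry left in the shared kubeconfig is the unrelated pause-061518 profile, so there is nothing that can resolve the kubenet-143736 context. A minimal way to confirm that state from a shell (hypothetical commands, not part of the harness output):

    kubectl config current-context              # errors: current-context is not set
    kubectl config get-contexts                 # lists only pause-061518
    kubectl --context kubenet-143736 get pods   # errors: context "kubenet-143736" does not exist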

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-143736

>>> host: docker daemon status:
* Profile "kubenet-143736" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-143736"

>>> host: docker daemon config:
* Profile "kubenet-143736" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-143736"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-143736" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-143736"

>>> host: docker system info:
* Profile "kubenet-143736" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-143736"

>>> host: cri-docker daemon status:
* Profile "kubenet-143736" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-143736"

>>> host: cri-docker daemon config:
* Profile "kubenet-143736" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-143736"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-143736" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-143736"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-143736" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-143736"

>>> host: cri-dockerd version:
* Profile "kubenet-143736" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-143736"

>>> host: containerd daemon status:
* Profile "kubenet-143736" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-143736"

>>> host: containerd daemon config:
* Profile "kubenet-143736" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-143736"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-143736" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-143736"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-143736" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-143736"

>>> host: containerd config dump:
* Profile "kubenet-143736" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-143736"

>>> host: crio daemon status:
* Profile "kubenet-143736" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-143736"

>>> host: crio daemon config:
* Profile "kubenet-143736" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-143736"

>>> host: /etc/crio:
* Profile "kubenet-143736" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-143736"

>>> host: crio config:
* Profile "kubenet-143736" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-143736"

----------------------- debugLogs end: kubenet-143736 [took: 3.468446721s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-143736" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-143736
--- SKIP: TestNetworkPlugins/group/kubenet (3.63s)
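Note: the debug probes above run even though the kubenet-143736 profile was never started, so every minikube and kubectl command fails the same way. A pre-flight guard would silence this noise; the sketch below is a hypothetical shell wrapper around the binaries the report already uses, not the harness's actual collection logic:

    # Collect debug logs only when the profile actually exists.
    if out/minikube-linux-arm64 profile list -o json | grep -q '"Name":"kubenet-143736"'; then
        out/minikube-linux-arm64 -p kubenet-143736 logs
    else
        echo "profile kubenet-143736 not found; skipping debug collection"
    fi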

                                                
                                    
TestNetworkPlugins/group/cilium (5.38s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-143736 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-143736

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-143736

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-143736

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-143736

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-143736

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-143736

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-143736

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-143736

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-143736

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-143736

>>> host: /etc/nsswitch.conf:
* Profile "cilium-143736" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-143736"

>>> host: /etc/hosts:
* Profile "cilium-143736" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-143736"

>>> host: /etc/resolv.conf:
* Profile "cilium-143736" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-143736"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-143736

>>> host: crictl pods:
* Profile "cilium-143736" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-143736"

>>> host: crictl containers:
* Profile "cilium-143736" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-143736"

>>> k8s: describe netcat deployment:
error: context "cilium-143736" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-143736" does not exist

>>> k8s: netcat logs:
error: context "cilium-143736" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-143736" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-143736" does not exist

>>> k8s: coredns logs:
error: context "cilium-143736" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-143736" does not exist

>>> k8s: api server logs:
error: context "cilium-143736" does not exist

>>> host: /etc/cni:
* Profile "cilium-143736" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-143736"

>>> host: ip a s:
* Profile "cilium-143736" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-143736"

>>> host: ip r s:
* Profile "cilium-143736" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-143736"

>>> host: iptables-save:
* Profile "cilium-143736" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-143736"

>>> host: iptables table nat:
* Profile "cilium-143736" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-143736"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-143736

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-143736

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-143736" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-143736" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-143736

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-143736

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-143736" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-143736" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-143736" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-143736" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-143736" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-143736" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-143736"

>>> host: kubelet daemon config:
* Profile "cilium-143736" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-143736"

>>> k8s: kubelet logs:
* Profile "cilium-143736" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-143736"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-143736" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-143736"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-143736" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-143736"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21808-293314/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 02 Nov 2025 14:07:13 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-061518
contexts:
- context:
    cluster: pause-061518
    extensions:
    - extension:
        last-update: Sun, 02 Nov 2025 14:07:13 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-061518
  name: pause-061518
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-061518
  user:
    client-certificate: /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/pause-061518/client.crt
    client-key: /home/jenkins/minikube-integration/21808-293314/.minikube/profiles/pause-061518/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-143736

>>> host: docker daemon status:
* Profile "cilium-143736" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-143736"

>>> host: docker daemon config:
* Profile "cilium-143736" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-143736"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-143736" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-143736"

>>> host: docker system info:
* Profile "cilium-143736" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-143736"

>>> host: cri-docker daemon status:
* Profile "cilium-143736" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-143736"

>>> host: cri-docker daemon config:
* Profile "cilium-143736" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-143736"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-143736" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-143736"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-143736" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-143736"

>>> host: cri-dockerd version:
* Profile "cilium-143736" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-143736"

>>> host: containerd daemon status:
* Profile "cilium-143736" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-143736"

>>> host: containerd daemon config:
* Profile "cilium-143736" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-143736"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-143736" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-143736"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-143736" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-143736"

>>> host: containerd config dump:
* Profile "cilium-143736" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-143736"

>>> host: crio daemon status:
* Profile "cilium-143736" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-143736"

>>> host: crio daemon config:
* Profile "cilium-143736" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-143736"

>>> host: /etc/crio:
* Profile "cilium-143736" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-143736"

>>> host: crio config:
* Profile "cilium-143736" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-143736"

----------------------- debugLogs end: cilium-143736 [took: 5.150294596s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-143736" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-143736
--- SKIP: TestNetworkPlugins/group/cilium (5.38s)
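Note: the cilium-143736 debug logs fail for the same reason as kubenet-143736 above; the profile was never started, and the shared kubeconfig still points only at pause-061518. If stale profiles or contexts accumulate between runs, they can be cleared explicitly. A hypothetical cleanup sketch (only the first command appears in the harness output above):

    out/minikube-linux-arm64 delete -p cilium-143736   # per-profile cleanup, as run by helpers_test.go
    kubectl config delete-context cilium-143736        # would fail here, since the context never existed
    out/minikube-linux-arm64 delete --all --purge      # full reset of all local minikube state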

                                                
                                    