Test Report: Docker_Linux_crio 21801

3dc60e2e5dc0007721440fd051e7cba5635b79e7:2025-10-27:42091

Failed tests (37/327)

Order  Failed test  Duration (s)
29 TestAddons/serial/Volcano 0.28
35 TestAddons/parallel/Registry 13.04
36 TestAddons/parallel/RegistryCreds 0.43
37 TestAddons/parallel/Ingress 148.91
38 TestAddons/parallel/InspektorGadget 6.26
39 TestAddons/parallel/MetricsServer 5.32
41 TestAddons/parallel/CSI 32.29
42 TestAddons/parallel/Headlamp 2.96
43 TestAddons/parallel/CloudSpanner 5.29
44 TestAddons/parallel/LocalPath 8.17
45 TestAddons/parallel/NvidiaDevicePlugin 5.26
46 TestAddons/parallel/Yakd 5.26
47 TestAddons/parallel/AmdGpuDevicePlugin 5.26
97 TestFunctional/parallel/ServiceCmdConnect 603.02
114 TestFunctional/parallel/ServiceCmd/DeployApp 600.66
143 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.92
144 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.92
145 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.32
146 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.32
148 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.21
149 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.37
152 TestFunctional/parallel/ServiceCmd/HTTPS 0.55
153 TestFunctional/parallel/ServiceCmd/Format 0.56
154 TestFunctional/parallel/ServiceCmd/URL 0.56
191 TestJSONOutput/pause/Command 2.34
197 TestJSONOutput/unpause/Command 1.77
261 TestPause/serial/Pause 6.62
298 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.76
305 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 2.83
313 TestStartStop/group/old-k8s-version/serial/Pause 7.18
314 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.73
323 TestStartStop/group/embed-certs/serial/Pause 6.47
327 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 3.31
334 TestStartStop/group/no-preload/serial/Pause 6.18
336 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.22
344 TestStartStop/group/newest-cni/serial/Pause 6.34
350 TestStartStop/group/default-k8s-diff-port/serial/Pause 7.64
TestAddons/serial/Volcano (0.28s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-589824 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-589824 addons disable volcano --alsologtostderr -v=1: exit status 11 (278.580664ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1027 18:59:00.542742  366015 out.go:360] Setting OutFile to fd 1 ...
	I1027 18:59:00.543058  366015 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 18:59:00.543068  366015 out.go:374] Setting ErrFile to fd 2...
	I1027 18:59:00.543073  366015 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 18:59:00.543326  366015 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-352833/.minikube/bin
	I1027 18:59:00.543644  366015 mustload.go:65] Loading cluster: addons-589824
	I1027 18:59:00.544024  366015 config.go:182] Loaded profile config "addons-589824": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 18:59:00.544041  366015 addons.go:606] checking whether the cluster is paused
	I1027 18:59:00.544124  366015 config.go:182] Loaded profile config "addons-589824": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 18:59:00.544160  366015 host.go:66] Checking if "addons-589824" exists ...
	I1027 18:59:00.544586  366015 cli_runner.go:164] Run: docker container inspect addons-589824 --format={{.State.Status}}
	I1027 18:59:00.565032  366015 ssh_runner.go:195] Run: systemctl --version
	I1027 18:59:00.565119  366015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-589824
	I1027 18:59:00.585490  366015 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/addons-589824/id_rsa Username:docker}
	I1027 18:59:00.688640  366015 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 18:59:00.688748  366015 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 18:59:00.721510  366015 cri.go:89] found id: "0a17a4745cc1a6104ea6432d9fd60dac6e6abe764b5d1330d69426fa0b74a6ab"
	I1027 18:59:00.721532  366015 cri.go:89] found id: "a30f678907200483df6ff7630d767bc8daa14ce81d7f9088b61ad45ee3d0afab"
	I1027 18:59:00.721536  366015 cri.go:89] found id: "db7343377b38897cf4a8cf603f6e486663fecd5587924e1ed818db6d54bdcce6"
	I1027 18:59:00.721539  366015 cri.go:89] found id: "71e53e748e01fc8c91ffa4fb8b7865bea26bcbe65dcba958949295c6f0037da7"
	I1027 18:59:00.721542  366015 cri.go:89] found id: "56024f3c5df317e559a2fc01d91706e2a21e755612591d33569756c8b235a739"
	I1027 18:59:00.721545  366015 cri.go:89] found id: "ef768854ff28223563c69a32d2834fab10262b7e6a6963c625600582d59b9e51"
	I1027 18:59:00.721548  366015 cri.go:89] found id: "76e187a2847661d9eb59daefd89617bc458e7238cd87c5b6b4e6c6f1884d4826"
	I1027 18:59:00.721551  366015 cri.go:89] found id: "0c23d9067a021958f6e78dae17e3e314bb8f01a59a277d6d231a1c91ac243402"
	I1027 18:59:00.721553  366015 cri.go:89] found id: "6feb37f12d4a362a4be9862cfb4d525092b27f5c8806b5fe7f3e6992e40865b1"
	I1027 18:59:00.721580  366015 cri.go:89] found id: "2dc898f8fa5b3f56f21afaa0584bf9b0ee67ad474e08c141d382bf6352ffb103"
	I1027 18:59:00.721587  366015 cri.go:89] found id: "27f1c94c3f5736bca109359ef14c6315dca30f3a92e432a313912785f638d339"
	I1027 18:59:00.721591  366015 cri.go:89] found id: "b7494b1ab076bec5211fe9aa45d869fd06dce709b51652f81a21756c0087c5dc"
	I1027 18:59:00.721595  366015 cri.go:89] found id: "2f642c7cbe9094287b843be457ec991af2d6a4e3a7c89d0cef2628b88a0df390"
	I1027 18:59:00.721599  366015 cri.go:89] found id: "ca7a93241189c56d1808a8b7fb428d8057429bed2f6554b65716f5aeecd49b88"
	I1027 18:59:00.721603  366015 cri.go:89] found id: "2095fff76306861533792ed7f54dec0997d67f3656557a857ff7af3b00429cda"
	I1027 18:59:00.721609  366015 cri.go:89] found id: "eede6880efbc9e505b955efd78f6cc85e44d1edb5f142fe3df44034a4341a14f"
	I1027 18:59:00.721616  366015 cri.go:89] found id: "ba1ddd191addfbafb743bfd31989a110bd5b0f58f7479075c129e528745e7798"
	I1027 18:59:00.721625  366015 cri.go:89] found id: "abbe027d3dc3b813b338a56e8cabab82e03eb9b112b7b850abb79fefe6d06ad7"
	I1027 18:59:00.721629  366015 cri.go:89] found id: "12e10d7e88fff07d51f12a561be95b0933cdc57cc59e0f478fe8964c53f1806b"
	I1027 18:59:00.721632  366015 cri.go:89] found id: "6d05a2b6be1fb2b8475a215eb50681a592a20257978b9da0091741666c9fa5c6"
	I1027 18:59:00.721642  366015 cri.go:89] found id: "c02f8fc8e6a7392b824780b7cf27bac4f0cee905aafadcc2295bf2775ce85316"
	I1027 18:59:00.721649  366015 cri.go:89] found id: "95468d8526baeb9ed07c582a77c3593017052fb17f3ce84741a67f91794b7400"
	I1027 18:59:00.721652  366015 cri.go:89] found id: "81cd0a11514aba345e443fd708bb0a4b65a29f336aec8643a57037ceeda8aefe"
	I1027 18:59:00.721655  366015 cri.go:89] found id: "f25d173d59b5ba978f27e915fc30ff6e02ab5bba952c2af598b464a59edc1987"
	I1027 18:59:00.721657  366015 cri.go:89] found id: ""
	I1027 18:59:00.721705  366015 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 18:59:00.737863  366015 out.go:203] 
	W1027 18:59:00.739563  366015 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T18:59:00Z" level=error msg="open /run/runc: no such file or directory"
	
	W1027 18:59:00.739596  366015 out.go:285] * 
	W1027 18:59:00.743794  366015 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1027 18:59:00.745621  366015 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-amd64 -p addons-589824 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.28s)
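
Every MK_ADDON_DISABLE_PAUSED failure in this run follows the pattern above: before disabling an addon, minikube probes whether the cluster is paused, and that probe shells out over ssh to `sudo runc list -f json` (the ssh_runner line in the trace). On this crio node the runc state directory /run/runc does not exist, so the probe exits 1 and the disable aborts. A minimal Go sketch of that probe, as an illustrative stand-in rather than minikube's actual code:

	// Illustrative stand-in for the paused-state probe in the trace above:
	// run `sudo runc list -f json` and report which containers are paused.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// runcContainer holds the two fields of `runc list -f json` output we need.
	type runcContainer struct {
		ID     string `json:"id"`
		Status string `json:"status"`
	}

	func pausedContainers() ([]string, error) {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
		if err != nil {
			// The path taken above: exit status 1 with
			// "open /run/runc: no such file or directory".
			return nil, fmt.Errorf("runc list: %w", err)
		}
		var cs []runcContainer
		if err := json.Unmarshal(out, &cs); err != nil {
			return nil, err
		}
		var paused []string
		for _, c := range cs {
			if c.Status == "paused" {
				paused = append(paused, c.ID)
			}
		}
		return paused, nil
	}

	func main() {
		ids, err := pausedContainers()
		if err != nil {
			fmt.Println("check paused failed:", err)
			return
		}
		fmt.Println("paused:", ids)
	}

The Registry and RegistryCreds sections below show the identical error, and the Pause and EnableAddonWhileActive entries in the failure table are likely the same root cause.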

TestAddons/parallel/Registry (13.04s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 3.819351ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-bvh6h" [3922e9b1-ef70-4fce-b650-f88d2755f9ab] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003748031s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-62t66" [05d41077-cfc6-442d-baee-0103823e1b16] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.002862631s
addons_test.go:392: (dbg) Run:  kubectl --context addons-589824 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-589824 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-589824 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (2.559092091s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-589824 ip
2025/10/27 18:59:22 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-589824 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-589824 addons disable registry --alsologtostderr -v=1: exit status 11 (264.203539ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1027 18:59:22.408921  368550 out.go:360] Setting OutFile to fd 1 ...
	I1027 18:59:22.409091  368550 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 18:59:22.409105  368550 out.go:374] Setting ErrFile to fd 2...
	I1027 18:59:22.409111  368550 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 18:59:22.409428  368550 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-352833/.minikube/bin
	I1027 18:59:22.409849  368550 mustload.go:65] Loading cluster: addons-589824
	I1027 18:59:22.410382  368550 config.go:182] Loaded profile config "addons-589824": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 18:59:22.410405  368550 addons.go:606] checking whether the cluster is paused
	I1027 18:59:22.410543  368550 config.go:182] Loaded profile config "addons-589824": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 18:59:22.410567  368550 host.go:66] Checking if "addons-589824" exists ...
	I1027 18:59:22.411170  368550 cli_runner.go:164] Run: docker container inspect addons-589824 --format={{.State.Status}}
	I1027 18:59:22.432492  368550 ssh_runner.go:195] Run: systemctl --version
	I1027 18:59:22.432578  368550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-589824
	I1027 18:59:22.453360  368550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/addons-589824/id_rsa Username:docker}
	I1027 18:59:22.554197  368550 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 18:59:22.554289  368550 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 18:59:22.585112  368550 cri.go:89] found id: "0a17a4745cc1a6104ea6432d9fd60dac6e6abe764b5d1330d69426fa0b74a6ab"
	I1027 18:59:22.585147  368550 cri.go:89] found id: "a30f678907200483df6ff7630d767bc8daa14ce81d7f9088b61ad45ee3d0afab"
	I1027 18:59:22.585153  368550 cri.go:89] found id: "db7343377b38897cf4a8cf603f6e486663fecd5587924e1ed818db6d54bdcce6"
	I1027 18:59:22.585169  368550 cri.go:89] found id: "71e53e748e01fc8c91ffa4fb8b7865bea26bcbe65dcba958949295c6f0037da7"
	I1027 18:59:22.585173  368550 cri.go:89] found id: "56024f3c5df317e559a2fc01d91706e2a21e755612591d33569756c8b235a739"
	I1027 18:59:22.585178  368550 cri.go:89] found id: "ef768854ff28223563c69a32d2834fab10262b7e6a6963c625600582d59b9e51"
	I1027 18:59:22.585181  368550 cri.go:89] found id: "76e187a2847661d9eb59daefd89617bc458e7238cd87c5b6b4e6c6f1884d4826"
	I1027 18:59:22.585183  368550 cri.go:89] found id: "0c23d9067a021958f6e78dae17e3e314bb8f01a59a277d6d231a1c91ac243402"
	I1027 18:59:22.585188  368550 cri.go:89] found id: "6feb37f12d4a362a4be9862cfb4d525092b27f5c8806b5fe7f3e6992e40865b1"
	I1027 18:59:22.585194  368550 cri.go:89] found id: "2dc898f8fa5b3f56f21afaa0584bf9b0ee67ad474e08c141d382bf6352ffb103"
	I1027 18:59:22.585203  368550 cri.go:89] found id: "27f1c94c3f5736bca109359ef14c6315dca30f3a92e432a313912785f638d339"
	I1027 18:59:22.585205  368550 cri.go:89] found id: "b7494b1ab076bec5211fe9aa45d869fd06dce709b51652f81a21756c0087c5dc"
	I1027 18:59:22.585208  368550 cri.go:89] found id: "2f642c7cbe9094287b843be457ec991af2d6a4e3a7c89d0cef2628b88a0df390"
	I1027 18:59:22.585210  368550 cri.go:89] found id: "ca7a93241189c56d1808a8b7fb428d8057429bed2f6554b65716f5aeecd49b88"
	I1027 18:59:22.585213  368550 cri.go:89] found id: "2095fff76306861533792ed7f54dec0997d67f3656557a857ff7af3b00429cda"
	I1027 18:59:22.585217  368550 cri.go:89] found id: "eede6880efbc9e505b955efd78f6cc85e44d1edb5f142fe3df44034a4341a14f"
	I1027 18:59:22.585222  368550 cri.go:89] found id: "ba1ddd191addfbafb743bfd31989a110bd5b0f58f7479075c129e528745e7798"
	I1027 18:59:22.585225  368550 cri.go:89] found id: "abbe027d3dc3b813b338a56e8cabab82e03eb9b112b7b850abb79fefe6d06ad7"
	I1027 18:59:22.585228  368550 cri.go:89] found id: "12e10d7e88fff07d51f12a561be95b0933cdc57cc59e0f478fe8964c53f1806b"
	I1027 18:59:22.585230  368550 cri.go:89] found id: "6d05a2b6be1fb2b8475a215eb50681a592a20257978b9da0091741666c9fa5c6"
	I1027 18:59:22.585235  368550 cri.go:89] found id: "c02f8fc8e6a7392b824780b7cf27bac4f0cee905aafadcc2295bf2775ce85316"
	I1027 18:59:22.585237  368550 cri.go:89] found id: "95468d8526baeb9ed07c582a77c3593017052fb17f3ce84741a67f91794b7400"
	I1027 18:59:22.585240  368550 cri.go:89] found id: "81cd0a11514aba345e443fd708bb0a4b65a29f336aec8643a57037ceeda8aefe"
	I1027 18:59:22.585242  368550 cri.go:89] found id: "f25d173d59b5ba978f27e915fc30ff6e02ab5bba952c2af598b464a59edc1987"
	I1027 18:59:22.585244  368550 cri.go:89] found id: ""
	I1027 18:59:22.585293  368550 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 18:59:22.600447  368550 out.go:203] 
	W1027 18:59:22.601780  368550 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T18:59:22Z" level=error msg="open /run/runc: no such file or directory"
	
	W1027 18:59:22.601809  368550 out.go:285] * 
	W1027 18:59:22.606320  368550 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1027 18:59:22.607657  368550 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-amd64 -p addons-589824 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (13.04s)

TestAddons/parallel/RegistryCreds (0.43s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.089977ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-589824
addons_test.go:332: (dbg) Run:  kubectl --context addons-589824 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-589824 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-589824 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (260.084163ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1027 18:59:23.281344  368754 out.go:360] Setting OutFile to fd 1 ...
	I1027 18:59:23.281621  368754 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 18:59:23.281630  368754 out.go:374] Setting ErrFile to fd 2...
	I1027 18:59:23.281635  368754 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 18:59:23.281831  368754 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-352833/.minikube/bin
	I1027 18:59:23.282100  368754 mustload.go:65] Loading cluster: addons-589824
	I1027 18:59:23.282464  368754 config.go:182] Loaded profile config "addons-589824": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 18:59:23.282487  368754 addons.go:606] checking whether the cluster is paused
	I1027 18:59:23.282569  368754 config.go:182] Loaded profile config "addons-589824": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 18:59:23.282585  368754 host.go:66] Checking if "addons-589824" exists ...
	I1027 18:59:23.282977  368754 cli_runner.go:164] Run: docker container inspect addons-589824 --format={{.State.Status}}
	I1027 18:59:23.301048  368754 ssh_runner.go:195] Run: systemctl --version
	I1027 18:59:23.301105  368754 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-589824
	I1027 18:59:23.318977  368754 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/addons-589824/id_rsa Username:docker}
	I1027 18:59:23.418571  368754 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 18:59:23.418651  368754 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 18:59:23.453838  368754 cri.go:89] found id: "0a17a4745cc1a6104ea6432d9fd60dac6e6abe764b5d1330d69426fa0b74a6ab"
	I1027 18:59:23.453864  368754 cri.go:89] found id: "a30f678907200483df6ff7630d767bc8daa14ce81d7f9088b61ad45ee3d0afab"
	I1027 18:59:23.453870  368754 cri.go:89] found id: "db7343377b38897cf4a8cf603f6e486663fecd5587924e1ed818db6d54bdcce6"
	I1027 18:59:23.453875  368754 cri.go:89] found id: "71e53e748e01fc8c91ffa4fb8b7865bea26bcbe65dcba958949295c6f0037da7"
	I1027 18:59:23.453879  368754 cri.go:89] found id: "56024f3c5df317e559a2fc01d91706e2a21e755612591d33569756c8b235a739"
	I1027 18:59:23.453883  368754 cri.go:89] found id: "ef768854ff28223563c69a32d2834fab10262b7e6a6963c625600582d59b9e51"
	I1027 18:59:23.453887  368754 cri.go:89] found id: "76e187a2847661d9eb59daefd89617bc458e7238cd87c5b6b4e6c6f1884d4826"
	I1027 18:59:23.453891  368754 cri.go:89] found id: "0c23d9067a021958f6e78dae17e3e314bb8f01a59a277d6d231a1c91ac243402"
	I1027 18:59:23.453895  368754 cri.go:89] found id: "6feb37f12d4a362a4be9862cfb4d525092b27f5c8806b5fe7f3e6992e40865b1"
	I1027 18:59:23.453915  368754 cri.go:89] found id: "2dc898f8fa5b3f56f21afaa0584bf9b0ee67ad474e08c141d382bf6352ffb103"
	I1027 18:59:23.453923  368754 cri.go:89] found id: "27f1c94c3f5736bca109359ef14c6315dca30f3a92e432a313912785f638d339"
	I1027 18:59:23.453927  368754 cri.go:89] found id: "b7494b1ab076bec5211fe9aa45d869fd06dce709b51652f81a21756c0087c5dc"
	I1027 18:59:23.453931  368754 cri.go:89] found id: "2f642c7cbe9094287b843be457ec991af2d6a4e3a7c89d0cef2628b88a0df390"
	I1027 18:59:23.453935  368754 cri.go:89] found id: "ca7a93241189c56d1808a8b7fb428d8057429bed2f6554b65716f5aeecd49b88"
	I1027 18:59:23.453939  368754 cri.go:89] found id: "2095fff76306861533792ed7f54dec0997d67f3656557a857ff7af3b00429cda"
	I1027 18:59:23.453948  368754 cri.go:89] found id: "eede6880efbc9e505b955efd78f6cc85e44d1edb5f142fe3df44034a4341a14f"
	I1027 18:59:23.453955  368754 cri.go:89] found id: "ba1ddd191addfbafb743bfd31989a110bd5b0f58f7479075c129e528745e7798"
	I1027 18:59:23.453961  368754 cri.go:89] found id: "abbe027d3dc3b813b338a56e8cabab82e03eb9b112b7b850abb79fefe6d06ad7"
	I1027 18:59:23.453965  368754 cri.go:89] found id: "12e10d7e88fff07d51f12a561be95b0933cdc57cc59e0f478fe8964c53f1806b"
	I1027 18:59:23.453969  368754 cri.go:89] found id: "6d05a2b6be1fb2b8475a215eb50681a592a20257978b9da0091741666c9fa5c6"
	I1027 18:59:23.453973  368754 cri.go:89] found id: "c02f8fc8e6a7392b824780b7cf27bac4f0cee905aafadcc2295bf2775ce85316"
	I1027 18:59:23.453976  368754 cri.go:89] found id: "95468d8526baeb9ed07c582a77c3593017052fb17f3ce84741a67f91794b7400"
	I1027 18:59:23.453980  368754 cri.go:89] found id: "81cd0a11514aba345e443fd708bb0a4b65a29f336aec8643a57037ceeda8aefe"
	I1027 18:59:23.453990  368754 cri.go:89] found id: "f25d173d59b5ba978f27e915fc30ff6e02ab5bba952c2af598b464a59edc1987"
	I1027 18:59:23.453997  368754 cri.go:89] found id: ""
	I1027 18:59:23.454046  368754 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 18:59:23.471611  368754 out.go:203] 
	W1027 18:59:23.473383  368754 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T18:59:23Z" level=error msg="open /run/runc: no such file or directory"
	
	W1027 18:59:23.473413  368754 out.go:285] * 
	W1027 18:59:23.478212  368754 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1027 18:59:23.480005  368754 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-amd64 -p addons-589824 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.43s)

TestAddons/parallel/Ingress (148.91s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-589824 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-589824 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-589824 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [4e8f6ee2-441e-480b-93e3-44362001a683] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [4e8f6ee2-441e-480b-93e3-44362001a683] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003604875s
I1027 18:59:19.063910  356415 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-589824 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-589824 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m16.022683615s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
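
For context, exit status 28 from the ssh'd curl above is curl's CURLE_OPERATION_TIMEDOUT: the request to the ingress controller on port 80 never completed. A rough Go equivalent of the probe, illustrative only (the test actually runs curl inside the node over ssh):

	// Illustrative Host-header probe mirroring the failing curl: GET
	// http://127.0.0.1/ with Host: nginx.example.com, bounded by a timeout.
	package main

	import (
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{Timeout: 30 * time.Second}
		req, err := http.NewRequest("GET", "http://127.0.0.1/", nil)
		if err != nil {
			panic(err)
		}
		req.Host = "nginx.example.com" // matched against the Ingress rule under test
		resp, err := client.Do(req)
		if err != nil {
			fmt.Println("probe failed:", err) // analogous to curl exit 28 on timeout
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println(resp.Status, len(body), "bytes")
	}
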
addons_test.go:288: (dbg) Run:  kubectl --context addons-589824 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-589824 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-589824
helpers_test.go:243: (dbg) docker inspect addons-589824:

-- stdout --
	[
	    {
	        "Id": "5e8c54cb73f3e55728ce78fff23ac7684832dac9f004ce7ccac5dd5b0c7d3b97",
	        "Created": "2025-10-27T18:56:51.416282482Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 358388,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-27T18:56:51.459857133Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/5e8c54cb73f3e55728ce78fff23ac7684832dac9f004ce7ccac5dd5b0c7d3b97/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5e8c54cb73f3e55728ce78fff23ac7684832dac9f004ce7ccac5dd5b0c7d3b97/hostname",
	        "HostsPath": "/var/lib/docker/containers/5e8c54cb73f3e55728ce78fff23ac7684832dac9f004ce7ccac5dd5b0c7d3b97/hosts",
	        "LogPath": "/var/lib/docker/containers/5e8c54cb73f3e55728ce78fff23ac7684832dac9f004ce7ccac5dd5b0c7d3b97/5e8c54cb73f3e55728ce78fff23ac7684832dac9f004ce7ccac5dd5b0c7d3b97-json.log",
	        "Name": "/addons-589824",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-589824:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-589824",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5e8c54cb73f3e55728ce78fff23ac7684832dac9f004ce7ccac5dd5b0c7d3b97",
	                "LowerDir": "/var/lib/docker/overlay2/7a1c62e1076931169f4e0035676ea65cefb8158f580ae1df1de805bd9d2f5b0e-init/diff:/var/lib/docker/overlay2/71b61ec94610a35f2d924dec358052d4c154c36b3fe219802f60246ca2dc7f45/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7a1c62e1076931169f4e0035676ea65cefb8158f580ae1df1de805bd9d2f5b0e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7a1c62e1076931169f4e0035676ea65cefb8158f580ae1df1de805bd9d2f5b0e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7a1c62e1076931169f4e0035676ea65cefb8158f580ae1df1de805bd9d2f5b0e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-589824",
	                "Source": "/var/lib/docker/volumes/addons-589824/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-589824",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-589824",
	                "name.minikube.sigs.k8s.io": "addons-589824",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1734ac03b580fc3c16a76cfde6d1b73cbf9f1cc3cf72fde094a751e347b7a8f2",
	            "SandboxKey": "/var/run/docker/netns/1734ac03b580",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33144"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33143"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-589824": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "da:dd:e6:c9:41:47",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c1d8cd130a9fb2cf4b671833f0a9d4c3a761289bf1eb7fb6eccc22d089789656",
	                    "EndpointID": "a98e1ca98456234c857bf29aa3881b3e59fdeea16a1f3a385e5d07683786423f",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-589824",
	                        "5e8c54cb73f3"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
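
The 22/tcp HostPort in this inspect output (33140) is the endpoint the ssh clients in the earlier traces connect to; the cli_runner lines extract it with a Go template passed to docker inspect. A small sketch of the same extraction, assuming only that the docker CLI is on PATH:

	// Read the host port mapped to the node container's SSH port, using the
	// same inspect template that appears in the cli_runner log lines.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func hostSSHPort(container string) (string, error) {
		tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
		if err != nil {
			return "", fmt.Errorf("docker inspect: %w", err)
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		port, err := hostSSHPort("addons-589824")
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("ssh endpoint: 127.0.0.1:" + port) // 33140 in this run
	}
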
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-589824 -n addons-589824
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-589824 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-589824 logs -n 25: (1.292894265s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ --download-only -p binary-mirror-394940 --alsologtostderr --binary-mirror http://127.0.0.1:39569 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-394940 │ jenkins │ v1.37.0 │ 27 Oct 25 18:56 UTC │                     │
	│ delete  │ -p binary-mirror-394940                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-394940 │ jenkins │ v1.37.0 │ 27 Oct 25 18:56 UTC │ 27 Oct 25 18:56 UTC │
	│ addons  │ enable dashboard -p addons-589824                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-589824        │ jenkins │ v1.37.0 │ 27 Oct 25 18:56 UTC │                     │
	│ addons  │ disable dashboard -p addons-589824                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-589824        │ jenkins │ v1.37.0 │ 27 Oct 25 18:56 UTC │                     │
	│ start   │ -p addons-589824 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-589824        │ jenkins │ v1.37.0 │ 27 Oct 25 18:56 UTC │ 27 Oct 25 18:59 UTC │
	│ addons  │ addons-589824 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-589824        │ jenkins │ v1.37.0 │ 27 Oct 25 18:59 UTC │                     │
	│ addons  │ addons-589824 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-589824        │ jenkins │ v1.37.0 │ 27 Oct 25 18:59 UTC │                     │
	│ addons  │ enable headlamp -p addons-589824 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-589824        │ jenkins │ v1.37.0 │ 27 Oct 25 18:59 UTC │                     │
	│ addons  │ addons-589824 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-589824        │ jenkins │ v1.37.0 │ 27 Oct 25 18:59 UTC │                     │
	│ addons  │ addons-589824 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-589824        │ jenkins │ v1.37.0 │ 27 Oct 25 18:59 UTC │                     │
	│ addons  │ addons-589824 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-589824        │ jenkins │ v1.37.0 │ 27 Oct 25 18:59 UTC │                     │
	│ ssh     │ addons-589824 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-589824        │ jenkins │ v1.37.0 │ 27 Oct 25 18:59 UTC │                     │
	│ addons  │ addons-589824 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-589824        │ jenkins │ v1.37.0 │ 27 Oct 25 18:59 UTC │                     │
	│ ip      │ addons-589824 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-589824        │ jenkins │ v1.37.0 │ 27 Oct 25 18:59 UTC │ 27 Oct 25 18:59 UTC │
	│ addons  │ addons-589824 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-589824        │ jenkins │ v1.37.0 │ 27 Oct 25 18:59 UTC │                     │
	│ addons  │ addons-589824 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-589824        │ jenkins │ v1.37.0 │ 27 Oct 25 18:59 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-589824                                                                                                                                                                                                                                                                                                                                                                                           │ addons-589824        │ jenkins │ v1.37.0 │ 27 Oct 25 18:59 UTC │ 27 Oct 25 18:59 UTC │
	│ addons  │ addons-589824 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-589824        │ jenkins │ v1.37.0 │ 27 Oct 25 18:59 UTC │                     │
	│ ssh     │ addons-589824 ssh cat /opt/local-path-provisioner/pvc-d2b921e4-c965-436a-9594-13b4f6318e7a_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-589824        │ jenkins │ v1.37.0 │ 27 Oct 25 18:59 UTC │ 27 Oct 25 18:59 UTC │
	│ addons  │ addons-589824 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-589824        │ jenkins │ v1.37.0 │ 27 Oct 25 18:59 UTC │                     │
	│ addons  │ addons-589824 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-589824        │ jenkins │ v1.37.0 │ 27 Oct 25 18:59 UTC │                     │
	│ addons  │ addons-589824 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-589824        │ jenkins │ v1.37.0 │ 27 Oct 25 18:59 UTC │                     │
	│ addons  │ addons-589824 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-589824        │ jenkins │ v1.37.0 │ 27 Oct 25 18:59 UTC │                     │
	│ addons  │ addons-589824 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-589824        │ jenkins │ v1.37.0 │ 27 Oct 25 18:59 UTC │                     │
	│ ip      │ addons-589824 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-589824        │ jenkins │ v1.37.0 │ 27 Oct 25 19:01 UTC │ 27 Oct 25 19:01 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
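The disable commands in the table all share one shape; a minimal shell sketch (profile name and binary path taken from the rows above) that replays the same teardown sequence:

	# Replay the addon teardown from the table; `|| true` keeps the loop
	# going when an addon is already disabled or unsupported on crio.
	for a in yakd registry nvidia-device-plugin cloud-spanner \
	         storage-provisioner-rancher inspektor-gadget \
	         volumesnapshots csi-hostpath-driver; do
	  out/minikube-linux-amd64 -p addons-589824 addons disable "$a" \
	    --alsologtostderr -v=1 || true
	done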
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 18:56:27
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 18:56:27.976251  357750 out.go:360] Setting OutFile to fd 1 ...
	I1027 18:56:27.976510  357750 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 18:56:27.976519  357750 out.go:374] Setting ErrFile to fd 2...
	I1027 18:56:27.976523  357750 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 18:56:27.976745  357750 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-352833/.minikube/bin
	I1027 18:56:27.977380  357750 out.go:368] Setting JSON to false
	I1027 18:56:27.978365  357750 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5937,"bootTime":1761585451,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1027 18:56:27.978492  357750 start.go:141] virtualization: kvm guest
	I1027 18:56:27.980773  357750 out.go:179] * [addons-589824] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1027 18:56:27.982595  357750 notify.go:220] Checking for updates...
	I1027 18:56:27.982657  357750 out.go:179]   - MINIKUBE_LOCATION=21801
	I1027 18:56:27.984498  357750 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 18:56:27.986301  357750 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21801-352833/kubeconfig
	I1027 18:56:27.988002  357750 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21801-352833/.minikube
	I1027 18:56:27.989590  357750 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1027 18:56:27.991298  357750 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 18:56:27.992936  357750 driver.go:421] Setting default libvirt URI to qemu:///system
	I1027 18:56:28.019056  357750 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1027 18:56:28.019217  357750 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 18:56:28.081197  357750 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-10-27 18:56:28.069443711 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 18:56:28.081316  357750 docker.go:318] overlay module found
	I1027 18:56:28.083328  357750 out.go:179] * Using the docker driver based on user configuration
	I1027 18:56:28.084803  357750 start.go:305] selected driver: docker
	I1027 18:56:28.084825  357750 start.go:925] validating driver "docker" against <nil>
	I1027 18:56:28.084840  357750 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 18:56:28.085479  357750 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 18:56:28.142806  357750 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-10-27 18:56:28.131806595 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 18:56:28.143012  357750 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1027 18:56:28.143307  357750 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 18:56:28.145181  357750 out.go:179] * Using Docker driver with root privileges
	I1027 18:56:28.146426  357750 cni.go:84] Creating CNI manager for ""
	I1027 18:56:28.146526  357750 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 18:56:28.146543  357750 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1027 18:56:28.146629  357750 start.go:349] cluster config:
	{Name:addons-589824 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-589824 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 18:56:28.148061  357750 out.go:179] * Starting "addons-589824" primary control-plane node in "addons-589824" cluster
	I1027 18:56:28.149190  357750 cache.go:123] Beginning downloading kic base image for docker with crio
	I1027 18:56:28.150597  357750 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1027 18:56:28.151677  357750 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 18:56:28.151752  357750 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21801-352833/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1027 18:56:28.151768  357750 cache.go:58] Caching tarball of preloaded images
	I1027 18:56:28.151807  357750 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1027 18:56:28.151888  357750 preload.go:233] Found /home/jenkins/minikube-integration/21801-352833/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1027 18:56:28.151901  357750 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1027 18:56:28.152336  357750 profile.go:143] Saving config to /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/addons-589824/config.json ...
	I1027 18:56:28.152375  357750 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/addons-589824/config.json: {Name:mk83a19f7e07d3485c6fbc0c6bc6309f2d56d02c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 18:56:28.170858  357750 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1027 18:56:28.171022  357750 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory
	I1027 18:56:28.171043  357750 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory, skipping pull
	I1027 18:56:28.171050  357750 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in cache, skipping pull
	I1027 18:56:28.171057  357750 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 as a tarball
	I1027 18:56:28.171064  357750 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from local cache
	I1027 18:56:40.152523  357750 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from cached tarball
	I1027 18:56:40.152564  357750 cache.go:232] Successfully downloaded all kic artifacts
	I1027 18:56:40.152636  357750 start.go:360] acquireMachinesLock for addons-589824: {Name:mk5322ac57c0e3174bcd3aab61f07a516429abf5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 18:56:40.152775  357750 start.go:364] duration metric: took 108.825µs to acquireMachinesLock for "addons-589824"
	I1027 18:56:40.152811  357750 start.go:93] Provisioning new machine with config: &{Name:addons-589824 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-589824 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 18:56:40.152927  357750 start.go:125] createHost starting for "" (driver="docker")
	I1027 18:56:40.155179  357750 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1027 18:56:40.155479  357750 start.go:159] libmachine.API.Create for "addons-589824" (driver="docker")
	I1027 18:56:40.155521  357750 client.go:168] LocalClient.Create starting
	I1027 18:56:40.155691  357750 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca.pem
	I1027 18:56:40.271089  357750 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/cert.pem
	I1027 18:56:40.549554  357750 cli_runner.go:164] Run: docker network inspect addons-589824 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1027 18:56:40.568432  357750 cli_runner.go:211] docker network inspect addons-589824 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1027 18:56:40.568619  357750 network_create.go:284] running [docker network inspect addons-589824] to gather additional debugging logs...
	I1027 18:56:40.568660  357750 cli_runner.go:164] Run: docker network inspect addons-589824
	W1027 18:56:40.587026  357750 cli_runner.go:211] docker network inspect addons-589824 returned with exit code 1
	I1027 18:56:40.587063  357750 network_create.go:287] error running [docker network inspect addons-589824]: docker network inspect addons-589824: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-589824 not found
	I1027 18:56:40.587097  357750 network_create.go:289] output of [docker network inspect addons-589824]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-589824 not found
	
	** /stderr **
	I1027 18:56:40.587261  357750 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 18:56:40.606459  357750 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002018e80}
	I1027 18:56:40.606499  357750 network_create.go:124] attempt to create docker network addons-589824 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1027 18:56:40.606549  357750 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-589824 addons-589824
	I1027 18:56:40.666682  357750 network_create.go:108] docker network addons-589824 192.168.49.0/24 created
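	The two steps above are an inspect-then-create pattern; a sketch of the same flow by hand, with the subnet, gateway and MTU values taken from the log:
	  # Create the profile network only if the inspect fails (the exit
	  # code 1 above meant "network not found").
	  docker network inspect addons-589824 >/dev/null 2>&1 ||
	    docker network create --driver=bridge \
	      --subnet=192.168.49.0/24 --gateway=192.168.49.1 \
	      -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
	      --label=created_by.minikube.sigs.k8s.io=true \
	      --label=name.minikube.sigs.k8s.io=addons-589824 \
	      addons-589824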
	I1027 18:56:40.666738  357750 kic.go:121] calculated static IP "192.168.49.2" for the "addons-589824" container
	I1027 18:56:40.666843  357750 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1027 18:56:40.683809  357750 cli_runner.go:164] Run: docker volume create addons-589824 --label name.minikube.sigs.k8s.io=addons-589824 --label created_by.minikube.sigs.k8s.io=true
	I1027 18:56:40.704325  357750 oci.go:103] Successfully created a docker volume addons-589824
	I1027 18:56:40.704419  357750 cli_runner.go:164] Run: docker run --rm --name addons-589824-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-589824 --entrypoint /usr/bin/test -v addons-589824:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1027 18:56:46.988977  357750 cli_runner.go:217] Completed: docker run --rm --name addons-589824-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-589824 --entrypoint /usr/bin/test -v addons-589824:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib: (6.284496545s)
	I1027 18:56:46.989016  357750 oci.go:107] Successfully prepared a docker volume addons-589824
	I1027 18:56:46.989049  357750 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 18:56:46.989077  357750 kic.go:194] Starting extracting preloaded images to volume ...
	I1027 18:56:46.989155  357750 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21801-352833/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-589824:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1027 18:56:51.340410  357750 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21801-352833/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-589824:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.351201594s)
	I1027 18:56:51.340459  357750 kic.go:203] duration metric: took 4.351378042s to extract preloaded images to volume ...
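	The preload steps above in plain shell, with the tarball and kicbase paths copied from the log into variables for readability (the variable names are illustrative, not from the log):
	  PRELOAD=/home/jenkins/minikube-integration/21801-352833/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	  KICBASE=gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	  # Create the named volume, then let a throwaway kicbase container
	  # untar the preloaded images into it (about 4.4s in this run).
	  docker volume create addons-589824 \
	    --label name.minikube.sigs.k8s.io=addons-589824 \
	    --label created_by.minikube.sigs.k8s.io=true
	  docker run --rm --entrypoint /usr/bin/tar \
	    -v "$PRELOAD:/preloaded.tar:ro" -v addons-589824:/extractDir \
	    "$KICBASE" -I lz4 -xf /preloaded.tar -C /extractDir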
	W1027 18:56:51.340557  357750 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1027 18:56:51.340590  357750 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1027 18:56:51.340634  357750 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1027 18:56:51.399273  357750 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-589824 --name addons-589824 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-589824 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-589824 --network addons-589824 --ip 192.168.49.2 --volume addons-589824:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1027 18:56:51.685323  357750 cli_runner.go:164] Run: docker container inspect addons-589824 --format={{.State.Running}}
	I1027 18:56:51.704229  357750 cli_runner.go:164] Run: docker container inspect addons-589824 --format={{.State.Status}}
	I1027 18:56:51.723106  357750 cli_runner.go:164] Run: docker exec addons-589824 stat /var/lib/dpkg/alternatives/iptables
	I1027 18:56:51.775121  357750 oci.go:144] the created container "addons-589824" has a running status.
	I1027 18:56:51.775161  357750 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21801-352833/.minikube/machines/addons-589824/id_rsa...
	I1027 18:56:52.482837  357750 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21801-352833/.minikube/machines/addons-589824/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1027 18:56:52.509363  357750 cli_runner.go:164] Run: docker container inspect addons-589824 --format={{.State.Status}}
	I1027 18:56:52.528091  357750 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1027 18:56:52.528115  357750 kic_runner.go:114] Args: [docker exec --privileged addons-589824 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1027 18:56:52.579991  357750 cli_runner.go:164] Run: docker container inspect addons-589824 --format={{.State.Status}}
	I1027 18:56:52.598419  357750 machine.go:93] provisionDockerMachine start ...
	I1027 18:56:52.598547  357750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-589824
	I1027 18:56:52.617245  357750 main.go:141] libmachine: Using SSH client type: native
	I1027 18:56:52.617589  357750 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33140 <nil> <nil>}
	I1027 18:56:52.617610  357750 main.go:141] libmachine: About to run SSH command:
	hostname
	I1027 18:56:52.760584  357750 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-589824
	
	I1027 18:56:52.760616  357750 ubuntu.go:182] provisioning hostname "addons-589824"
	I1027 18:56:52.760684  357750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-589824
	I1027 18:56:52.779752  357750 main.go:141] libmachine: Using SSH client type: native
	I1027 18:56:52.780051  357750 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33140 <nil> <nil>}
	I1027 18:56:52.780074  357750 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-589824 && echo "addons-589824" | sudo tee /etc/hostname
	I1027 18:56:52.933129  357750 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-589824
	
	I1027 18:56:52.933224  357750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-589824
	I1027 18:56:52.951396  357750 main.go:141] libmachine: Using SSH client type: native
	I1027 18:56:52.951622  357750 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33140 <nil> <nil>}
	I1027 18:56:52.951640  357750 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-589824' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-589824/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-589824' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1027 18:56:53.094170  357750 main.go:141] libmachine: SSH cmd err, output: <nil>: 
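	Every SSH command in this section goes through the published host port (33140 in this run) using the machine key generated earlier; a sketch of reaching the node the same way by hand, where MINIKUBE_HOME stands in for the .minikube directory in the log paths:
	  ssh -o StrictHostKeyChecking=no -p 33140 \
	    -i "$MINIKUBE_HOME/machines/addons-589824/id_rsa" \
	    docker@127.0.0.1 hostname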
	I1027 18:56:53.094204  357750 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21801-352833/.minikube CaCertPath:/home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21801-352833/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21801-352833/.minikube}
	I1027 18:56:53.094308  357750 ubuntu.go:190] setting up certificates
	I1027 18:56:53.094327  357750 provision.go:84] configureAuth start
	I1027 18:56:53.094397  357750 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-589824
	I1027 18:56:53.113130  357750 provision.go:143] copyHostCerts
	I1027 18:56:53.113230  357750 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21801-352833/.minikube/ca.pem (1078 bytes)
	I1027 18:56:53.113362  357750 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21801-352833/.minikube/cert.pem (1123 bytes)
	I1027 18:56:53.113425  357750 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21801-352833/.minikube/key.pem (1679 bytes)
	I1027 18:56:53.113481  357750 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21801-352833/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca-key.pem org=jenkins.addons-589824 san=[127.0.0.1 192.168.49.2 addons-589824 localhost minikube]
	I1027 18:56:53.306978  357750 provision.go:177] copyRemoteCerts
	I1027 18:56:53.307052  357750 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1027 18:56:53.307091  357750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-589824
	I1027 18:56:53.326763  357750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/addons-589824/id_rsa Username:docker}
	I1027 18:56:53.430205  357750 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1027 18:56:53.450230  357750 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1027 18:56:53.467768  357750 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1027 18:56:53.486406  357750 provision.go:87] duration metric: took 392.059607ms to configureAuth
	I1027 18:56:53.486438  357750 ubuntu.go:206] setting minikube options for container-runtime
	I1027 18:56:53.486604  357750 config.go:182] Loaded profile config "addons-589824": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 18:56:53.486704  357750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-589824
	I1027 18:56:53.504933  357750 main.go:141] libmachine: Using SSH client type: native
	I1027 18:56:53.505191  357750 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33140 <nil> <nil>}
	I1027 18:56:53.505211  357750 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1027 18:56:53.762641  357750 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1027 18:56:53.762670  357750 machine.go:96] duration metric: took 1.164205012s to provisionDockerMachine
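	The provisioning step just above writes a sysconfig drop-in and bounces crio; a quick hedged check that the option stuck, run through the same profile binary (assumes the cluster is up):
	  out/minikube-linux-amd64 -p addons-589824 ssh -- cat /etc/sysconfig/crio.minikube
	  out/minikube-linux-amd64 -p addons-589824 ssh -- systemctl is-active crio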
	I1027 18:56:53.762684  357750 client.go:171] duration metric: took 13.607151259s to LocalClient.Create
	I1027 18:56:53.762709  357750 start.go:167] duration metric: took 13.607231373s to libmachine.API.Create "addons-589824"
	I1027 18:56:53.762719  357750 start.go:293] postStartSetup for "addons-589824" (driver="docker")
	I1027 18:56:53.762731  357750 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1027 18:56:53.762790  357750 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1027 18:56:53.762830  357750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-589824
	I1027 18:56:53.781365  357750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/addons-589824/id_rsa Username:docker}
	I1027 18:56:53.885330  357750 ssh_runner.go:195] Run: cat /etc/os-release
	I1027 18:56:53.889373  357750 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1027 18:56:53.889404  357750 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1027 18:56:53.889417  357750 filesync.go:126] Scanning /home/jenkins/minikube-integration/21801-352833/.minikube/addons for local assets ...
	I1027 18:56:53.889476  357750 filesync.go:126] Scanning /home/jenkins/minikube-integration/21801-352833/.minikube/files for local assets ...
	I1027 18:56:53.889499  357750 start.go:296] duration metric: took 126.774101ms for postStartSetup
	I1027 18:56:53.889796  357750 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-589824
	I1027 18:56:53.908118  357750 profile.go:143] Saving config to /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/addons-589824/config.json ...
	I1027 18:56:53.908437  357750 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 18:56:53.908484  357750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-589824
	I1027 18:56:53.926041  357750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/addons-589824/id_rsa Username:docker}
	I1027 18:56:54.024696  357750 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1027 18:56:54.029638  357750 start.go:128] duration metric: took 13.876689788s to createHost
	I1027 18:56:54.029730  357750 start.go:83] releasing machines lock for "addons-589824", held for 13.876878809s
	I1027 18:56:54.029835  357750 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-589824
	I1027 18:56:54.047873  357750 ssh_runner.go:195] Run: cat /version.json
	I1027 18:56:54.047905  357750 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1027 18:56:54.047923  357750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-589824
	I1027 18:56:54.048001  357750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-589824
	I1027 18:56:54.067393  357750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/addons-589824/id_rsa Username:docker}
	I1027 18:56:54.067666  357750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/addons-589824/id_rsa Username:docker}
	I1027 18:56:54.219891  357750 ssh_runner.go:195] Run: systemctl --version
	I1027 18:56:54.227001  357750 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1027 18:56:54.265699  357750 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1027 18:56:54.270727  357750 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1027 18:56:54.270808  357750 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1027 18:56:54.299409  357750 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1027 18:56:54.299437  357750 start.go:495] detecting cgroup driver to use...
	I1027 18:56:54.299475  357750 detect.go:190] detected "systemd" cgroup driver on host os
	I1027 18:56:54.299535  357750 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1027 18:56:54.319330  357750 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1027 18:56:54.332407  357750 docker.go:218] disabling cri-docker service (if available) ...
	I1027 18:56:54.332468  357750 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1027 18:56:54.349634  357750 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1027 18:56:54.368222  357750 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1027 18:56:54.451890  357750 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1027 18:56:54.544707  357750 docker.go:234] disabling docker service ...
	I1027 18:56:54.544772  357750 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1027 18:56:54.564677  357750 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1027 18:56:54.578425  357750 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1027 18:56:54.664330  357750 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1027 18:56:54.751537  357750 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
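	Condensing the service shuffle above: cri-docker and docker are stopped, disabled and masked so crio alone owns the CRI socket. The same hand-off as three commands (a sketch; the log runs them one unit at a time, which tolerates units that are absent):
	  sudo systemctl stop -f cri-docker.socket cri-docker.service docker.socket docker.service
	  sudo systemctl disable cri-docker.socket docker.socket
	  sudo systemctl mask cri-docker.service docker.service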
	I1027 18:56:54.765429  357750 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1027 18:56:54.780905  357750 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1027 18:56:54.780984  357750 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 18:56:54.792531  357750 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1027 18:56:54.792606  357750 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 18:56:54.802483  357750 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 18:56:54.812394  357750 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 18:56:54.822074  357750 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1027 18:56:54.831168  357750 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 18:56:54.840833  357750 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 18:56:54.855842  357750 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 18:56:54.865391  357750 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1027 18:56:54.873511  357750 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1027 18:56:54.881518  357750 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 18:56:54.961828  357750 ssh_runner.go:195] Run: sudo systemctl restart crio
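	The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf in place before the restart; a one-line sketch to confirm the settings they target:
	  sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	    /etc/crio/crio.conf.d/02-crio.conf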
	I1027 18:56:55.073756  357750 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1027 18:56:55.073828  357750 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1027 18:56:55.078174  357750 start.go:563] Will wait 60s for crictl version
	I1027 18:56:55.078228  357750 ssh_runner.go:195] Run: which crictl
	I1027 18:56:55.082359  357750 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1027 18:56:55.110435  357750 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1027 18:56:55.110530  357750 ssh_runner.go:195] Run: crio --version
	I1027 18:56:55.139360  357750 ssh_runner.go:195] Run: crio --version
	I1027 18:56:55.169621  357750 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1027 18:56:55.171067  357750 cli_runner.go:164] Run: docker network inspect addons-589824 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 18:56:55.189273  357750 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1027 18:56:55.193853  357750 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 18:56:55.205241  357750 kubeadm.go:883] updating cluster {Name:addons-589824 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-589824 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1027 18:56:55.205421  357750 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 18:56:55.205479  357750 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 18:56:55.237795  357750 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 18:56:55.237819  357750 crio.go:433] Images already preloaded, skipping extraction
	I1027 18:56:55.237866  357750 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 18:56:55.265648  357750 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 18:56:55.265671  357750 cache_images.go:85] Images are preloaded, skipping loading
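	Both crictl calls above return a fully preloaded image list; the same check by hand (jq assumed available, which the log does not show):
	  sudo crictl images --output json | jq '.images | length'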
	I1027 18:56:55.265680  357750 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1027 18:56:55.265769  357750 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-589824 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-589824 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
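	The unit snippet above is rendered in memory and lands on disk as a systemd drop-in (see the 10-kubeadm.conf scp a few lines down); once daemon-reload has run, the effective command line can be confirmed with:
	  systemctl cat kubelet | grep -A1 '^ExecStart='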
	I1027 18:56:55.265839  357750 ssh_runner.go:195] Run: crio config
	I1027 18:56:55.315863  357750 cni.go:84] Creating CNI manager for ""
	I1027 18:56:55.315894  357750 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 18:56:55.315923  357750 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1027 18:56:55.315955  357750 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-589824 NodeName:addons-589824 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1027 18:56:55.316131  357750 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-589824"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1027 18:56:55.316251  357750 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1027 18:56:55.325119  357750 binaries.go:44] Found k8s binaries, skipping transfer
	I1027 18:56:55.325209  357750 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1027 18:56:55.333671  357750 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1027 18:56:55.347688  357750 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1027 18:56:55.365313  357750 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
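	The rendered kubeadm config shown above is what lands in /var/tmp/minikube/kubeadm.yaml.new (2,209 bytes). A hedged sketch of sanity-checking it with kubeadm's own validator before init, using the binaries directory found above:
	  sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	    --config /var/tmp/minikube/kubeadm.yaml.new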
	I1027 18:56:55.379065  357750 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1027 18:56:55.383214  357750 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 18:56:55.393678  357750 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 18:56:55.475622  357750 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 18:56:55.500012  357750 certs.go:69] Setting up /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/addons-589824 for IP: 192.168.49.2
	I1027 18:56:55.500050  357750 certs.go:195] generating shared ca certs ...
	I1027 18:56:55.500071  357750 certs.go:227] acquiring lock for ca certs: {Name:mk4bdbca32068f6f817fc35fdc496e961dc3e0d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 18:56:55.500243  357750 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21801-352833/.minikube/ca.key
	I1027 18:56:55.715980  357750 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21801-352833/.minikube/ca.crt ...
	I1027 18:56:55.716019  357750 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-352833/.minikube/ca.crt: {Name:mk44f63d199fa400a2827298fa03b78f2ed37f0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 18:56:55.716256  357750 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21801-352833/.minikube/ca.key ...
	I1027 18:56:55.716276  357750 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-352833/.minikube/ca.key: {Name:mk77897f052d08f6c3cf1811127f99888464704d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 18:56:55.716368  357750 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21801-352833/.minikube/proxy-client-ca.key
	I1027 18:56:55.825508  357750 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21801-352833/.minikube/proxy-client-ca.crt ...
	I1027 18:56:55.825543  357750 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-352833/.minikube/proxy-client-ca.crt: {Name:mkccbad3f1bcadbd55a94e0cd6d1d1c31beab8ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 18:56:55.825726  357750 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21801-352833/.minikube/proxy-client-ca.key ...
	I1027 18:56:55.825738  357750 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-352833/.minikube/proxy-client-ca.key: {Name:mk02f870bfeb39e7048e30d37d8283191317e991 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 18:56:55.825805  357750 certs.go:257] generating profile certs ...
	I1027 18:56:55.825868  357750 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/addons-589824/client.key
	I1027 18:56:55.825882  357750 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/addons-589824/client.crt with IP's: []
	I1027 18:56:55.977322  357750 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/addons-589824/client.crt ...
	I1027 18:56:55.977358  357750 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/addons-589824/client.crt: {Name:mk11bcab359d1a2cac5f29bcc03417bf021ca8fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 18:56:55.977541  357750 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/addons-589824/client.key ...
	I1027 18:56:55.977553  357750 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/addons-589824/client.key: {Name:mkc8659cd46457b56bd99c551ba501ba5e96a71c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 18:56:55.977625  357750 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/addons-589824/apiserver.key.750c5106
	I1027 18:56:55.977644  357750 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/addons-589824/apiserver.crt.750c5106 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1027 18:56:56.079289  357750 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/addons-589824/apiserver.crt.750c5106 ...
	I1027 18:56:56.079323  357750 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/addons-589824/apiserver.crt.750c5106: {Name:mk38b7d109dac7bba4e8ea89f6c34772ad93a1c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 18:56:56.079494  357750 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/addons-589824/apiserver.key.750c5106 ...
	I1027 18:56:56.079510  357750 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/addons-589824/apiserver.key.750c5106: {Name:mk70a14b973b8c7b46c2933f10da41c1a6cbb51e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 18:56:56.079584  357750 certs.go:382] copying /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/addons-589824/apiserver.crt.750c5106 -> /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/addons-589824/apiserver.crt
	I1027 18:56:56.079680  357750 certs.go:386] copying /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/addons-589824/apiserver.key.750c5106 -> /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/addons-589824/apiserver.key
	I1027 18:56:56.079729  357750 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/addons-589824/proxy-client.key
	I1027 18:56:56.079748  357750 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/addons-589824/proxy-client.crt with IP's: []
	I1027 18:56:56.389885  357750 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/addons-589824/proxy-client.crt ...
	I1027 18:56:56.389923  357750 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/addons-589824/proxy-client.crt: {Name:mka57fa39da97889933f822557c0bf7e18955f0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 18:56:56.390114  357750 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/addons-589824/proxy-client.key ...
	I1027 18:56:56.390130  357750 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/addons-589824/proxy-client.key: {Name:mke2e38d668075c4ade04ae6e6ee0f95aced8745 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 18:56:56.390338  357750 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca-key.pem (1675 bytes)
	I1027 18:56:56.390375  357750 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca.pem (1078 bytes)
	I1027 18:56:56.390402  357750 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/cert.pem (1123 bytes)
	I1027 18:56:56.390428  357750 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/key.pem (1679 bytes)
	I1027 18:56:56.391043  357750 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1027 18:56:56.410858  357750 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1027 18:56:56.429623  357750 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1027 18:56:56.448487  357750 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1027 18:56:56.467164  357750 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/addons-589824/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1027 18:56:56.486472  357750 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/addons-589824/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1027 18:56:56.505284  357750 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/addons-589824/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1027 18:56:56.524365  357750 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/addons-589824/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1027 18:56:56.543324  357750 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1027 18:56:56.564260  357750 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1027 18:56:56.578035  357750 ssh_runner.go:195] Run: openssl version
	I1027 18:56:56.584770  357750 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1027 18:56:56.596606  357750 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1027 18:56:56.600637  357750 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 27 18:56 /usr/share/ca-certificates/minikubeCA.pem
	I1027 18:56:56.600717  357750 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1027 18:56:56.634788  357750 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
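The b5213941.0 name is not arbitrary: OpenSSL looks CAs up in /etc/ssl/certs by subject hash, so minikube computes the hash of its CA and links <hash>.0 to it (the .0 suffix disambiguates hash collisions). A minimal sketch using this run's values:

    # Subject hash OpenSSL uses for trust-store lookups
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # -> b5213941

    # Make the CA visible to anything that trusts the system store
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0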
	I1027 18:56:56.644348  357750 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1027 18:56:56.648324  357750 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1027 18:56:56.648380  357750 kubeadm.go:400] StartCluster: {Name:addons-589824 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-589824 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 18:56:56.648446  357750 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 18:56:56.648509  357750 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 18:56:56.677863  357750 cri.go:89] found id: ""
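The crictl query above is the cheap liveness probe for an existing cluster: list every kube-system container in any state, IDs only; empty output, as here, means nothing has ever run on this node and a fresh kubeadm init is safe. Verbatim from the log:

    # Empty output => fresh node
    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system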
	I1027 18:56:56.677976  357750 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1027 18:56:56.686751  357750 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1027 18:56:56.695694  357750 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1027 18:56:56.695757  357750 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1027 18:56:56.704372  357750 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1027 18:56:56.704402  357750 kubeadm.go:157] found existing configuration files:
	
	I1027 18:56:56.704453  357750 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1027 18:56:56.712983  357750 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1027 18:56:56.713048  357750 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1027 18:56:56.721724  357750 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1027 18:56:56.730011  357750 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1027 18:56:56.730077  357750 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1027 18:56:56.738490  357750 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1027 18:56:56.747048  357750 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1027 18:56:56.747104  357750 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1027 18:56:56.755384  357750 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1027 18:56:56.763784  357750 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1027 18:56:56.763835  357750 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
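The four checks above all follow one pattern: grep each kubeconfig for the expected control-plane endpoint and remove the file if the endpoint is missing or, as here, the file does not exist yet. Roughly equivalent shell:

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"   # stale or absent: clear it before kubeadm init
    done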
	I1027 18:56:56.771819  357750 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1027 18:56:56.811666  357750 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1027 18:56:56.811750  357750 kubeadm.go:318] [preflight] Running pre-flight checks
	I1027 18:56:56.833868  357750 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1027 18:56:56.833957  357750 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1042-gcp
	I1027 18:56:56.834009  357750 kubeadm.go:318] OS: Linux
	I1027 18:56:56.834103  357750 kubeadm.go:318] CGROUPS_CPU: enabled
	I1027 18:56:56.834193  357750 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1027 18:56:56.834250  357750 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1027 18:56:56.834327  357750 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1027 18:56:56.834398  357750 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1027 18:56:56.834473  357750 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1027 18:56:56.834524  357750 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1027 18:56:56.834560  357750 kubeadm.go:318] CGROUPS_IO: enabled
	I1027 18:56:56.908261  357750 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1027 18:56:56.908413  357750 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1027 18:56:56.908569  357750 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1027 18:56:56.917843  357750 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1027 18:56:56.921762  357750 out.go:252]   - Generating certificates and keys ...
	I1027 18:56:56.921898  357750 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1027 18:56:56.922001  357750 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1027 18:56:57.231482  357750 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1027 18:56:57.386011  357750 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1027 18:56:57.669283  357750 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1027 18:56:57.820597  357750 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1027 18:56:58.074441  357750 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1027 18:56:58.074598  357750 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-589824 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1027 18:56:58.183627  357750 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1027 18:56:58.183838  357750 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-589824 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1027 18:56:58.753158  357750 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1027 18:56:59.117691  357750 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1027 18:56:59.312307  357750 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1027 18:56:59.312393  357750 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1027 18:56:59.809792  357750 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1027 18:57:00.239622  357750 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1027 18:57:00.446767  357750 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1027 18:57:00.597313  357750 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1027 18:57:00.790239  357750 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1027 18:57:00.790695  357750 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1027 18:57:00.794923  357750 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1027 18:57:00.796525  357750 out.go:252]   - Booting up control plane ...
	I1027 18:57:00.796633  357750 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1027 18:57:00.796764  357750 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1027 18:57:00.797320  357750 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1027 18:57:00.811724  357750 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1027 18:57:00.811902  357750 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1027 18:57:00.820281  357750 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1027 18:57:00.820428  357750 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1027 18:57:00.820494  357750 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1027 18:57:00.923742  357750 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1027 18:57:00.923919  357750 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1027 18:57:01.425534  357750 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.935238ms
	I1027 18:57:01.429591  357750 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1027 18:57:01.429755  357750 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1027 18:57:01.429895  357750 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1027 18:57:01.430018  357750 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1027 18:57:02.938314  357750 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.508653425s
	I1027 18:57:03.884339  357750 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.451954649s
	I1027 18:57:05.933058  357750 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.503402444s
	I1027 18:57:05.947434  357750 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1027 18:57:05.964173  357750 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1027 18:57:05.976615  357750 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1027 18:57:05.976918  357750 kubeadm.go:318] [mark-control-plane] Marking the node addons-589824 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1027 18:57:05.986438  357750 kubeadm.go:318] [bootstrap-token] Using token: ll4eiv.hma7u1nr1623ia8e
	I1027 18:57:05.987933  357750 out.go:252]   - Configuring RBAC rules ...
	I1027 18:57:05.988086  357750 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1027 18:57:05.992476  357750 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1027 18:57:05.999042  357750 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1027 18:57:06.002113  357750 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1027 18:57:06.006323  357750 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1027 18:57:06.009518  357750 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1027 18:57:06.339713  357750 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1027 18:57:06.760751  357750 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1027 18:57:07.338899  357750 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1027 18:57:07.339766  357750 kubeadm.go:318] 
	I1027 18:57:07.339863  357750 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1027 18:57:07.339875  357750 kubeadm.go:318] 
	I1027 18:57:07.339991  357750 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1027 18:57:07.340015  357750 kubeadm.go:318] 
	I1027 18:57:07.340072  357750 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1027 18:57:07.340198  357750 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1027 18:57:07.340265  357750 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1027 18:57:07.340272  357750 kubeadm.go:318] 
	I1027 18:57:07.340339  357750 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1027 18:57:07.340345  357750 kubeadm.go:318] 
	I1027 18:57:07.340390  357750 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1027 18:57:07.340396  357750 kubeadm.go:318] 
	I1027 18:57:07.340439  357750 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1027 18:57:07.340505  357750 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1027 18:57:07.340564  357750 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1027 18:57:07.340570  357750 kubeadm.go:318] 
	I1027 18:57:07.340648  357750 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1027 18:57:07.340717  357750 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1027 18:57:07.340723  357750 kubeadm.go:318] 
	I1027 18:57:07.340793  357750 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token ll4eiv.hma7u1nr1623ia8e \
	I1027 18:57:07.340884  357750 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:ab29e81999671591f366788f5ae9ffb132789ebc71f7c0efdaecd38575a5ab6a \
	I1027 18:57:07.340904  357750 kubeadm.go:318] 	--control-plane 
	I1027 18:57:07.340922  357750 kubeadm.go:318] 
	I1027 18:57:07.341025  357750 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1027 18:57:07.341035  357750 kubeadm.go:318] 
	I1027 18:57:07.341142  357750 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token ll4eiv.hma7u1nr1623ia8e \
	I1027 18:57:07.341276  357750 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:ab29e81999671591f366788f5ae9ffb132789ebc71f7c0efdaecd38575a5ab6a 
	I1027 18:57:07.343780  357750 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1042-gcp\n", err: exit status 1
	I1027 18:57:07.343917  357750 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
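The endpoints kubeadm polls in the [control-plane-check] phase above can also be hit by hand when a boot stalls; run from inside the node (e.g. via minikube ssh -p addons-589824), with -k for the self-signed serving certs. Addresses as logged for this run:

    curl -s  http://127.0.0.1:10248/healthz    # kubelet
    curl -sk https://127.0.0.1:10257/healthz   # kube-controller-manager
    curl -sk https://127.0.0.1:10259/livez     # kube-scheduler
    curl -sk https://192.168.49.2:8443/livez   # kube-apiserver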
	I1027 18:57:07.343952  357750 cni.go:84] Creating CNI manager for ""
	I1027 18:57:07.343965  357750 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 18:57:07.346085  357750 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1027 18:57:07.347565  357750 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1027 18:57:07.352448  357750 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1027 18:57:07.352468  357750 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1027 18:57:07.366292  357750 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1027 18:57:07.575829  357750 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1027 18:57:07.575906  357750 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 18:57:07.575924  357750 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-589824 minikube.k8s.io/updated_at=2025_10_27T18_57_07_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6a09c31caecd3126cc8337ca95e52380a56f5a0f minikube.k8s.io/name=addons-589824 minikube.k8s.io/primary=true
	I1027 18:57:07.665657  357750 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 18:57:07.665674  357750 ops.go:34] apiserver oom_adj: -16
	I1027 18:57:08.165792  357750 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 18:57:08.666386  357750 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 18:57:09.165850  357750 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 18:57:09.666299  357750 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 18:57:10.166632  357750 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 18:57:10.666391  357750 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 18:57:11.166719  357750 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 18:57:11.666054  357750 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 18:57:12.166540  357750 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 18:57:12.232521  357750 kubeadm.go:1113] duration metric: took 4.656676265s to wait for elevateKubeSystemPrivileges
	I1027 18:57:12.232546  357750 kubeadm.go:402] duration metric: took 15.584173488s to StartCluster
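The burst of identical "get sa default" calls above is a readiness poll: the default ServiceAccount is created asynchronously by the controller-manager, and the minikube-rbac clusterrolebinding created earlier only takes effect once it exists. The loop is roughly (sketch; paths as logged):

    until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5   # matches the ~500ms spacing visible in the timestamps above
    done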
	I1027 18:57:12.232563  357750 settings.go:142] acquiring lock: {Name:mk8304c2106bf310642e0949fc0266ccb50f2f95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 18:57:12.232689  357750 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21801-352833/kubeconfig
	I1027 18:57:12.233238  357750 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-352833/kubeconfig: {Name:mk24cbe512a6907c874f3fb7a85390a8f9fd2b31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 18:57:12.233491  357750 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 18:57:12.233507  357750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1027 18:57:12.233597  357750 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1027 18:57:12.233710  357750 config.go:182] Loaded profile config "addons-589824": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 18:57:12.233763  357750 addons.go:69] Setting default-storageclass=true in profile "addons-589824"
	I1027 18:57:12.233774  357750 addons.go:69] Setting gcp-auth=true in profile "addons-589824"
	I1027 18:57:12.233776  357750 addons.go:69] Setting yakd=true in profile "addons-589824"
	I1027 18:57:12.233786  357750 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-589824"
	I1027 18:57:12.233792  357750 mustload.go:65] Loading cluster: addons-589824
	I1027 18:57:12.233797  357750 addons.go:238] Setting addon yakd=true in "addons-589824"
	I1027 18:57:12.233814  357750 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-589824"
	I1027 18:57:12.233843  357750 addons.go:69] Setting ingress-dns=true in profile "addons-589824"
	I1027 18:57:12.233834  357750 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-589824"
	I1027 18:57:12.233866  357750 addons.go:69] Setting inspektor-gadget=true in profile "addons-589824"
	I1027 18:57:12.233881  357750 addons.go:238] Setting addon inspektor-gadget=true in "addons-589824"
	I1027 18:57:12.233898  357750 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-589824"
	I1027 18:57:12.233908  357750 host.go:66] Checking if "addons-589824" exists ...
	I1027 18:57:12.233933  357750 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-589824"
	I1027 18:57:12.233959  357750 host.go:66] Checking if "addons-589824" exists ...
	I1027 18:57:12.233973  357750 config.go:182] Loaded profile config "addons-589824": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 18:57:12.233981  357750 addons.go:69] Setting cloud-spanner=true in profile "addons-589824"
	I1027 18:57:12.233997  357750 addons.go:238] Setting addon cloud-spanner=true in "addons-589824"
	I1027 18:57:12.234022  357750 host.go:66] Checking if "addons-589824" exists ...
	I1027 18:57:12.234240  357750 cli_runner.go:164] Run: docker container inspect addons-589824 --format={{.State.Status}}
	I1027 18:57:12.234281  357750 cli_runner.go:164] Run: docker container inspect addons-589824 --format={{.State.Status}}
	I1027 18:57:12.234409  357750 cli_runner.go:164] Run: docker container inspect addons-589824 --format={{.State.Status}}
	I1027 18:57:12.234487  357750 addons.go:69] Setting registry-creds=true in profile "addons-589824"
	I1027 18:57:12.234511  357750 addons.go:238] Setting addon registry-creds=true in "addons-589824"
	I1027 18:57:12.234546  357750 cli_runner.go:164] Run: docker container inspect addons-589824 --format={{.State.Status}}
	I1027 18:57:12.234562  357750 addons.go:69] Setting metrics-server=true in profile "addons-589824"
	I1027 18:57:12.234579  357750 addons.go:69] Setting registry=true in profile "addons-589824"
	I1027 18:57:12.234593  357750 addons.go:238] Setting addon registry=true in "addons-589824"
	I1027 18:57:12.234603  357750 addons.go:238] Setting addon metrics-server=true in "addons-589824"
	I1027 18:57:12.234613  357750 host.go:66] Checking if "addons-589824" exists ...
	I1027 18:57:12.234623  357750 host.go:66] Checking if "addons-589824" exists ...
	I1027 18:57:12.234743  357750 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-589824"
	I1027 18:57:12.234750  357750 addons.go:69] Setting volcano=true in profile "addons-589824"
	I1027 18:57:12.234779  357750 addons.go:69] Setting storage-provisioner=true in profile "addons-589824"
	I1027 18:57:12.234798  357750 addons.go:238] Setting addon storage-provisioner=true in "addons-589824"
	I1027 18:57:12.234827  357750 host.go:66] Checking if "addons-589824" exists ...
	I1027 18:57:12.234850  357750 addons.go:238] Setting addon volcano=true in "addons-589824"
	I1027 18:57:12.234909  357750 host.go:66] Checking if "addons-589824" exists ...
	I1027 18:57:12.235098  357750 cli_runner.go:164] Run: docker container inspect addons-589824 --format={{.State.Status}}
	I1027 18:57:12.235105  357750 cli_runner.go:164] Run: docker container inspect addons-589824 --format={{.State.Status}}
	I1027 18:57:12.234770  357750 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-589824"
	I1027 18:57:12.233833  357750 host.go:66] Checking if "addons-589824" exists ...
	I1027 18:57:12.236816  357750 cli_runner.go:164] Run: docker container inspect addons-589824 --format={{.State.Status}}
	I1027 18:57:12.233976  357750 host.go:66] Checking if "addons-589824" exists ...
	I1027 18:57:12.234551  357750 host.go:66] Checking if "addons-589824" exists ...
	I1027 18:57:12.237479  357750 cli_runner.go:164] Run: docker container inspect addons-589824 --format={{.State.Status}}
	I1027 18:57:12.237615  357750 cli_runner.go:164] Run: docker container inspect addons-589824 --format={{.State.Status}}
	I1027 18:57:12.234565  357750 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-589824"
	I1027 18:57:12.237701  357750 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-589824"
	I1027 18:57:12.237743  357750 host.go:66] Checking if "addons-589824" exists ...
	I1027 18:57:12.237790  357750 addons.go:69] Setting volumesnapshots=true in profile "addons-589824"
	I1027 18:57:12.237813  357750 addons.go:238] Setting addon volumesnapshots=true in "addons-589824"
	I1027 18:57:12.237842  357750 host.go:66] Checking if "addons-589824" exists ...
	I1027 18:57:12.234546  357750 cli_runner.go:164] Run: docker container inspect addons-589824 --format={{.State.Status}}
	I1027 18:57:12.233859  357750 addons.go:238] Setting addon ingress-dns=true in "addons-589824"
	I1027 18:57:12.238086  357750 host.go:66] Checking if "addons-589824" exists ...
	I1027 18:57:12.238299  357750 cli_runner.go:164] Run: docker container inspect addons-589824 --format={{.State.Status}}
	I1027 18:57:12.238417  357750 cli_runner.go:164] Run: docker container inspect addons-589824 --format={{.State.Status}}
	I1027 18:57:12.239853  357750 cli_runner.go:164] Run: docker container inspect addons-589824 --format={{.State.Status}}
	I1027 18:57:12.241412  357750 cli_runner.go:164] Run: docker container inspect addons-589824 --format={{.State.Status}}
	I1027 18:57:12.242559  357750 out.go:179] * Verifying Kubernetes components...
	I1027 18:57:12.233767  357750 addons.go:69] Setting ingress=true in profile "addons-589824"
	I1027 18:57:12.243395  357750 addons.go:238] Setting addon ingress=true in "addons-589824"
	I1027 18:57:12.243478  357750 host.go:66] Checking if "addons-589824" exists ...
	I1027 18:57:12.244500  357750 cli_runner.go:164] Run: docker container inspect addons-589824 --format={{.State.Status}}
	I1027 18:57:12.248856  357750 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 18:57:12.258320  357750 cli_runner.go:164] Run: docker container inspect addons-589824 --format={{.State.Status}}
	I1027 18:57:12.262972  357750 cli_runner.go:164] Run: docker container inspect addons-589824 --format={{.State.Status}}
	I1027 18:57:12.283314  357750 host.go:66] Checking if "addons-589824" exists ...
	I1027 18:57:12.295619  357750 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1027 18:57:12.296422  357750 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1027 18:57:12.297005  357750 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1027 18:57:12.297025  357750 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1027 18:57:12.297104  357750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-589824
	I1027 18:57:12.297692  357750 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1027 18:57:12.297709  357750 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1027 18:57:12.297808  357750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-589824
	I1027 18:57:12.298613  357750 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1027 18:57:12.301858  357750 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1027 18:57:12.304476  357750 addons.go:238] Setting addon default-storageclass=true in "addons-589824"
	I1027 18:57:12.304537  357750 host.go:66] Checking if "addons-589824" exists ...
	I1027 18:57:12.305127  357750 cli_runner.go:164] Run: docker container inspect addons-589824 --format={{.State.Status}}
	W1027 18:57:12.305446  357750 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1027 18:57:12.306042  357750 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1027 18:57:12.306236  357750 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1027 18:57:12.306260  357750 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1027 18:57:12.306331  357750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-589824
	I1027 18:57:12.308318  357750 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1027 18:57:12.308393  357750 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1027 18:57:12.309943  357750 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1027 18:57:12.309966  357750 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1027 18:57:12.310035  357750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-589824
	I1027 18:57:12.311441  357750 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1027 18:57:12.314358  357750 out.go:179]   - Using image docker.io/registry:3.0.0
	I1027 18:57:12.315960  357750 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1027 18:57:12.320880  357750 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1027 18:57:12.320952  357750 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1027 18:57:12.325327  357750 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1027 18:57:12.325354  357750 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1027 18:57:12.325426  357750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-589824
	I1027 18:57:12.330206  357750 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1027 18:57:12.334567  357750 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1027 18:57:12.337443  357750 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1027 18:57:12.339740  357750 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-589824"
	I1027 18:57:12.339796  357750 host.go:66] Checking if "addons-589824" exists ...
	I1027 18:57:12.340297  357750 cli_runner.go:164] Run: docker container inspect addons-589824 --format={{.State.Status}}
	I1027 18:57:12.342051  357750 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1027 18:57:12.342074  357750 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1027 18:57:12.342151  357750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-589824
	I1027 18:57:12.343584  357750 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1027 18:57:12.343603  357750 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1027 18:57:12.343667  357750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-589824
	I1027 18:57:12.344506  357750 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1027 18:57:12.346025  357750 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1027 18:57:12.346042  357750 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1027 18:57:12.346101  357750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-589824
	I1027 18:57:12.348080  357750 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1027 18:57:12.349632  357750 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1027 18:57:12.351863  357750 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1027 18:57:12.353323  357750 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1027 18:57:12.353354  357750 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1027 18:57:12.353431  357750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-589824
	I1027 18:57:12.361798  357750 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1027 18:57:12.366319  357750 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1027 18:57:12.369247  357750 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1027 18:57:12.369286  357750 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1027 18:57:12.369365  357750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-589824
	I1027 18:57:12.371323  357750 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1027 18:57:12.371352  357750 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1027 18:57:12.371436  357750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-589824
	I1027 18:57:12.385506  357750 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 18:57:12.387925  357750 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 18:57:12.387953  357750 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1027 18:57:12.388026  357750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-589824
	I1027 18:57:12.395178  357750 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1027 18:57:12.396344  357750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/addons-589824/id_rsa Username:docker}
	I1027 18:57:12.396523  357750 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1027 18:57:12.396543  357750 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1027 18:57:12.396620  357750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-589824
	I1027 18:57:12.403855  357750 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1027 18:57:12.404464  357750 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1027 18:57:12.404626  357750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-589824
	I1027 18:57:12.407940  357750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
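The CoreDNS edit above is a single pipeline: dump the coredns ConfigMap, use sed to splice a hosts block in front of the forward directive (resolving host.minikube.internal to the gateway, 192.168.49.1) and a log directive before errors, then replace the object. The same command unrolled for readability, with the sed expressions verbatim from the log (plain kubectl used here for brevity; the run invokes the node's binary with an explicit kubeconfig):

    kubectl -n kube-system get configmap coredns -o yaml \
      | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' \
            -e '/^        errors *$/i \        log' \
      | kubectl replace -f -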
	I1027 18:57:12.410552  357750 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1027 18:57:12.411659  357750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/addons-589824/id_rsa Username:docker}
	I1027 18:57:12.413951  357750 out.go:179]   - Using image docker.io/busybox:stable
	I1027 18:57:12.415427  357750 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1027 18:57:12.415445  357750 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1027 18:57:12.415525  357750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-589824
	I1027 18:57:12.424827  357750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/addons-589824/id_rsa Username:docker}
	I1027 18:57:12.425472  357750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/addons-589824/id_rsa Username:docker}
	I1027 18:57:12.426453  357750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/addons-589824/id_rsa Username:docker}
	I1027 18:57:12.427157  357750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/addons-589824/id_rsa Username:docker}
	I1027 18:57:12.427824  357750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/addons-589824/id_rsa Username:docker}
	I1027 18:57:12.430254  357750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/addons-589824/id_rsa Username:docker}
	I1027 18:57:12.433151  357750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/addons-589824/id_rsa Username:docker}
	I1027 18:57:12.435341  357750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/addons-589824/id_rsa Username:docker}
	I1027 18:57:12.437790  357750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/addons-589824/id_rsa Username:docker}
	W1027 18:57:12.446277  357750 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1027 18:57:12.446458  357750 retry.go:31] will retry after 316.324147ms: ssh: handshake failed: EOF
	I1027 18:57:12.446378  357750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/addons-589824/id_rsa Username:docker}
	I1027 18:57:12.457231  357750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/addons-589824/id_rsa Username:docker}
	W1027 18:57:12.463308  357750 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1027 18:57:12.463404  357750 retry.go:31] will retry after 233.328096ms: ssh: handshake failed: EOF
	I1027 18:57:12.467761  357750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/addons-589824/id_rsa Username:docker}
	I1027 18:57:12.473735  357750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/addons-589824/id_rsa Username:docker}
	W1027 18:57:12.477829  357750 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1027 18:57:12.477858  357750 retry.go:31] will retry after 200.746442ms: ssh: handshake failed: EOF
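Every "new ssh client" line above targets 127.0.0.1:33140: with the docker driver the node is a container, and minikube asks Docker which host port it mapped to the container's sshd. The Go template in the inspect calls performs exactly that lookup, and the handshake-failed/retry pairs are plain retries while sshd finishes starting. Illustrative, with this run's values (key path abbreviated):

    docker container inspect \
      -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-589824
    # -> 33140

    ssh -p 33140 -i .../machines/addons-589824/id_rsa docker@127.0.0.1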
	I1027 18:57:12.521921  357750 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 18:57:12.576307  357750 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1027 18:57:12.576343  357750 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1027 18:57:12.576638  357750 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1027 18:57:12.576661  357750 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1027 18:57:12.590988  357750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1027 18:57:12.596742  357750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 18:57:12.598902  357750 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1027 18:57:12.598929  357750 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1027 18:57:12.600733  357750 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1027 18:57:12.600758  357750 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1027 18:57:12.605339  357750 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1027 18:57:12.605432  357750 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1027 18:57:12.632114  357750 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1027 18:57:12.632154  357750 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1027 18:57:12.635358  357750 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1027 18:57:12.635386  357750 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1027 18:57:12.638751  357750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1027 18:57:12.640759  357750 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1027 18:57:12.640783  357750 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1027 18:57:12.644031  357750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1027 18:57:12.645007  357750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1027 18:57:12.649734  357750 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1027 18:57:12.649760  357750 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1027 18:57:12.663467  357750 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1027 18:57:12.663502  357750 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1027 18:57:12.666781  357750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1027 18:57:12.672294  357750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 18:57:12.672940  357750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1027 18:57:12.675362  357750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1027 18:57:12.688262  357750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1027 18:57:12.700648  357750 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1027 18:57:12.700694  357750 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1027 18:57:12.702874  357750 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1027 18:57:12.702966  357750 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1027 18:57:12.740693  357750 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1027 18:57:12.740724  357750 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1027 18:57:12.758058  357750 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1027 18:57:12.758154  357750 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1027 18:57:12.776771  357750 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1027 18:57:12.776887  357750 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1027 18:57:12.797989  357750 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1027 18:57:12.800179  357750 node_ready.go:35] waiting up to 6m0s for node "addons-589824" to be "Ready" ...
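
node_ready.go polls the node object until its Ready condition reports True, here with a 6m ceiling; the "will retry" warnings later in this log are iterations of that loop. A rough equivalent via kubectl, as a sketch (the poll interval and helper name are illustrative):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // waitNodeReady polls the node's Ready condition until it is True or the
    // timeout elapses, mirroring the wait logged by node_ready.go.
    func waitNodeReady(node string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            out, err := exec.Command("kubectl", "get", "node", node, "-o",
                `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
            if err == nil && strings.TrimSpace(string(out)) == "True" {
                return nil
            }
            time.Sleep(2 * time.Second) // coarse poll interval for the sketch
        }
        return fmt.Errorf("node %q not Ready within %v", node, timeout)
    }

    func main() {
        if err := waitNodeReady("addons-589824", 6*time.Minute); err != nil {
            fmt.Println(err)
        }
    }
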
	I1027 18:57:12.806349  357750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1027 18:57:12.826392  357750 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1027 18:57:12.826426  357750 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1027 18:57:12.889726  357750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1027 18:57:12.913075  357750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1027 18:57:12.922814  357750 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1027 18:57:12.922841  357750 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1027 18:57:12.982744  357750 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1027 18:57:12.982775  357750 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1027 18:57:13.017618  357750 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1027 18:57:13.017647  357750 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1027 18:57:13.043806  357750 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1027 18:57:13.043839  357750 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1027 18:57:13.084632  357750 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1027 18:57:13.084725  357750 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1027 18:57:13.124708  357750 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1027 18:57:13.124748  357750 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1027 18:57:13.163085  357750 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1027 18:57:13.163122  357750 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1027 18:57:13.200594  357750 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1027 18:57:13.200636  357750 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1027 18:57:13.229807  357750 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1027 18:57:13.229837  357750 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1027 18:57:13.255243  357750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1027 18:57:13.284691  357750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1027 18:57:13.303994  357750 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-589824" context rescaled to 1 replicas
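
kapi.go:214 rescales the coredns deployment to a single replica for this single-node cluster. The CLI equivalent, as a sketch (minikube performs the rescale through the API rather than by shelling out):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Equivalent of the rescale logged above.
        out, err := exec.Command("kubectl", "-n", "kube-system",
            "scale", "deployment", "coredns", "--replicas=1").CombinedOutput()
        fmt.Print(string(out))
        if err != nil {
            fmt.Println(err)
        }
    }
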
	I1027 18:57:13.621741  357750 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.024955931s)
	I1027 18:57:13.623669  357750 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (1.032623239s)
	I1027 18:57:13.885197  357750 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.218373268s)
	I1027 18:57:13.885242  357750 addons.go:479] Verifying addon ingress=true in "addons-589824"
	I1027 18:57:13.885307  357750 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.212971719s)
	I1027 18:57:13.885378  357750 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.209993765s)
	I1027 18:57:13.885411  357750 addons.go:479] Verifying addon registry=true in "addons-589824"
	I1027 18:57:13.885490  357750 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.197192096s)
	I1027 18:57:13.885520  357750 addons.go:479] Verifying addon metrics-server=true in "addons-589824"
	I1027 18:57:13.885346  357750 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.212373152s)
	W1027 18:57:13.885347  357750 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:13.885633  357750 retry.go:31] will retry after 305.173504ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
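
The validation failure above means at least one YAML document inside ig-crd.yaml reached kubectl without the required top-level apiVersion and kind fields; the rest of the bundle applied cleanly, so only that file is rejected and the whole apply is retried. A sketch of a pre-check for this condition, assuming sigs.k8s.io/yaml is available (the naive document split is illustrative; a real linter would use a streaming YAML decoder):

    package main

    import (
        "fmt"
        "os"
        "strings"

        "sigs.k8s.io/yaml"
    )

    // lintManifest reports YAML documents lacking apiVersion or kind, the
    // condition behind kubectl's "[apiVersion not set, kind not set]" error.
    func lintManifest(path string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        for i, doc := range strings.Split(string(data), "\n---") {
            if strings.TrimSpace(doc) == "" {
                continue // empty documents are skipped
            }
            var obj struct {
                APIVersion string `json:"apiVersion"`
                Kind       string `json:"kind"`
            }
            if err := yaml.Unmarshal([]byte(doc), &obj); err != nil {
                return fmt.Errorf("document %d: %w", i, err)
            }
            if obj.APIVersion == "" || obj.Kind == "" {
                return fmt.Errorf("document %d: apiVersion/kind not set", i)
            }
        }
        return nil
    }

    func main() {
        if err := lintManifest("/etc/kubernetes/addons/ig-crd.yaml"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
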
	I1027 18:57:13.887770  357750 out.go:179] * Verifying ingress addon...
	I1027 18:57:13.887804  357750 out.go:179] * Verifying registry addon...
	I1027 18:57:13.889787  357750 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1027 18:57:13.889822  357750 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1027 18:57:13.892795  357750 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1027 18:57:13.892904  357750 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1027 18:57:13.892923  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
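
kapi.go:96 polls every pod matching a label selector until it reports Running; "Pending" here is the pod phase, still unschedulable because the node itself is not yet Ready. The underlying query, sketched via kubectl:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // Sketch of the poll behind kapi.go:96: list pods by label selector and
    // print each phase; these stay "Pending" until the node becomes Ready.
    func main() {
        out, err := exec.Command("kubectl", "-n", "kube-system", "get", "pods",
            "-l", "kubernetes.io/minikube-addons=registry",
            "-o", "jsonpath={.items[*].status.phase}").CombinedOutput()
        fmt.Println(string(out))
        if err != nil {
            fmt.Println(err)
        }
    }
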
	I1027 18:57:14.191837  357750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 18:57:14.317794  357750 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.511395496s)
	W1027 18:57:14.317847  357750 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1027 18:57:14.317875  357750 retry.go:31] will retry after 180.995068ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
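
This failure is an ordering problem rather than a bad manifest: the VolumeSnapshot CRDs and a VolumeSnapshotClass that depends on them are applied in one kubectl invocation, and the CRDs are not yet established when the custom resource is validated, hence "ensure CRDs are installed first". The retry succeeds once the CRDs register (see the completed apply at 18:57:17). A sketch of the two-phase alternative, applying the CRDs, waiting for their Established condition, then applying the dependent resources:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // run is a tiny kubectl helper for the sketch below.
    func run(args ...string) error {
        out, err := exec.Command("kubectl", args...).CombinedOutput()
        fmt.Print(string(out))
        return err
    }

    func main() {
        // Hypothetical two-phase apply: CRDs first, wait, then the CR.
        if err := run("apply", "-f",
            "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml"); err != nil {
            fmt.Println(err)
            return
        }
        if err := run("wait", "--for=condition=established", "--timeout=60s",
            "crd/volumesnapshotclasses.snapshot.storage.k8s.io"); err != nil {
            fmt.Println(err)
            return
        }
        if err := run("apply", "-f",
            "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml"); err != nil {
            fmt.Println(err)
        }
    }
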
	I1027 18:57:14.317905  357750 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.428148396s)
	I1027 18:57:14.317987  357750 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.404881701s)
	I1027 18:57:14.318406  357750 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.063111192s)
	I1027 18:57:14.318438  357750 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-589824"
	I1027 18:57:14.318765  357750 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.034028321s)
	I1027 18:57:14.320352  357750 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-589824 service yakd-dashboard -n yakd-dashboard
	
	I1027 18:57:14.320445  357750 out.go:179] * Verifying csi-hostpath-driver addon...
	I1027 18:57:14.322990  357750 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1027 18:57:14.330398  357750 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1027 18:57:14.330517  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1027 18:57:14.335248  357750 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class csi-hostpath-sc as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "csi-hostpath-sc": the object has been modified; please apply your changes to the latest version and try again]
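
The storage-provisioner-rancher warning is a standard optimistic-concurrency conflict: another writer updated the csi-hostpath-sc StorageClass between a read and its update, so the API server rejected the stale write. client-go ships a helper for exactly this pattern; a minimal sketch, assuming client-go and the kubeconfig path from this log:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/util/retry"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // RetryOnConflict re-reads and re-applies the change whenever Update
        // returns a Conflict, the error class logged above.
        err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
            sc, err := client.StorageV1().StorageClasses().Get(
                context.TODO(), "csi-hostpath-sc", metav1.GetOptions{})
            if err != nil {
                return err
            }
            if sc.Annotations == nil {
                sc.Annotations = map[string]string{}
            }
            sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "false"
            _, err = client.StorageV1().StorageClasses().Update(
                context.TODO(), sc, metav1.UpdateOptions{})
            return err
        })
        if err != nil {
            fmt.Println("update failed:", err)
        }
    }
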
	I1027 18:57:14.394203  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:14.394306  357750 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1027 18:57:14.394325  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:14.499596  357750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	W1027 18:57:14.803404  357750 node_ready.go:57] node "addons-589824" has "Ready":"False" status (will retry)
	I1027 18:57:14.826980  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1027 18:57:14.865618  357750 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:14.865656  357750 retry.go:31] will retry after 211.067145ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:14.893599  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:14.893810  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:15.077024  357750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 18:57:15.326361  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:15.392689  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:15.392756  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:15.827405  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:15.893589  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:15.893772  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:16.326327  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:16.393309  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:16.393309  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1027 18:57:16.803936  357750 node_ready.go:57] node "addons-589824" has "Ready":"False" status (will retry)
	I1027 18:57:16.826536  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:16.927095  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:16.927407  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:17.036467  357750 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.53680921s)
	I1027 18:57:17.036563  357750 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.959497191s)
	W1027 18:57:17.036606  357750 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:17.036635  357750 retry.go:31] will retry after 790.979447ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:17.327341  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:17.428402  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:17.428452  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:17.827179  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:17.828200  357750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 18:57:17.892888  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:17.893075  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:18.327499  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:18.392768  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:18.392848  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1027 18:57:18.393435  357750 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:18.393461  357750 retry.go:31] will retry after 991.470073ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:18.826722  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:18.893711  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:18.893979  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1027 18:57:19.302903  357750 node_ready.go:57] node "addons-589824" has "Ready":"False" status (will retry)
	I1027 18:57:19.328526  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:19.385611  357750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 18:57:19.392910  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:19.392984  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:19.827452  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:19.893010  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:19.893077  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:19.902429  357750 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1027 18:57:19.902517  357750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-589824
	I1027 18:57:19.923575  357750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/addons-589824/id_rsa Username:docker}
	W1027 18:57:19.956481  357750 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:19.956525  357750 retry.go:31] will retry after 1.650834557s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
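
The ig-crd retries above back off with growing, jittered delays (305ms, 211ms, 991ms, 1.65s, and later 8.39s). A minimal sketch of such a jittered exponential backoff policy (the constants are illustrative, not retry.go's):

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // backoff returns an exponentially growing delay with random jitter,
    // the shape of the retry intervals logged by retry.go:31.
    func backoff(attempt int) time.Duration {
        base := 200 * time.Millisecond << uint(attempt) // exponential growth
        jitter := time.Duration(rand.Int63n(int64(base)))
        return base + jitter
    }

    func main() {
        for i := 0; i < 5; i++ {
            fmt.Println(backoff(i))
        }
    }
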
	I1027 18:57:20.032158  357750 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1027 18:57:20.045081  357750 addons.go:238] Setting addon gcp-auth=true in "addons-589824"
	I1027 18:57:20.045151  357750 host.go:66] Checking if "addons-589824" exists ...
	I1027 18:57:20.045672  357750 cli_runner.go:164] Run: docker container inspect addons-589824 --format={{.State.Status}}
	I1027 18:57:20.064402  357750 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1027 18:57:20.064462  357750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-589824
	I1027 18:57:20.083148  357750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/addons-589824/id_rsa Username:docker}
	I1027 18:57:20.183714  357750 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1027 18:57:20.185108  357750 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1027 18:57:20.186324  357750 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1027 18:57:20.186342  357750 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1027 18:57:20.201032  357750 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1027 18:57:20.201072  357750 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1027 18:57:20.215117  357750 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1027 18:57:20.215159  357750 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1027 18:57:20.229544  357750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1027 18:57:20.327531  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:20.393454  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:20.393530  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:20.553465  357750 addons.go:479] Verifying addon gcp-auth=true in "addons-589824"
	I1027 18:57:20.554686  357750 out.go:179] * Verifying gcp-auth addon...
	I1027 18:57:20.557676  357750 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1027 18:57:20.560786  357750 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1027 18:57:20.560810  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
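
gcp-auth ties together the last few steps: the credentials streamed to the node at 18:57:19, the namespace/service/webhook manifests applied at 18:57:20, and the webhook pod the verifier now polls; once ready, the webhook is expected to mutate new pods so they receive those credentials. A quick check that the webhook registration landed, as a sketch (the exact configuration name is not shown in this log):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // List mutating webhook configurations; after the apply above a
        // gcp-auth entry is expected.
        out, err := exec.Command("kubectl", "get",
            "mutatingwebhookconfigurations", "-o", "name").CombinedOutput()
        fmt.Print(string(out))
        if err != nil {
            fmt.Println(err)
        }
    }
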
	I1027 18:57:20.826368  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:20.893436  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:20.893596  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:21.061467  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1027 18:57:21.303672  357750 node_ready.go:57] node "addons-589824" has "Ready":"False" status (will retry)
	I1027 18:57:21.326753  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:21.394089  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:21.394216  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:21.561856  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:21.607899  357750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 18:57:21.826862  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:21.893731  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:21.893762  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:22.061682  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1027 18:57:22.172966  357750 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:22.173003  357750 retry.go:31] will retry after 1.702668474s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:22.326642  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:22.393584  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:22.393743  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:22.560728  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:22.826515  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:22.893513  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:22.893694  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:23.060890  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1027 18:57:23.304062  357750 node_ready.go:57] node "addons-589824" has "Ready":"False" status (will retry)
	I1027 18:57:23.326126  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:23.393246  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:23.393250  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:23.561309  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:23.826902  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:23.875927  357750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 18:57:23.893829  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:23.893849  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:24.060909  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:24.326466  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:24.393536  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:24.393674  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1027 18:57:24.445779  357750 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:24.445813  357750 retry.go:31] will retry after 2.853721544s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:24.560702  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:24.826571  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:24.893347  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:24.893546  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:25.061595  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:25.326667  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:25.393652  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:25.393734  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:25.561947  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1027 18:57:25.803972  357750 node_ready.go:57] node "addons-589824" has "Ready":"False" status (will retry)
	I1027 18:57:25.827550  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:25.893485  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:25.893722  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:26.061575  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:26.326734  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:26.393831  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:26.393991  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:26.560565  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:26.826927  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:26.892798  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:26.892850  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:27.060764  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:27.299955  357750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 18:57:27.327189  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:27.393319  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:27.393465  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:27.562907  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:27.827155  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1027 18:57:27.866554  357750 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:27.866598  357750 retry.go:31] will retry after 2.412375323s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:27.893638  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:27.893749  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:28.060887  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1027 18:57:28.303812  357750 node_ready.go:57] node "addons-589824" has "Ready":"False" status (will retry)
	I1027 18:57:28.326548  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:28.393454  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:28.393704  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:28.561545  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:28.826479  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:28.893365  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:28.893575  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:29.061703  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:29.326771  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:29.393653  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:29.393905  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:29.561389  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:29.827004  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:29.892783  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:29.892856  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:30.060518  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:30.279801  357750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 18:57:30.326039  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:30.392721  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:30.392887  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:30.560856  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1027 18:57:30.803556  357750 node_ready.go:57] node "addons-589824" has "Ready":"False" status (will retry)
	I1027 18:57:30.826615  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1027 18:57:30.844100  357750 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:30.844150  357750 retry.go:31] will retry after 8.393736916s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:30.893225  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:30.893271  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:31.061047  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:31.326257  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:31.393307  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:31.393374  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:31.561284  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:31.827160  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:31.893436  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:31.893493  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:32.061229  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:32.326355  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:32.393316  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:32.393391  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:32.561274  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:32.826972  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:32.893188  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:32.893374  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:33.061419  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1027 18:57:33.303322  357750 node_ready.go:57] node "addons-589824" has "Ready":"False" status (will retry)
	I1027 18:57:33.326191  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:33.392934  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:33.393147  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:33.560930  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:33.826182  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:33.893115  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:33.893226  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:34.061254  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:34.326189  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:34.393082  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:34.393392  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:34.561353  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:34.827061  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:34.892955  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:34.893085  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:35.061100  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1027 18:57:35.304240  357750 node_ready.go:57] node "addons-589824" has "Ready":"False" status (will retry)
	I1027 18:57:35.326369  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:35.393356  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:35.393426  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:35.562231  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:35.826644  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:35.893797  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:35.894043  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:36.060795  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:36.326929  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:36.392898  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:36.393144  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:36.561235  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:36.826022  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:36.893061  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:36.893128  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:37.061055  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:37.326853  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:37.393983  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:37.394046  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:37.561462  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1027 18:57:37.803225  357750 node_ready.go:57] node "addons-589824" has "Ready":"False" status (will retry)
	I1027 18:57:37.826737  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:37.893658  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:37.893810  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:38.061150  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:38.326997  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:38.392750  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:38.392796  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:38.560708  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:38.826707  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:38.894646  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:38.894780  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:39.061504  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:39.238827  357750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 18:57:39.326452  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:39.393669  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:39.393882  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:39.560996  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1027 18:57:39.807425  357750 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:39.807473  357750 retry.go:31] will retry after 9.722408552s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:39.826295  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:39.893344  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:39.893480  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:40.061449  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1027 18:57:40.303453  357750 node_ready.go:57] node "addons-589824" has "Ready":"False" status (will retry)
	I1027 18:57:40.326300  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:40.393478  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:40.393741  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:40.561811  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:40.826790  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:40.893346  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:40.893388  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:41.061562  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:41.326553  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:41.393796  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:41.394004  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:41.560905  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:41.826556  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:41.893471  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:41.893629  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:42.061579  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1027 18:57:42.303797  357750 node_ready.go:57] node "addons-589824" has "Ready":"False" status (will retry)
	I1027 18:57:42.327007  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:42.392881  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:42.393023  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:42.561003  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:42.826815  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:42.894128  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:42.894269  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:43.061503  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:43.326703  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:43.393565  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:43.393683  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:43.561479  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:43.826545  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:43.893460  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:43.893533  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:44.061609  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:44.326486  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:44.393716  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:44.393993  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:44.560700  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1027 18:57:44.803580  357750 node_ready.go:57] node "addons-589824" has "Ready":"False" status (will retry)
	I1027 18:57:44.826346  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:44.893672  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:44.894589  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:45.060806  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:45.326102  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:45.392980  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:45.393145  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:45.561882  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:45.826109  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:45.892939  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:45.893187  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:46.060900  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:46.326947  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:46.393060  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:46.393331  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:46.561441  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1027 18:57:46.803697  357750 node_ready.go:57] node "addons-589824" has "Ready":"False" status (will retry)
	I1027 18:57:46.826700  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:46.893855  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:46.893926  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:47.061110  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:47.326658  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:47.393687  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:47.393813  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:47.560709  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:47.827378  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:47.893454  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:47.893637  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:48.060810  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:48.326715  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:48.394046  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:48.394252  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:48.561244  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:48.826349  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:48.893295  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:48.893360  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:49.061046  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1027 18:57:49.303198  357750 node_ready.go:57] node "addons-589824" has "Ready":"False" status (will retry)
	I1027 18:57:49.325946  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:49.393199  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:49.393343  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:49.530578  357750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 18:57:49.560948  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:49.827612  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:49.893879  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:49.893948  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:50.061420  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1027 18:57:50.094428  357750 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:50.094465  357750 retry.go:31] will retry after 8.260223514s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:50.326537  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:50.393505  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:50.393555  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:50.561501  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:50.826875  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:50.892896  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:50.893039  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:51.061225  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1027 18:57:51.303343  357750 node_ready.go:57] node "addons-589824" has "Ready":"False" status (will retry)
	I1027 18:57:51.326318  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:51.393233  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:51.393403  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:51.561550  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:51.826157  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:51.893017  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:51.893223  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:52.061032  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:52.326127  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:52.392908  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:52.393023  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:52.561376  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:52.825974  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:52.893076  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:52.893243  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:53.061431  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:53.303329  357750 node_ready.go:49] node "addons-589824" is "Ready"
	I1027 18:57:53.303372  357750 node_ready.go:38] duration metric: took 40.503152177s for node "addons-589824" to be "Ready" ...
	I1027 18:57:53.303396  357750 api_server.go:52] waiting for apiserver process to appear ...
	I1027 18:57:53.303472  357750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 18:57:53.319020  357750 api_server.go:72] duration metric: took 41.085489885s to wait for apiserver process to appear ...
	I1027 18:57:53.319050  357750 api_server.go:88] waiting for apiserver healthz status ...
	I1027 18:57:53.319082  357750 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1027 18:57:53.325074  357750 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
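	The healthz probe logged above can be reproduced by hand against the same endpoint; a minimal sketch, assuming the kubeconfig for this profile is active:

	# Query the apiserver health endpoint through kubectl's authenticated client:
	kubectl get --raw='/healthz'
	# Or hit the endpoint from the log directly (-k skips TLS verification,
	# acceptable only for a quick local check):
	curl -k https://192.168.49.2:8443/healthz

	Both print "ok" when the control plane is healthy, matching the 200 response recorded here.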
	I1027 18:57:53.326191  357750 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1027 18:57:53.326211  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:53.326278  357750 api_server.go:141] control plane version: v1.34.1
	I1027 18:57:53.326307  357750 api_server.go:131] duration metric: took 7.249289ms to wait for apiserver health ...
	I1027 18:57:53.326322  357750 system_pods.go:43] waiting for kube-system pods to appear ...
	I1027 18:57:53.331066  357750 system_pods.go:59] 20 kube-system pods found
	I1027 18:57:53.331107  357750 system_pods.go:61] "amd-gpu-device-plugin-6nrwh" [5a9374bd-7f34-436b-aed2-97c869cd1032] Pending
	I1027 18:57:53.331121  357750 system_pods.go:61] "coredns-66bc5c9577-lz5j4" [fe4fbd50-09cd-482f-b62e-9b5926b57e54] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 18:57:53.331145  357750 system_pods.go:61] "csi-hostpath-attacher-0" [534becd1-bea4-43a8-8269-447c5ea9deb6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1027 18:57:53.331154  357750 system_pods.go:61] "csi-hostpath-resizer-0" [38610a93-addc-4526-b959-7aa8963d68e7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1027 18:57:53.331160  357750 system_pods.go:61] "csi-hostpathplugin-jlszq" [3c831b0a-7336-491d-9c07-f8fb8692e0bf] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1027 18:57:53.331168  357750 system_pods.go:61] "etcd-addons-589824" [60f2cd63-082d-4ce7-9c01-6d7f6be68d2d] Running
	I1027 18:57:53.331173  357750 system_pods.go:61] "kindnet-4rz7d" [6c4e893b-3105-4baa-a073-e2364d1724cb] Running
	I1027 18:57:53.331176  357750 system_pods.go:61] "kube-apiserver-addons-589824" [9637af46-c973-4a7e-ad3d-7d9685db10fd] Running
	I1027 18:57:53.331180  357750 system_pods.go:61] "kube-controller-manager-addons-589824" [0339aca8-8d04-47ae-8947-9f8e7d261bc3] Running
	I1027 18:57:53.331189  357750 system_pods.go:61] "kube-ingress-dns-minikube" [fb9d7bfe-33a0-427f-a31b-c37973e40580] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1027 18:57:53.331192  357750 system_pods.go:61] "kube-proxy-77bv8" [8cdca916-4b76-4778-9aca-fd1e93ae4ed3] Running
	I1027 18:57:53.331196  357750 system_pods.go:61] "kube-scheduler-addons-589824" [2812a900-927e-4bed-9f4c-5f69d59f14b2] Running
	I1027 18:57:53.331201  357750 system_pods.go:61] "metrics-server-85b7d694d7-6mqmx" [1a22ca13-4aaa-4ac6-b5ad-df2b9ba87dfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1027 18:57:53.331210  357750 system_pods.go:61] "nvidia-device-plugin-daemonset-5m5rl" [911fc5e9-aa0b-494e-8eff-0c513d2b6625] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1027 18:57:53.331218  357750 system_pods.go:61] "registry-6b586f9694-bvh6h" [3922e9b1-ef70-4fce-b650-f88d2755f9ab] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1027 18:57:53.331224  357750 system_pods.go:61] "registry-creds-764b6fb674-bmdlm" [a18b1d31-61dd-4c8e-864d-c77043f43d5c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1027 18:57:53.331236  357750 system_pods.go:61] "registry-proxy-62t66" [05d41077-cfc6-442d-baee-0103823e1b16] Pending
	I1027 18:57:53.331240  357750 system_pods.go:61] "snapshot-controller-7d9fbc56b8-jx9vc" [05d8492e-9dd2-485b-a457-dc9625bb6a31] Pending
	I1027 18:57:53.331245  357750 system_pods.go:61] "snapshot-controller-7d9fbc56b8-m2794" [fc10956f-4e9a-4732-aacf-d844aab7d64a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1027 18:57:53.331249  357750 system_pods.go:61] "storage-provisioner" [b33a6bd4-fbbc-4726-a6e9-0a5a03e9f7ad] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 18:57:53.331256  357750 system_pods.go:74] duration metric: took 4.922725ms to wait for pod list to return data ...
	I1027 18:57:53.331267  357750 default_sa.go:34] waiting for default service account to be created ...
	I1027 18:57:53.333529  357750 default_sa.go:45] found service account: "default"
	I1027 18:57:53.333552  357750 default_sa.go:55] duration metric: took 2.279416ms for default service account to be created ...
	I1027 18:57:53.333562  357750 system_pods.go:116] waiting for k8s-apps to be running ...
	I1027 18:57:53.336887  357750 system_pods.go:86] 20 kube-system pods found
	I1027 18:57:53.336915  357750 system_pods.go:89] "amd-gpu-device-plugin-6nrwh" [5a9374bd-7f34-436b-aed2-97c869cd1032] Pending
	I1027 18:57:53.336923  357750 system_pods.go:89] "coredns-66bc5c9577-lz5j4" [fe4fbd50-09cd-482f-b62e-9b5926b57e54] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 18:57:53.336928  357750 system_pods.go:89] "csi-hostpath-attacher-0" [534becd1-bea4-43a8-8269-447c5ea9deb6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1027 18:57:53.336935  357750 system_pods.go:89] "csi-hostpath-resizer-0" [38610a93-addc-4526-b959-7aa8963d68e7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1027 18:57:53.336943  357750 system_pods.go:89] "csi-hostpathplugin-jlszq" [3c831b0a-7336-491d-9c07-f8fb8692e0bf] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1027 18:57:53.336947  357750 system_pods.go:89] "etcd-addons-589824" [60f2cd63-082d-4ce7-9c01-6d7f6be68d2d] Running
	I1027 18:57:53.336951  357750 system_pods.go:89] "kindnet-4rz7d" [6c4e893b-3105-4baa-a073-e2364d1724cb] Running
	I1027 18:57:53.336955  357750 system_pods.go:89] "kube-apiserver-addons-589824" [9637af46-c973-4a7e-ad3d-7d9685db10fd] Running
	I1027 18:57:53.336958  357750 system_pods.go:89] "kube-controller-manager-addons-589824" [0339aca8-8d04-47ae-8947-9f8e7d261bc3] Running
	I1027 18:57:53.336963  357750 system_pods.go:89] "kube-ingress-dns-minikube" [fb9d7bfe-33a0-427f-a31b-c37973e40580] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1027 18:57:53.336973  357750 system_pods.go:89] "kube-proxy-77bv8" [8cdca916-4b76-4778-9aca-fd1e93ae4ed3] Running
	I1027 18:57:53.336978  357750 system_pods.go:89] "kube-scheduler-addons-589824" [2812a900-927e-4bed-9f4c-5f69d59f14b2] Running
	I1027 18:57:53.336982  357750 system_pods.go:89] "metrics-server-85b7d694d7-6mqmx" [1a22ca13-4aaa-4ac6-b5ad-df2b9ba87dfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1027 18:57:53.336990  357750 system_pods.go:89] "nvidia-device-plugin-daemonset-5m5rl" [911fc5e9-aa0b-494e-8eff-0c513d2b6625] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1027 18:57:53.336997  357750 system_pods.go:89] "registry-6b586f9694-bvh6h" [3922e9b1-ef70-4fce-b650-f88d2755f9ab] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1027 18:57:53.337007  357750 system_pods.go:89] "registry-creds-764b6fb674-bmdlm" [a18b1d31-61dd-4c8e-864d-c77043f43d5c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1027 18:57:53.337011  357750 system_pods.go:89] "registry-proxy-62t66" [05d41077-cfc6-442d-baee-0103823e1b16] Pending
	I1027 18:57:53.337017  357750 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jx9vc" [05d8492e-9dd2-485b-a457-dc9625bb6a31] Pending
	I1027 18:57:53.337022  357750 system_pods.go:89] "snapshot-controller-7d9fbc56b8-m2794" [fc10956f-4e9a-4732-aacf-d844aab7d64a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1027 18:57:53.337028  357750 system_pods.go:89] "storage-provisioner" [b33a6bd4-fbbc-4726-a6e9-0a5a03e9f7ad] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 18:57:53.337043  357750 retry.go:31] will retry after 277.164995ms: missing components: kube-dns
	I1027 18:57:53.393789  357750 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1027 18:57:53.393816  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:53.393838  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:53.564797  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:53.665287  357750 system_pods.go:86] 20 kube-system pods found
	I1027 18:57:53.665331  357750 system_pods.go:89] "amd-gpu-device-plugin-6nrwh" [5a9374bd-7f34-436b-aed2-97c869cd1032] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1027 18:57:53.665342  357750 system_pods.go:89] "coredns-66bc5c9577-lz5j4" [fe4fbd50-09cd-482f-b62e-9b5926b57e54] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 18:57:53.665353  357750 system_pods.go:89] "csi-hostpath-attacher-0" [534becd1-bea4-43a8-8269-447c5ea9deb6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1027 18:57:53.665364  357750 system_pods.go:89] "csi-hostpath-resizer-0" [38610a93-addc-4526-b959-7aa8963d68e7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1027 18:57:53.665384  357750 system_pods.go:89] "csi-hostpathplugin-jlszq" [3c831b0a-7336-491d-9c07-f8fb8692e0bf] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1027 18:57:53.665391  357750 system_pods.go:89] "etcd-addons-589824" [60f2cd63-082d-4ce7-9c01-6d7f6be68d2d] Running
	I1027 18:57:53.665397  357750 system_pods.go:89] "kindnet-4rz7d" [6c4e893b-3105-4baa-a073-e2364d1724cb] Running
	I1027 18:57:53.665402  357750 system_pods.go:89] "kube-apiserver-addons-589824" [9637af46-c973-4a7e-ad3d-7d9685db10fd] Running
	I1027 18:57:53.665408  357750 system_pods.go:89] "kube-controller-manager-addons-589824" [0339aca8-8d04-47ae-8947-9f8e7d261bc3] Running
	I1027 18:57:53.665416  357750 system_pods.go:89] "kube-ingress-dns-minikube" [fb9d7bfe-33a0-427f-a31b-c37973e40580] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1027 18:57:53.665420  357750 system_pods.go:89] "kube-proxy-77bv8" [8cdca916-4b76-4778-9aca-fd1e93ae4ed3] Running
	I1027 18:57:53.665427  357750 system_pods.go:89] "kube-scheduler-addons-589824" [2812a900-927e-4bed-9f4c-5f69d59f14b2] Running
	I1027 18:57:53.665445  357750 system_pods.go:89] "metrics-server-85b7d694d7-6mqmx" [1a22ca13-4aaa-4ac6-b5ad-df2b9ba87dfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1027 18:57:53.665457  357750 system_pods.go:89] "nvidia-device-plugin-daemonset-5m5rl" [911fc5e9-aa0b-494e-8eff-0c513d2b6625] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1027 18:57:53.665474  357750 system_pods.go:89] "registry-6b586f9694-bvh6h" [3922e9b1-ef70-4fce-b650-f88d2755f9ab] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1027 18:57:53.665482  357750 system_pods.go:89] "registry-creds-764b6fb674-bmdlm" [a18b1d31-61dd-4c8e-864d-c77043f43d5c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1027 18:57:53.665490  357750 system_pods.go:89] "registry-proxy-62t66" [05d41077-cfc6-442d-baee-0103823e1b16] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1027 18:57:53.665500  357750 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jx9vc" [05d8492e-9dd2-485b-a457-dc9625bb6a31] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1027 18:57:53.665509  357750 system_pods.go:89] "snapshot-controller-7d9fbc56b8-m2794" [fc10956f-4e9a-4732-aacf-d844aab7d64a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1027 18:57:53.665517  357750 system_pods.go:89] "storage-provisioner" [b33a6bd4-fbbc-4726-a6e9-0a5a03e9f7ad] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 18:57:53.665545  357750 retry.go:31] will retry after 352.458417ms: missing components: kube-dns
	I1027 18:57:53.827376  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:53.927472  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:53.927509  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:54.022625  357750 system_pods.go:86] 20 kube-system pods found
	I1027 18:57:54.022660  357750 system_pods.go:89] "amd-gpu-device-plugin-6nrwh" [5a9374bd-7f34-436b-aed2-97c869cd1032] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1027 18:57:54.022666  357750 system_pods.go:89] "coredns-66bc5c9577-lz5j4" [fe4fbd50-09cd-482f-b62e-9b5926b57e54] Running
	I1027 18:57:54.022674  357750 system_pods.go:89] "csi-hostpath-attacher-0" [534becd1-bea4-43a8-8269-447c5ea9deb6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1027 18:57:54.022679  357750 system_pods.go:89] "csi-hostpath-resizer-0" [38610a93-addc-4526-b959-7aa8963d68e7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1027 18:57:54.022685  357750 system_pods.go:89] "csi-hostpathplugin-jlszq" [3c831b0a-7336-491d-9c07-f8fb8692e0bf] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1027 18:57:54.022689  357750 system_pods.go:89] "etcd-addons-589824" [60f2cd63-082d-4ce7-9c01-6d7f6be68d2d] Running
	I1027 18:57:54.022695  357750 system_pods.go:89] "kindnet-4rz7d" [6c4e893b-3105-4baa-a073-e2364d1724cb] Running
	I1027 18:57:54.022699  357750 system_pods.go:89] "kube-apiserver-addons-589824" [9637af46-c973-4a7e-ad3d-7d9685db10fd] Running
	I1027 18:57:54.022704  357750 system_pods.go:89] "kube-controller-manager-addons-589824" [0339aca8-8d04-47ae-8947-9f8e7d261bc3] Running
	I1027 18:57:54.022710  357750 system_pods.go:89] "kube-ingress-dns-minikube" [fb9d7bfe-33a0-427f-a31b-c37973e40580] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1027 18:57:54.022713  357750 system_pods.go:89] "kube-proxy-77bv8" [8cdca916-4b76-4778-9aca-fd1e93ae4ed3] Running
	I1027 18:57:54.022717  357750 system_pods.go:89] "kube-scheduler-addons-589824" [2812a900-927e-4bed-9f4c-5f69d59f14b2] Running
	I1027 18:57:54.022721  357750 system_pods.go:89] "metrics-server-85b7d694d7-6mqmx" [1a22ca13-4aaa-4ac6-b5ad-df2b9ba87dfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1027 18:57:54.022728  357750 system_pods.go:89] "nvidia-device-plugin-daemonset-5m5rl" [911fc5e9-aa0b-494e-8eff-0c513d2b6625] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1027 18:57:54.022735  357750 system_pods.go:89] "registry-6b586f9694-bvh6h" [3922e9b1-ef70-4fce-b650-f88d2755f9ab] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1027 18:57:54.022740  357750 system_pods.go:89] "registry-creds-764b6fb674-bmdlm" [a18b1d31-61dd-4c8e-864d-c77043f43d5c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1027 18:57:54.022748  357750 system_pods.go:89] "registry-proxy-62t66" [05d41077-cfc6-442d-baee-0103823e1b16] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1027 18:57:54.022757  357750 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jx9vc" [05d8492e-9dd2-485b-a457-dc9625bb6a31] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1027 18:57:54.022762  357750 system_pods.go:89] "snapshot-controller-7d9fbc56b8-m2794" [fc10956f-4e9a-4732-aacf-d844aab7d64a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1027 18:57:54.022766  357750 system_pods.go:89] "storage-provisioner" [b33a6bd4-fbbc-4726-a6e9-0a5a03e9f7ad] Running
	I1027 18:57:54.022775  357750 system_pods.go:126] duration metric: took 689.206974ms to wait for k8s-apps to be running ...
	I1027 18:57:54.022786  357750 system_svc.go:44] waiting for kubelet service to be running ....
	I1027 18:57:54.022835  357750 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 18:57:54.037575  357750 system_svc.go:56] duration metric: took 14.777169ms WaitForService to wait for kubelet
	I1027 18:57:54.037605  357750 kubeadm.go:586] duration metric: took 41.804080273s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 18:57:54.037622  357750 node_conditions.go:102] verifying NodePressure condition ...
	I1027 18:57:54.040731  357750 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1027 18:57:54.040756  357750 node_conditions.go:123] node cpu capacity is 8
	I1027 18:57:54.040770  357750 node_conditions.go:105] duration metric: took 3.142389ms to run NodePressure ...
	I1027 18:57:54.040782  357750 start.go:241] waiting for startup goroutines ...
	I1027 18:57:54.061523  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:54.326511  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:54.393951  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:54.394529  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:54.562824  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:54.828011  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:54.894087  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:54.894290  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:55.061964  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:55.327848  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:55.394629  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:55.394661  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:55.562713  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:55.827930  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:55.894370  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:55.894415  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:56.061593  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:56.327208  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:56.393669  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:56.393695  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:56.561772  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:56.827707  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:56.893877  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:56.893912  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:57.061841  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:57.327645  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:57.393700  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:57.394279  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:57.561555  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:57.827874  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:57.894264  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:57.894295  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:58.062644  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:58.327716  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:58.355596  357750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 18:57:58.393350  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:58.393459  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:58.561366  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:58.827849  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:58.894049  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:58.894777  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1027 18:57:59.034377  357750 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:59.034418  357750 retry.go:31] will retry after 25.886247674s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
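
The validation failure above is the root cause of the inspektor-gadget trouble later in this run: at least one YAML document inside ig-crd.yaml carries content but no apiVersion or kind field (a stray "---" separator with trailing text has the same effect), so kubectl rejects the apply even though every object in ig-deployment.yaml lands cleanly ("unchanged"/"configured" in stdout). A minimal Go sketch of the check kubectl is performing; the file path and the choice of sigs.k8s.io/yaml are illustrative, not minikube's or kubectl's actual code:

package main

import (
	"fmt"
	"os"
	"strings"

	"sigs.k8s.io/yaml"
)

// typeMeta mirrors the two fields kubectl's validation complains about.
type typeMeta struct {
	APIVersion string `json:"apiVersion"`
	Kind       string `json:"kind"`
}

func main() {
	raw, err := os.ReadFile("ig-crd.yaml") // illustrative path
	if err != nil {
		panic(err)
	}
	// kubectl splits multi-document manifests on "---" separators.
	for i, doc := range strings.Split(string(raw), "\n---") {
		if strings.TrimSpace(doc) == "" {
			continue // whitespace-only documents are ignored
		}
		var tm typeMeta
		if err := yaml.Unmarshal([]byte(doc), &tm); err != nil {
			fmt.Printf("document %d: parse error: %v\n", i, err)
			continue
		}
		if tm.APIVersion == "" || tm.Kind == "" {
			// The condition behind "[apiVersion not set, kind not set]".
			fmt.Printf("document %d: apiVersion or kind not set\n", i)
		}
	}
}

The --validate=false escape hatch the error message mentions would likely only move the failure server-side, since an object without a kind cannot be submitted to the API anyway; the fix is in the manifest, not the flag.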
	I1027 18:57:59.062004  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:59.327258  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:59.394058  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:59.395723  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:59.562770  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:59.827849  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:59.893988  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:59.894430  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:00.061769  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:00.327527  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:00.394077  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:00.394185  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:00.561702  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:00.827126  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:00.893687  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:00.893720  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:01.110300  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:01.327058  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:01.427216  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:01.427324  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:01.561216  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:01.826895  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:01.893876  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:01.894023  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:02.060887  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:02.326627  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:02.394329  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:02.394453  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:02.561681  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:02.827570  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:02.893462  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:02.893550  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:03.061901  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:03.327485  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:03.428457  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:03.428483  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:03.561375  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:03.827303  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:03.893459  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:03.893646  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:04.061603  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:04.327734  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:04.393954  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:04.394009  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:04.561635  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:04.827335  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:04.893364  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:04.893478  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:05.061610  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:05.327350  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:05.394183  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:05.394598  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:05.561900  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:05.829247  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:05.893272  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:05.893302  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:06.061803  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:06.327999  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:06.394268  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:06.394274  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:06.561641  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:06.862940  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:07.017039  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:07.017193  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:07.144404  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:07.350090  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:07.393262  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:07.393697  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:07.562802  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:07.827652  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:07.894427  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:07.894474  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:08.063531  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:08.327269  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:08.393200  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:08.393271  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:08.561225  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:08.826569  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:08.893504  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:08.893585  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:09.062347  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:09.327439  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:09.394281  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:09.394608  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:09.562376  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:09.827162  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:09.893356  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:09.893616  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:10.061252  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:10.326716  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:10.393947  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:10.394102  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:10.561294  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:10.900254  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:10.900278  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:10.900453  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:11.061790  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:11.327482  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:11.428581  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:11.428614  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:11.561556  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:11.827856  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:11.894177  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:11.894284  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:12.061717  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:12.328187  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:12.392898  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:12.393044  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:12.560794  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:12.827940  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:12.893731  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:12.893887  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:13.061605  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:13.328879  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:13.396257  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:13.397328  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:13.562112  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:13.828817  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:13.894968  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:13.896332  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:14.062439  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:14.327173  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:14.393781  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:14.394261  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:14.561955  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:14.827534  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:14.894316  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:14.894597  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:15.062060  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:15.326747  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:15.393953  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:15.394175  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:15.562165  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:15.827276  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:15.894219  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:15.894296  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:16.061366  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:16.326997  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:16.393416  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:16.393595  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:16.561817  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:16.826914  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:16.894158  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:16.894276  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:17.061699  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:17.327719  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:17.394604  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:17.395202  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:17.561384  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:17.827630  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:17.893839  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:17.893861  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:18.061685  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:18.327693  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:18.393810  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:18.393828  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:18.561152  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:18.826383  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:18.893663  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:18.893709  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:19.061877  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:19.327523  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:19.393460  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:19.393459  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:19.562444  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:19.828795  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:19.893478  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:19.893593  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:20.062107  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:20.327420  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:20.393109  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:20.393172  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:20.561598  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:20.827734  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:20.894085  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:20.894128  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:21.061478  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:21.326806  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:21.394444  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:21.394497  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:21.561762  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:21.827743  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:21.893566  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:21.893622  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:22.061629  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:22.327676  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:22.394205  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:22.394336  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:22.561388  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:22.827248  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:22.893992  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:22.894067  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:23.061608  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:23.327681  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:23.428760  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:23.428856  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:23.560698  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:23.827792  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:23.928241  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:23.928440  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:24.062216  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:24.326597  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:24.393663  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:24.393831  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:24.561724  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:24.827349  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:24.921032  357750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 18:58:24.927917  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:24.928098  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:25.061328  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:25.329831  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:25.394301  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:25.395021  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:25.561105  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1027 18:58:25.637380  357750 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:58:25.637422  357750 retry.go:31] will retry after 32.598528911s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
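
The two retry.go waits above (25.89s, then 32.60s) are non-round and growing, which is consistent with an exponential backoff with random jitter. A minimal sketch of that pattern, with invented names and a millisecond base so it runs quickly; minikube's actual implementation may differ:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff retries fn, sleeping between attempts. The delay grows
// geometrically and carries up to 50% random jitter, which is why the
// logged waits are irregular values rather than round numbers.
func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		delay := base << uint(i)                              // 1x, 2x, 4x, ...
		delay += time.Duration(rand.Int63n(int64(delay) / 2)) // jitter
		fmt.Printf("will retry after %s: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	_ = retryWithBackoff(3, 20*time.Millisecond, func() error {
		return errors.New("apply failed") // stand-in for the kubectl call
	})
}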
	I1027 18:58:25.827007  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:25.928241  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:25.928257  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:26.061032  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:26.326549  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:26.393518  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:26.393576  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:26.562498  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:26.827861  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:26.895439  357750 kapi.go:107] duration metric: took 1m13.005603908s to wait for kubernetes.io/minikube-addons=registry ...
	I1027 18:58:26.895557  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:27.063452  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:27.328323  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:27.393865  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:27.561098  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:27.826838  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:27.893595  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:28.061732  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:28.326965  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:28.393030  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:28.561609  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:28.827481  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:28.893676  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:29.062386  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:29.326999  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:29.393230  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:29.561952  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:29.827119  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:29.893577  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:30.076698  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:30.327180  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:30.393391  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:30.561569  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:30.827361  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:30.893321  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:31.061418  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:31.327524  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:31.394605  357750 kapi.go:107] duration metric: took 1m17.504813395s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1027 18:58:31.561758  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:31.827696  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:32.061491  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:32.328391  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:32.561642  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:32.827598  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:33.061283  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:33.326862  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:33.561326  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:33.827204  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:34.062093  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:34.326575  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:34.561991  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:34.827165  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:35.061330  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:35.327059  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:35.561430  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:35.827436  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:36.063191  357750 kapi.go:107] duration metric: took 1m15.505512885s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1027 18:58:36.064991  357750 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-589824 cluster.
	I1027 18:58:36.066542  357750 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1027 18:58:36.068014  357750 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1027 18:58:36.327008  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:36.826491  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:37.327757  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:37.828484  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:38.328004  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:38.827672  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:39.327684  357750 kapi.go:107] duration metric: took 1m25.004692059s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
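
Each kapi.go:96 line in this stretch is one iteration of a roughly 500ms label-selector poll (visible in the timestamp spacing), and kapi.go:107 reports the elapsed total once every matching pod is Running. A client-go sketch of that loop; the function name, interval, and fake-clientset wiring are illustrative assumptions, not minikube's code:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/fake"
)

// waitForPods polls pods matching selector until all are Running,
// mirroring the "waiting for pod ... current state" lines above.
func waitForPods(ctx context.Context, cs kubernetes.Interface, ns, selector string) (time.Duration, error) {
	start := time.Now()
	for {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return 0, err
		}
		running := len(pods.Items) > 0
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				running = false
				fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
			}
		}
		if running {
			return time.Since(start), nil // the "duration metric: took ..." line
		}
		select {
		case <-ctx.Done():
			return 0, ctx.Err()
		case <-time.After(500 * time.Millisecond):
		}
	}
}

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: "registry-xxxxx", Namespace: "kube-system",
			Labels: map[string]string{"kubernetes.io/minikube-addons": "registry"},
		},
		Status: corev1.PodStatus{Phase: corev1.PodRunning},
	}
	took, err := waitForPods(context.Background(), fake.NewSimpleClientset(pod),
		"kube-system", "kubernetes.io/minikube-addons=registry")
	fmt.Println(took, err)
}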
	I1027 18:58:58.236834  357750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1027 18:58:58.798080  357750 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1027 18:58:58.798248  357750 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1027 18:58:58.800851  357750 out.go:179] * Enabled addons: storage-provisioner, nvidia-device-plugin, registry-creds, amd-gpu-device-plugin, ingress-dns, metrics-server, cloud-spanner, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1027 18:58:58.802363  357750 addons.go:514] duration metric: took 1m46.56875005s for enable addons: enabled=[storage-provisioner nvidia-device-plugin registry-creds amd-gpu-device-plugin ingress-dns metrics-server cloud-spanner yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1027 18:58:58.802415  357750 start.go:246] waiting for cluster config update ...
	I1027 18:58:58.802450  357750 start.go:255] writing updated cluster config ...
	I1027 18:58:58.802809  357750 ssh_runner.go:195] Run: rm -f paused
	I1027 18:58:58.807317  357750 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 18:58:58.811086  357750 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-lz5j4" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 18:58:58.817707  357750 pod_ready.go:94] pod "coredns-66bc5c9577-lz5j4" is "Ready"
	I1027 18:58:58.817732  357750 pod_ready.go:86] duration metric: took 6.618901ms for pod "coredns-66bc5c9577-lz5j4" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 18:58:58.819708  357750 pod_ready.go:83] waiting for pod "etcd-addons-589824" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 18:58:58.823737  357750 pod_ready.go:94] pod "etcd-addons-589824" is "Ready"
	I1027 18:58:58.823775  357750 pod_ready.go:86] duration metric: took 4.040563ms for pod "etcd-addons-589824" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 18:58:58.825811  357750 pod_ready.go:83] waiting for pod "kube-apiserver-addons-589824" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 18:58:58.829755  357750 pod_ready.go:94] pod "kube-apiserver-addons-589824" is "Ready"
	I1027 18:58:58.829783  357750 pod_ready.go:86] duration metric: took 3.94738ms for pod "kube-apiserver-addons-589824" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 18:58:58.831775  357750 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-589824" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 18:58:59.211764  357750 pod_ready.go:94] pod "kube-controller-manager-addons-589824" is "Ready"
	I1027 18:58:59.211797  357750 pod_ready.go:86] duration metric: took 379.998654ms for pod "kube-controller-manager-addons-589824" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 18:58:59.411325  357750 pod_ready.go:83] waiting for pod "kube-proxy-77bv8" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 18:58:59.812397  357750 pod_ready.go:94] pod "kube-proxy-77bv8" is "Ready"
	I1027 18:58:59.812432  357750 pod_ready.go:86] duration metric: took 401.078542ms for pod "kube-proxy-77bv8" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 18:59:00.012528  357750 pod_ready.go:83] waiting for pod "kube-scheduler-addons-589824" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 18:59:00.412152  357750 pod_ready.go:94] pod "kube-scheduler-addons-589824" is "Ready"
	I1027 18:59:00.412190  357750 pod_ready.go:86] duration metric: took 399.633217ms for pod "kube-scheduler-addons-589824" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 18:59:00.412209  357750 pod_ready.go:40] duration metric: took 1.604854944s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
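
Note that these pod_ready.go checks are stricter than the phase polls earlier in the log: a pod reported as "Ready" has its PodReady status condition set to True, not merely phase Running. A sketch of that condition check, as an assumed approximation of what pod_ready.go:94 logs:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isPodReady reports whether the PodReady condition is True, the test
// behind the `pod "..." is "Ready"` lines above.
func isPodReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	p := &corev1.Pod{Status: corev1.PodStatus{
		Conditions: []corev1.PodCondition{{Type: corev1.PodReady, Status: corev1.ConditionTrue}},
	}}
	fmt.Println(isPodReady(p)) // true
}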
	I1027 18:59:00.459189  357750 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1027 18:59:00.461488  357750 out.go:179] * Done! kubectl is now configured to use "addons-589824" cluster and "default" namespace by default
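
The "minor skew: 0" in start.go:624 is the distance between the kubectl client's minor version and the cluster's; kubectl officially supports a skew of at most one minor version in either direction. A toy sketch of that computation with deliberately simplified parsing (real code would use a semver library):

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorSkew returns the absolute difference between the minor components
// of two "major.minor.patch" version strings.
func minorSkew(client, server string) int {
	minor := func(v string) int {
		n, _ := strconv.Atoi(strings.Split(v, ".")[1])
		return n
	}
	d := minor(client) - minor(server)
	if d < 0 {
		d = -d
	}
	return d
}

func main() {
	fmt.Printf("kubectl: 1.34.1, cluster: 1.34.1 (minor skew: %d)\n",
		minorSkew("1.34.1", "1.34.1"))
}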
	
	
	==> CRI-O <==
	Oct 27 19:00:08 addons-589824 crio[774]: time="2025-10-27T19:00:08.589332781Z" level=info msg="Pulling image: docker.io/upmcenterprises/registry-creds:1.10@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605" id=63c3e0e0-f9df-4c49-b89d-659aef7db9d6 name=/runtime.v1.ImageService/PullImage
	Oct 27 19:00:08 addons-589824 crio[774]: time="2025-10-27T19:00:08.591035996Z" level=info msg="Trying to access \"docker.io/upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605\""
	Oct 27 19:00:10 addons-589824 crio[774]: time="2025-10-27T19:00:10.090629688Z" level=info msg="Pulled image: docker.io/upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605" id=63c3e0e0-f9df-4c49-b89d-659aef7db9d6 name=/runtime.v1.ImageService/PullImage
	Oct 27 19:00:10 addons-589824 crio[774]: time="2025-10-27T19:00:10.091223528Z" level=info msg="Checking image status: docker.io/upmcenterprises/registry-creds:1.10@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605" id=4cace401-f6dd-4fd9-9287-b1b6ce8eb5af name=/runtime.v1.ImageService/ImageStatus
	Oct 27 19:00:10 addons-589824 crio[774]: time="2025-10-27T19:00:10.125392029Z" level=info msg="Checking image status: docker.io/upmcenterprises/registry-creds:1.10@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605" id=3de5fb69-4933-46b2-88bf-9875d696fbc7 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 19:00:10 addons-589824 crio[774]: time="2025-10-27T19:00:10.129650113Z" level=info msg="Creating container: kube-system/registry-creds-764b6fb674-bmdlm/registry-creds" id=d3ce499d-b4d2-4ae4-9daa-bfe222c28af4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 19:00:10 addons-589824 crio[774]: time="2025-10-27T19:00:10.129781126Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:00:10 addons-589824 crio[774]: time="2025-10-27T19:00:10.135620625Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:00:10 addons-589824 crio[774]: time="2025-10-27T19:00:10.136130486Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:00:10 addons-589824 crio[774]: time="2025-10-27T19:00:10.16886616Z" level=info msg="Created container 464923ded63802256e17c8a60292e99ea88f070a83988965c20ffcea1a4c7455: kube-system/registry-creds-764b6fb674-bmdlm/registry-creds" id=d3ce499d-b4d2-4ae4-9daa-bfe222c28af4 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 19:00:10 addons-589824 crio[774]: time="2025-10-27T19:00:10.169632753Z" level=info msg="Starting container: 464923ded63802256e17c8a60292e99ea88f070a83988965c20ffcea1a4c7455" id=51708a69-27c3-40e0-b1eb-f2e522fc21e9 name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 19:00:10 addons-589824 crio[774]: time="2025-10-27T19:00:10.171732353Z" level=info msg="Started container" PID=9076 containerID=464923ded63802256e17c8a60292e99ea88f070a83988965c20ffcea1a4c7455 description=kube-system/registry-creds-764b6fb674-bmdlm/registry-creds id=51708a69-27c3-40e0-b1eb-f2e522fc21e9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e7078dbbd8d27e3e9b23665e7916d9af4d8d238f45cf2d5fb84ea0e2e971dd77
	Oct 27 19:01:35 addons-589824 crio[774]: time="2025-10-27T19:01:35.528478061Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-kz4mk/POD" id=4251f6ce-1793-4455-9a5c-b66e37c28514 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 27 19:01:35 addons-589824 crio[774]: time="2025-10-27T19:01:35.528603313Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:01:35 addons-589824 crio[774]: time="2025-10-27T19:01:35.535789034Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-kz4mk Namespace:default ID:d7c1eb5a4c6940418f2c44a0dd703ee43e926f9dd10273741ce21b35ac4e7f9f UID:a473b59d-9656-47e6-b6c9-cecfa591f489 NetNS:/var/run/netns/6ea15ccd-5f62-408a-af15-7af0fec427c5 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc002889bc8}] Aliases:map[]}"
	Oct 27 19:01:35 addons-589824 crio[774]: time="2025-10-27T19:01:35.535823157Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-kz4mk to CNI network \"kindnet\" (type=ptp)"
	Oct 27 19:01:35 addons-589824 crio[774]: time="2025-10-27T19:01:35.548164542Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-kz4mk Namespace:default ID:d7c1eb5a4c6940418f2c44a0dd703ee43e926f9dd10273741ce21b35ac4e7f9f UID:a473b59d-9656-47e6-b6c9-cecfa591f489 NetNS:/var/run/netns/6ea15ccd-5f62-408a-af15-7af0fec427c5 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc002889bc8}] Aliases:map[]}"
	Oct 27 19:01:35 addons-589824 crio[774]: time="2025-10-27T19:01:35.548352524Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-kz4mk for CNI network kindnet (type=ptp)"
	Oct 27 19:01:35 addons-589824 crio[774]: time="2025-10-27T19:01:35.549552129Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 27 19:01:35 addons-589824 crio[774]: time="2025-10-27T19:01:35.551085899Z" level=info msg="Ran pod sandbox d7c1eb5a4c6940418f2c44a0dd703ee43e926f9dd10273741ce21b35ac4e7f9f with infra container: default/hello-world-app-5d498dc89-kz4mk/POD" id=4251f6ce-1793-4455-9a5c-b66e37c28514 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 27 19:01:35 addons-589824 crio[774]: time="2025-10-27T19:01:35.552564936Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=222bd4d1-15bf-4dbc-ad0b-1cf188ed29ac name=/runtime.v1.ImageService/ImageStatus
	Oct 27 19:01:35 addons-589824 crio[774]: time="2025-10-27T19:01:35.552692754Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=222bd4d1-15bf-4dbc-ad0b-1cf188ed29ac name=/runtime.v1.ImageService/ImageStatus
	Oct 27 19:01:35 addons-589824 crio[774]: time="2025-10-27T19:01:35.552728569Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:1.0 found" id=222bd4d1-15bf-4dbc-ad0b-1cf188ed29ac name=/runtime.v1.ImageService/ImageStatus
	Oct 27 19:01:35 addons-589824 crio[774]: time="2025-10-27T19:01:35.553430245Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=071dadcc-cf2d-4e4b-9ed2-b88fa9de4013 name=/runtime.v1.ImageService/PullImage
	Oct 27 19:01:35 addons-589824 crio[774]: time="2025-10-27T19:01:35.558763168Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
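
The CRI-O entries above trace the standard CRI image lifecycle: ImageStatus to see whether the image is already present, PullImage when it is not, then CreateContainer and StartContainer. A minimal sketch of the first two calls over the CRI v1 gRPC API; the socket path and image reference are taken from the log, but the wiring itself is an illustrative assumption:

package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimev1 "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	img := runtimev1.NewImageServiceClient(conn)
	spec := &runtimev1.ImageSpec{Image: "docker.io/kicbase/echo-server:1.0"}

	// "Checking image status": a nil Image in the response means not found.
	st, err := img.ImageStatus(context.Background(),
		&runtimev1.ImageStatusRequest{Image: spec})
	if err != nil {
		panic(err)
	}
	if st.Image == nil {
		// "Pulling image": fetch it before the runtime can create containers.
		if _, err := img.PullImage(context.Background(),
			&runtimev1.PullImageRequest{Image: spec}); err != nil {
			panic(err)
		}
	}
	fmt.Println("image present")
}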
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	464923ded6380       docker.io/upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605                             About a minute ago   Running             registry-creds                           0                   e7078dbbd8d27       registry-creds-764b6fb674-bmdlm             kube-system
	1bb10742c8175       docker.io/library/nginx@sha256:b03ccb7431a2e3172f5cbae96d82bd792935f33ecb88fbf2940559e475745c4e                                              2 minutes ago        Running             nginx                                    0                   ef303f142b2df       nginx                                       default
	87d9cc6838ad3       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          2 minutes ago        Running             busybox                                  0                   896ff7f479b31       busybox                                     default
	0a17a4745cc1a       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          2 minutes ago        Running             csi-snapshotter                          0                   3ff1c4a47f48e       csi-hostpathplugin-jlszq                    kube-system
	a30f678907200       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          2 minutes ago        Running             csi-provisioner                          0                   3ff1c4a47f48e       csi-hostpathplugin-jlszq                    kube-system
	db7343377b388       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            3 minutes ago        Running             liveness-probe                           0                   3ff1c4a47f48e       csi-hostpathplugin-jlszq                    kube-system
	71e53e748e01f       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           3 minutes ago        Running             hostpath                                 0                   3ff1c4a47f48e       csi-hostpathplugin-jlszq                    kube-system
	35b17f5ee8fcc       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 3 minutes ago        Running             gcp-auth                                 0                   b5010a03a9f28       gcp-auth-78565c9fb4-kxlcv                   gcp-auth
	56024f3c5df31       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                3 minutes ago        Running             node-driver-registrar                    0                   3ff1c4a47f48e       csi-hostpathplugin-jlszq                    kube-system
	3f9265ee73822       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb                            3 minutes ago        Running             gadget                                   0                   d36736f931f9f       gadget-vwv62                                gadget
	3d43e3819d86b       registry.k8s.io/ingress-nginx/controller@sha256:7b4073fc95e078d863c0b0b08deb72e01d2cf629e2156822bcd394fc2bcd8e83                             3 minutes ago        Running             controller                               0                   bfc3ba45a4d88       ingress-nginx-controller-675c5ddd98-kvnzw   ingress-nginx
	ef768854ff282       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              3 minutes ago        Running             registry-proxy                           0                   584cfb3fe1579       registry-proxy-62t66                        kube-system
	76e187a284766       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   3 minutes ago        Running             csi-external-health-monitor-controller   0                   3ff1c4a47f48e       csi-hostpathplugin-jlszq                    kube-system
	0c23d9067a021       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago        Running             volume-snapshot-controller               0                   f4770ec7a0bc0       snapshot-controller-7d9fbc56b8-jx9vc        kube-system
	6feb37f12d4a3       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     3 minutes ago        Running             amd-gpu-device-plugin                    0                   d4345a36e61ed       amd-gpu-device-plugin-6nrwh                 kube-system
	fb6a38bfcaa08       08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2                                                                             3 minutes ago        Exited              patch                                    2                   030a7783d8ba6       ingress-nginx-admission-patch-l7t7k         ingress-nginx
	2dc898f8fa5b3       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             3 minutes ago        Running             csi-attacher                             0                   a4ffd4da5cb72       csi-hostpath-attacher-0                     kube-system
	27f1c94c3f573       nvcr.io/nvidia/k8s-device-plugin@sha256:ad155f1089b64673c75b2f39258f0791cbad6d3011419726ec605196981e1c32                                     3 minutes ago        Running             nvidia-device-plugin-ctr                 0                   0725d21aa9560       nvidia-device-plugin-daemonset-5m5rl        kube-system
	b7494b1ab076b       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago        Running             volume-snapshot-controller               0                   9ee30ec4a8aba       snapshot-controller-7d9fbc56b8-m2794        kube-system
	4462c756941cb       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              3 minutes ago        Running             yakd                                     0                   d53d106603bb2       yakd-dashboard-5ff678cb9-m5mql              yakd-dashboard
	cfcad9faa243a       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:603a4996fc2ece451c708708e2881a855991cda47ddca5a4458b69a04f48d7f2                   3 minutes ago        Exited              create                                   0                   68107bfbb0309       ingress-nginx-admission-create-j8h7h        ingress-nginx
	2f642c7cbe909       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              3 minutes ago        Running             csi-resizer                              0                   2616723d54001       csi-hostpath-resizer-0                      kube-system
	9fe3aa823f5d8       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             3 minutes ago        Running             local-path-provisioner                   0                   430826c98a473       local-path-provisioner-648f6765c9-qkqkp     local-path-storage
	ca7a93241189c       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        3 minutes ago        Running             metrics-server                           0                   5193d690f4c91       metrics-server-85b7d694d7-6mqmx             kube-system
	2095fff763068       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           3 minutes ago        Running             registry                                 0                   e0005252edc64       registry-6b586f9694-bvh6h                   kube-system
	8b8b3dcbd1000       gcr.io/cloud-spanner-emulator/emulator@sha256:66030f526b1bc41f0d2027b496fd8fa53f620bf9d5a18baa07990e67f1a20237                               3 minutes ago        Running             cloud-spanner-emulator                   0                   6e33a7c436845       cloud-spanner-emulator-86bd5cbb97-rt6dx     default
	eede6880efbc9       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               3 minutes ago        Running             minikube-ingress-dns                     0                   b4a896ee94dee       kube-ingress-dns-minikube                   kube-system
	ba1ddd191addf       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             3 minutes ago        Running             storage-provisioner                      0                   a2c466ed164d0       storage-provisioner                         kube-system
	abbe027d3dc3b       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             3 minutes ago        Running             coredns                                  0                   fb92d690a7a5a       coredns-66bc5c9577-lz5j4                    kube-system
	12e10d7e88fff       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             4 minutes ago        Running             kube-proxy                               0                   93c2925316a16       kube-proxy-77bv8                            kube-system
	6d05a2b6be1fb       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             4 minutes ago        Running             kindnet-cni                              0                   233fe93a3f9c0       kindnet-4rz7d                               kube-system
	c02f8fc8e6a73       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             4 minutes ago        Running             kube-apiserver                           0                   a6e9125590762       kube-apiserver-addons-589824                kube-system
	95468d8526bae       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             4 minutes ago        Running             kube-scheduler                           0                   f29cecb516462       kube-scheduler-addons-589824                kube-system
	81cd0a11514ab       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             4 minutes ago        Running             kube-controller-manager                  0                   cac3717bc1745       kube-controller-manager-addons-589824       kube-system
	f25d173d59b5b       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             4 minutes ago        Running             etcd                                     0                   969beb9641f34       etcd-addons-589824                          kube-system
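	
	Each row of this table maps onto a field of the CRI ListContainers response (id, image, state, name, attempt, pod sandbox id). A hedged sketch of fetching the same data over the same socket as in the previous sketch:
	
	package main
	
	import (
		"context"
		"fmt"
		"log"
	
		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)
	
	func main() {
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()
	
		rt := runtimeapi.NewRuntimeServiceClient(conn)
		resp, err := rt.ListContainers(context.Background(), &runtimeapi.ListContainersRequest{})
		if err != nil {
			log.Fatal(err)
		}
		for _, c := range resp.Containers {
			// Id, image, state, name, attempt, and pod sandbox id are the
			// CONTAINER/IMAGE/STATE/NAME/ATTEMPT/POD ID columns above.
			fmt.Printf("%.13s  %s  %s  %s  %d  %.13s\n",
				c.Id, c.Image.Image, c.State, c.Metadata.Name, c.Metadata.Attempt, c.PodSandboxId)
		}
	}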
	
	
	==> coredns [abbe027d3dc3b813b338a56e8cabab82e03eb9b112b7b850abb79fefe6d06ad7] <==
	[INFO] 10.244.0.22:60552 - 59972 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.005993489s
	[INFO] 10.244.0.22:54315 - 43099 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.005123142s
	[INFO] 10.244.0.22:35223 - 22024 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.006175629s
	[INFO] 10.244.0.22:38943 - 65517 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004099658s
	[INFO] 10.244.0.22:32982 - 64160 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.00563234s
	[INFO] 10.244.0.22:48886 - 21891 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.002490605s
	[INFO] 10.244.0.22:53042 - 55437 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002620466s
	[INFO] 10.244.0.25:59008 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00023311s
	[INFO] 10.244.0.25:49058 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00017685s
	[INFO] 10.244.0.31:47203 - 32968 "AAAA IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000242754s
	[INFO] 10.244.0.31:32954 - 57304 "A IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000331248s
	[INFO] 10.244.0.31:43440 - 56081 "AAAA IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000159713s
	[INFO] 10.244.0.31:37898 - 45825 "A IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000220035s
	[INFO] 10.244.0.31:59514 - 2509 "AAAA IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000119121s
	[INFO] 10.244.0.31:43473 - 46609 "A IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.00009442s
	[INFO] 10.244.0.31:51503 - 34966 "AAAA IN accounts.google.com.local. udp 43 false 512" NXDOMAIN qr,rd,ra 43 0.003678932s
	[INFO] 10.244.0.31:52864 - 45993 "A IN accounts.google.com.local. udp 43 false 512" NXDOMAIN qr,rd,ra 43 0.00452455s
	[INFO] 10.244.0.31:44058 - 41672 "A IN accounts.google.com.us-central1-a.c.k8s-minikube.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 185 0.00477428s
	[INFO] 10.244.0.31:33367 - 17905 "AAAA IN accounts.google.com.us-central1-a.c.k8s-minikube.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 185 0.006193996s
	[INFO] 10.244.0.31:60889 - 33820 "AAAA IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.004444735s
	[INFO] 10.244.0.31:55840 - 33435 "A IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.004943577s
	[INFO] 10.244.0.31:42696 - 10594 "A IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.005089454s
	[INFO] 10.244.0.31:48106 - 6832 "AAAA IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.005890466s
	[INFO] 10.244.0.31:57109 - 40818 "A IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 72 0.001878036s
	[INFO] 10.244.0.31:58656 - 44325 "AAAA IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 84 0.001897609s
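	
	The NXDOMAIN-then-NOERROR chains above are resolv.conf search-list expansion at work: pod resolv.conf defaults to ndots:5, so a short name like accounts.google.com is tried against every search domain before being queried as-is. A small self-contained illustration; the search list here is inferred from the queries logged above.
	
	package main
	
	import (
		"fmt"
		"strings"
	)
	
	// expand mimics resolv.conf search-list expansion: a name with fewer dots
	// than ndots is tried against every search domain before being queried
	// absolutely, producing the NXDOMAIN chain above before the NOERROR answer.
	func expand(name string, search []string, ndots int) []string {
		var tries []string
		if strings.Count(name, ".") < ndots {
			for _, domain := range search {
				tries = append(tries, name+"."+domain)
			}
		}
		return append(tries, name) // finally, the name queried absolutely
	}
	
	func main() {
		// Cluster suffixes plus the GCE host's own search domains, in the
		// order the log shows them being tried.
		search := []string{
			"kube-system.svc.cluster.local", "svc.cluster.local", "cluster.local",
			"local",
			"us-central1-a.c.k8s-minikube.internal", "c.k8s-minikube.internal",
			"google.internal",
		}
		for _, q := range expand("accounts.google.com", search, 5) {
			fmt.Println(q)
		}
	}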
	
	
	==> describe nodes <==
	Name:               addons-589824
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-589824
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a09c31caecd3126cc8337ca95e52380a56f5a0f
	                    minikube.k8s.io/name=addons-589824
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_27T18_57_07_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-589824
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-589824"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Oct 2025 18:57:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-589824
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Oct 2025 19:01:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Oct 2025 19:00:40 +0000   Mon, 27 Oct 2025 18:57:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Oct 2025 19:00:40 +0000   Mon, 27 Oct 2025 18:57:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Oct 2025 19:00:40 +0000   Mon, 27 Oct 2025 18:57:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Oct 2025 19:00:40 +0000   Mon, 27 Oct 2025 18:57:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-589824
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                015f8eae-8878-4d4d-8c23-64412d4db92c
	  Boot ID:                    811bd29c-e64e-4acc-9427-bab1f7caed93
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (29 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m36s
	  default                     cloud-spanner-emulator-86bd5cbb97-rt6dx      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m23s
	  default                     hello-world-app-5d498dc89-kz4mk              0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m27s
	  gadget                      gadget-vwv62                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m23s
	  gcp-auth                    gcp-auth-78565c9fb4-kxlcv                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-kvnzw    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         4m23s
	  kube-system                 amd-gpu-device-plugin-6nrwh                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m43s
	  kube-system                 coredns-66bc5c9577-lz5j4                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m24s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m22s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m22s
	  kube-system                 csi-hostpathplugin-jlszq                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m43s
	  kube-system                 etcd-addons-589824                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m30s
	  kube-system                 kindnet-4rz7d                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m25s
	  kube-system                 kube-apiserver-addons-589824                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m30s
	  kube-system                 kube-controller-manager-addons-589824        200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m30s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m23s
	  kube-system                 kube-proxy-77bv8                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m25s
	  kube-system                 kube-scheduler-addons-589824                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m30s
	  kube-system                 metrics-server-85b7d694d7-6mqmx              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         4m23s
	  kube-system                 nvidia-device-plugin-daemonset-5m5rl         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m43s
	  kube-system                 registry-6b586f9694-bvh6h                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m23s
	  kube-system                 registry-creds-764b6fb674-bmdlm              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m23s
	  kube-system                 registry-proxy-62t66                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m43s
	  kube-system                 snapshot-controller-7d9fbc56b8-jx9vc         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m22s
	  kube-system                 snapshot-controller-7d9fbc56b8-m2794         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m22s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m23s
	  local-path-storage          local-path-provisioner-648f6765c9-qkqkp      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m23s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-m5mql               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     4m23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m24s                  kube-proxy       
	  Normal  Starting                 4m35s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m35s (x8 over 4m35s)  kubelet          Node addons-589824 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m35s (x8 over 4m35s)  kubelet          Node addons-589824 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m35s (x8 over 4m35s)  kubelet          Node addons-589824 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m30s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m30s                  kubelet          Node addons-589824 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m30s                  kubelet          Node addons-589824 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m30s                  kubelet          Node addons-589824 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m26s                  node-controller  Node addons-589824 event: Registered Node addons-589824 in Controller
	  Normal  NodeReady                3m43s                  kubelet          Node addons-589824 status is now: NodeReady
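	
	The percentages in the Allocated resources block are integer-truncated ratios of summed requests to allocatable capacity, which the numbers above confirm:
	
	package main
	
	import "fmt"
	
	func main() {
		// CPU: requests are summed in millicores and divided by allocatable
		// capacity (8 CPUs = 8000m); the ratio is truncated to an integer.
		fmt.Printf("cpu 1050m (%d%%)\n", 1050*100/8000) // 13%
	
		// Memory: 638Mi of requests against 32863352Ki allocatable. The true
		// ratio is ~1.99%, and truncation is what makes it print as 1%.
		requestsKi := 638 * 1024
		fmt.Printf("memory 638Mi (%d%%)\n", requestsKi*100/32863352) // 1%
	}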
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5a 23 52 43 9a ba 08 06
	[  +0.000398] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 12 50 95 0e df 53 08 06
	[Oct27 18:59] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +1.017295] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +1.023893] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +1.023882] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +1.023891] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +1.023851] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +2.047849] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +4.031592] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +8.319143] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[ +16.382183] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[Oct27 19:00] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	
	
	==> etcd [f25d173d59b5ba978f27e915fc30ff6e02ab5bba952c2af598b464a59edc1987] <==
	{"level":"warn","ts":"2025-10-27T18:57:03.377364Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T18:57:03.384670Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T18:57:03.391302Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T18:57:03.402791Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T18:57:03.409914Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T18:57:03.416593Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T18:57:03.463681Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T18:57:14.775968Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T18:57:40.870645Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T18:57:40.877417Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T18:57:40.898551Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T18:57:40.905511Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43962","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T18:58:06.992926Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"120.043517ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/secrets/ingress-nginx/ingress-nginx-admission\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-27T18:58:06.993020Z","caller":"traceutil/trace.go:172","msg":"trace[1554996425] range","detail":"{range_begin:/registry/secrets/ingress-nginx/ingress-nginx-admission; range_end:; response_count:0; response_revision:1028; }","duration":"120.155798ms","start":"2025-10-27T18:58:06.872848Z","end":"2025-10-27T18:58:06.993004Z","steps":["trace[1554996425] 'agreement among raft nodes before linearized reading'  (duration: 94.036969ms)","trace[1554996425] 'range keys from in-memory index tree'  (duration: 25.967966ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-27T18:58:06.993031Z","caller":"traceutil/trace.go:172","msg":"trace[1964619258] transaction","detail":"{read_only:false; response_revision:1029; number_of_response:1; }","duration":"126.830572ms","start":"2025-10-27T18:58:06.866182Z","end":"2025-10-27T18:58:06.993013Z","steps":["trace[1964619258] 'process raft request'  (duration: 100.764414ms)","trace[1964619258] 'compare'  (duration: 25.933927ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-27T18:58:07.015036Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"123.438632ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2025-10-27T18:58:07.015078Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"123.491309ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-27T18:58:07.015103Z","caller":"traceutil/trace.go:172","msg":"trace[1533825363] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1029; }","duration":"123.521397ms","start":"2025-10-27T18:58:06.891568Z","end":"2025-10-27T18:58:07.015089Z","steps":["trace[1533825363] 'agreement among raft nodes before linearized reading'  (duration: 123.386123ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T18:58:07.015146Z","caller":"traceutil/trace.go:172","msg":"trace[281230080] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1029; }","duration":"123.554779ms","start":"2025-10-27T18:58:06.891567Z","end":"2025-10-27T18:58:07.015122Z","steps":["trace[281230080] 'agreement among raft nodes before linearized reading'  (duration: 123.449214ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T18:58:07.015199Z","caller":"traceutil/trace.go:172","msg":"trace[935681801] transaction","detail":"{read_only:false; response_revision:1030; number_of_response:1; }","duration":"125.682407ms","start":"2025-10-27T18:58:06.889499Z","end":"2025-10-27T18:58:07.015182Z","steps":["trace[935681801] 'process raft request'  (duration: 125.498666ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T18:58:07.015239Z","caller":"traceutil/trace.go:172","msg":"trace[502819226] transaction","detail":"{read_only:false; response_revision:1031; number_of_response:1; }","duration":"122.092514ms","start":"2025-10-27T18:58:06.893092Z","end":"2025-10-27T18:58:07.015184Z","steps":["trace[502819226] 'process raft request'  (duration: 122.01453ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T18:58:07.142675Z","caller":"traceutil/trace.go:172","msg":"trace[2086540696] transaction","detail":"{read_only:false; response_revision:1033; number_of_response:1; }","duration":"123.049421ms","start":"2025-10-27T18:58:07.019602Z","end":"2025-10-27T18:58:07.142651Z","steps":["trace[2086540696] 'process raft request'  (duration: 100.272325ms)","trace[2086540696] 'compare'  (duration: 22.576854ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-27T18:58:07.142722Z","caller":"traceutil/trace.go:172","msg":"trace[26412430] transaction","detail":"{read_only:false; response_revision:1034; number_of_response:1; }","duration":"123.097313ms","start":"2025-10-27T18:58:07.019610Z","end":"2025-10-27T18:58:07.142707Z","steps":["trace[26412430] 'process raft request'  (duration: 122.985372ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T18:58:07.142703Z","caller":"traceutil/trace.go:172","msg":"trace[866901254] transaction","detail":"{read_only:false; response_revision:1035; number_of_response:1; }","duration":"121.909722ms","start":"2025-10-27T18:58:07.020779Z","end":"2025-10-27T18:58:07.142689Z","steps":["trace[866901254] 'process raft request'  (duration: 121.852603ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T18:58:07.347093Z","caller":"traceutil/trace.go:172","msg":"trace[443095031] transaction","detail":"{read_only:false; response_revision:1038; number_of_response:1; }","duration":"118.643352ms","start":"2025-10-27T18:58:07.228411Z","end":"2025-10-27T18:58:07.347054Z","steps":["trace[443095031] 'process raft request'  (duration: 72.528344ms)","trace[443095031] 'compare'  (duration: 45.946959ms)"],"step_count":2}
	
	
	==> gcp-auth [35b17f5ee8fcc21b55af114698fc6422309350b8004ce05ffbfa88cc4ddc1d83] <==
	2025/10/27 18:58:35 GCP Auth Webhook started!
	2025/10/27 18:59:00 Ready to marshal response ...
	2025/10/27 18:59:00 Ready to write response ...
	2025/10/27 18:59:00 Ready to marshal response ...
	2025/10/27 18:59:00 Ready to write response ...
	2025/10/27 18:59:01 Ready to marshal response ...
	2025/10/27 18:59:01 Ready to write response ...
	2025/10/27 18:59:09 Ready to marshal response ...
	2025/10/27 18:59:09 Ready to write response ...
	2025/10/27 18:59:19 Ready to marshal response ...
	2025/10/27 18:59:19 Ready to write response ...
	2025/10/27 18:59:20 Ready to marshal response ...
	2025/10/27 18:59:20 Ready to write response ...
	2025/10/27 18:59:20 Ready to marshal response ...
	2025/10/27 18:59:20 Ready to write response ...
	2025/10/27 18:59:27 Ready to marshal response ...
	2025/10/27 18:59:27 Ready to write response ...
	2025/10/27 18:59:31 Ready to marshal response ...
	2025/10/27 18:59:31 Ready to write response ...
	2025/10/27 18:59:50 Ready to marshal response ...
	2025/10/27 18:59:50 Ready to write response ...
	2025/10/27 19:01:35 Ready to marshal response ...
	2025/10/27 19:01:35 Ready to write response ...
	
	
	==> kernel <==
	 19:01:37 up  1:44,  0 user,  load average: 0.47, 0.79, 0.61
	Linux addons-589824 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [6d05a2b6be1fb2b8475a215eb50681a592a20257978b9da0091741666c9fa5c6] <==
	I1027 18:59:32.583386       1 main.go:301] handling current node
	I1027 18:59:42.583454       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 18:59:42.583493       1 main.go:301] handling current node
	I1027 18:59:52.583688       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 18:59:52.583724       1 main.go:301] handling current node
	I1027 19:00:02.584471       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 19:00:02.584507       1 main.go:301] handling current node
	I1027 19:00:12.583118       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 19:00:12.583178       1 main.go:301] handling current node
	I1027 19:00:22.584511       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 19:00:22.584575       1 main.go:301] handling current node
	I1027 19:00:32.583778       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 19:00:32.583821       1 main.go:301] handling current node
	I1027 19:00:42.583888       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 19:00:42.583922       1 main.go:301] handling current node
	I1027 19:00:52.583430       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 19:00:52.583478       1 main.go:301] handling current node
	I1027 19:01:02.583963       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 19:01:02.583991       1 main.go:301] handling current node
	I1027 19:01:12.584197       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 19:01:12.584228       1 main.go:301] handling current node
	I1027 19:01:22.583839       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 19:01:22.583894       1 main.go:301] handling current node
	I1027 19:01:32.584424       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 19:01:32.584459       1 main.go:301] handling current node
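	
	The pair of lines repeating every ten seconds is kindnet's periodic node reconcile; on a single-node cluster the only entry is the current node. A toy loop with the same shape (an assumption for illustration, not kindnet's actual code):
	
	package main
	
	import (
		"log"
		"time"
	)
	
	func main() {
		// Reconcile on a fixed cadence; with one node the loop only ever
		// sees itself, hence the identical pair of lines every tick.
		ticker := time.NewTicker(10 * time.Second)
		defer ticker.Stop()
		for range ticker.C {
			log.Println("Handling node with IPs: map[192.168.49.2:{}]")
			log.Println("handling current node")
		}
	}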
	
	
	==> kube-apiserver [c02f8fc8e6a7392b824780b7cf27bac4f0cee905aafadcc2295bf2775ce85316] <==
	W1027 18:57:40.870571       1 logging.go:55] [core] [Channel #270 SubChannel #271]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1027 18:57:40.877372       1 logging.go:55] [core] [Channel #274 SubChannel #275]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1027 18:57:40.898441       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1027 18:57:40.905516       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1027 18:57:53.161858       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.105.137.58:443: connect: connection refused
	E1027 18:57:53.161906       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.105.137.58:443: connect: connection refused" logger="UnhandledError"
	W1027 18:57:53.161906       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.105.137.58:443: connect: connection refused
	E1027 18:57:53.161937       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.105.137.58:443: connect: connection refused" logger="UnhandledError"
	W1027 18:57:53.184361       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.105.137.58:443: connect: connection refused
	E1027 18:57:53.184402       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.105.137.58:443: connect: connection refused" logger="UnhandledError"
	W1027 18:57:53.190689       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.105.137.58:443: connect: connection refused
	E1027 18:57:53.190731       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.105.137.58:443: connect: connection refused" logger="UnhandledError"
	W1027 18:58:07.214814       1 handler_proxy.go:99] no RequestInfo found in the context
	E1027 18:58:07.214920       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1027 18:58:07.214986       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.96.153:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.110.96.153:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.110.96.153:443: connect: connection refused" logger="UnhandledError"
	I1027 18:58:07.225091       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1027 18:59:09.154764       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:43684: use of closed network connection
	E1027 18:59:09.307799       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:43704: use of closed network connection
	I1027 18:59:09.826826       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1027 18:59:10.052543       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.110.101.92"}
	I1027 18:59:41.278802       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1027 19:01:35.296949       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.109.114.211"}
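	
	The "failing open" lines above mean the gcp-auth mutating webhook was unreachable while its pod was still starting, and a failure policy of Ignore lets requests through unmutated instead of blocking them. A self-contained sketch of that decision; the helper names here are hypothetical:
	
	package main
	
	import (
		"errors"
		"fmt"
	)
	
	// failurePolicy mirrors the admission webhook setting: Ignore fails open,
	// Fail fails closed.
	type failurePolicy string
	
	const (
		policyIgnore failurePolicy = "Ignore"
		policyFail   failurePolicy = "Fail"
	)
	
	// callWebhook stands in for the POST to https://gcp-auth.gcp-auth.svc:443/mutate
	// that the log shows failing while the gcp-auth pod was still coming up.
	func callWebhook() error {
		return errors.New("dial tcp 10.105.137.58:443: connect: connection refused")
	}
	
	func admit(policy failurePolicy) bool {
		if err := callWebhook(); err != nil {
			if policy == policyIgnore {
				fmt.Printf("Failed calling webhook, failing open gcp-auth-mutate.k8s.io: %v\n", err)
				return true // admit the request, just without the mutation
			}
			return false // fail closed: reject the request
		}
		return true
	}
	
	func main() {
		fmt.Println(admit(policyIgnore)) // true
	}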
	
	
	==> kube-controller-manager [81cd0a11514aba345e443fd708bb0a4b65a29f336aec8643a57037ceeda8aefe] <==
	I1027 18:57:10.849920       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1027 18:57:10.849700       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1027 18:57:10.849989       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1027 18:57:10.853953       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1027 18:57:10.854073       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1027 18:57:10.854124       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1027 18:57:10.854189       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1027 18:57:10.854202       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1027 18:57:10.854209       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1027 18:57:10.855289       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1027 18:57:10.857581       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1027 18:57:10.859949       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 18:57:10.861328       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-589824" podCIDRs=["10.244.0.0/24"]
	I1027 18:57:10.864429       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1027 18:57:10.871834       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1027 18:57:10.880550       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1027 18:57:13.491484       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1027 18:57:40.864499       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1027 18:57:40.864660       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1027 18:57:40.864710       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1027 18:57:40.888507       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1027 18:57:40.892444       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1027 18:57:40.965959       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 18:57:40.993534       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 18:57:55.806649       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [12e10d7e88fff07d51f12a561be95b0933cdc57cc59e0f478fe8964c53f1806b] <==
	I1027 18:57:12.168054       1 server_linux.go:53] "Using iptables proxy"
	I1027 18:57:12.258929       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1027 18:57:12.362228       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1027 18:57:12.362789       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1027 18:57:12.362910       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1027 18:57:12.526041       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1027 18:57:12.526210       1 server_linux.go:132] "Using iptables Proxier"
	I1027 18:57:12.536106       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1027 18:57:12.536692       1 server.go:527] "Version info" version="v1.34.1"
	I1027 18:57:12.537167       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 18:57:12.539945       1 config.go:200] "Starting service config controller"
	I1027 18:57:12.542650       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1027 18:57:12.542172       1 config.go:106] "Starting endpoint slice config controller"
	I1027 18:57:12.542708       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1027 18:57:12.542205       1 config.go:403] "Starting serviceCIDR config controller"
	I1027 18:57:12.542721       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1027 18:57:12.542399       1 config.go:309] "Starting node config controller"
	I1027 18:57:12.542732       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1027 18:57:12.542738       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1027 18:57:12.642805       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1027 18:57:12.643041       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1027 18:57:12.643252       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [95468d8526baeb9ed07c582a77c3593017052fb17f3ce84741a67f91794b7400] <==
	E1027 18:57:03.876200       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1027 18:57:03.876479       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1027 18:57:03.876527       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1027 18:57:03.876576       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1027 18:57:03.876622       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1027 18:57:03.876671       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1027 18:57:03.876715       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1027 18:57:03.876766       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1027 18:57:03.876810       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1027 18:57:03.876944       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1027 18:57:03.877033       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1027 18:57:03.878529       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1027 18:57:03.878709       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1027 18:57:03.879478       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1027 18:57:04.703618       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1027 18:57:04.814766       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1027 18:57:04.831343       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1027 18:57:04.838049       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1027 18:57:04.839062       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1027 18:57:04.909702       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1027 18:57:04.914847       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1027 18:57:05.048524       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1027 18:57:05.088274       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1027 18:57:05.152497       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	I1027 18:57:08.270065       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 27 18:59:59 addons-589824 kubelet[1300]: I1027 18:59:59.106472    1300 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xf8rp\" (UniqueName: \"kubernetes.io/projected/006cb30b-6c67-4c7e-a263-3060e06b2ad3-kube-api-access-xf8rp\") pod \"006cb30b-6c67-4c7e-a263-3060e06b2ad3\" (UID: \"006cb30b-6c67-4c7e-a263-3060e06b2ad3\") "
	Oct 27 18:59:59 addons-589824 kubelet[1300]: I1027 18:59:59.106626    1300 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"task-pv-storage\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^1d2234b1-b367-11f0-a8be-c2ace865df9d\") pod \"006cb30b-6c67-4c7e-a263-3060e06b2ad3\" (UID: \"006cb30b-6c67-4c7e-a263-3060e06b2ad3\") "
	Oct 27 18:59:59 addons-589824 kubelet[1300]: I1027 18:59:59.106665    1300 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/006cb30b-6c67-4c7e-a263-3060e06b2ad3-gcp-creds\") pod \"006cb30b-6c67-4c7e-a263-3060e06b2ad3\" (UID: \"006cb30b-6c67-4c7e-a263-3060e06b2ad3\") "
	Oct 27 18:59:59 addons-589824 kubelet[1300]: I1027 18:59:59.106799    1300 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/006cb30b-6c67-4c7e-a263-3060e06b2ad3-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "006cb30b-6c67-4c7e-a263-3060e06b2ad3" (UID: "006cb30b-6c67-4c7e-a263-3060e06b2ad3"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Oct 27 18:59:59 addons-589824 kubelet[1300]: I1027 18:59:59.108910    1300 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/006cb30b-6c67-4c7e-a263-3060e06b2ad3-kube-api-access-xf8rp" (OuterVolumeSpecName: "kube-api-access-xf8rp") pod "006cb30b-6c67-4c7e-a263-3060e06b2ad3" (UID: "006cb30b-6c67-4c7e-a263-3060e06b2ad3"). InnerVolumeSpecName "kube-api-access-xf8rp". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Oct 27 18:59:59 addons-589824 kubelet[1300]: I1027 18:59:59.109619    1300 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/hostpath.csi.k8s.io^1d2234b1-b367-11f0-a8be-c2ace865df9d" (OuterVolumeSpecName: "task-pv-storage") pod "006cb30b-6c67-4c7e-a263-3060e06b2ad3" (UID: "006cb30b-6c67-4c7e-a263-3060e06b2ad3"). InnerVolumeSpecName "pvc-578e2198-7296-4859-a4ab-ff948cffb10a". PluginName "kubernetes.io/csi", VolumeGIDValue ""
	Oct 27 18:59:59 addons-589824 kubelet[1300]: I1027 18:59:59.207954    1300 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/006cb30b-6c67-4c7e-a263-3060e06b2ad3-gcp-creds\") on node \"addons-589824\" DevicePath \"\""
	Oct 27 18:59:59 addons-589824 kubelet[1300]: I1027 18:59:59.207990    1300 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xf8rp\" (UniqueName: \"kubernetes.io/projected/006cb30b-6c67-4c7e-a263-3060e06b2ad3-kube-api-access-xf8rp\") on node \"addons-589824\" DevicePath \"\""
	Oct 27 18:59:59 addons-589824 kubelet[1300]: I1027 18:59:59.208024    1300 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-578e2198-7296-4859-a4ab-ff948cffb10a\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^1d2234b1-b367-11f0-a8be-c2ace865df9d\") on node \"addons-589824\" "
	Oct 27 18:59:59 addons-589824 kubelet[1300]: I1027 18:59:59.212859    1300 operation_generator.go:895] UnmountDevice succeeded for volume "pvc-578e2198-7296-4859-a4ab-ff948cffb10a" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^1d2234b1-b367-11f0-a8be-c2ace865df9d") on node "addons-589824"
	Oct 27 18:59:59 addons-589824 kubelet[1300]: I1027 18:59:59.309066    1300 reconciler_common.go:299] "Volume detached for volume \"pvc-578e2198-7296-4859-a4ab-ff948cffb10a\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^1d2234b1-b367-11f0-a8be-c2ace865df9d\") on node \"addons-589824\" DevicePath \"\""
	Oct 27 18:59:59 addons-589824 kubelet[1300]: I1027 18:59:59.321791    1300 scope.go:117] "RemoveContainer" containerID="5c995cfaa536e708ec458f5217eb2a73c774745b6a4fec83a6419fa7962530ea"
	Oct 27 18:59:59 addons-589824 kubelet[1300]: I1027 18:59:59.331435    1300 scope.go:117] "RemoveContainer" containerID="5c995cfaa536e708ec458f5217eb2a73c774745b6a4fec83a6419fa7962530ea"
	Oct 27 18:59:59 addons-589824 kubelet[1300]: E1027 18:59:59.331884    1300 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c995cfaa536e708ec458f5217eb2a73c774745b6a4fec83a6419fa7962530ea\": container with ID starting with 5c995cfaa536e708ec458f5217eb2a73c774745b6a4fec83a6419fa7962530ea not found: ID does not exist" containerID="5c995cfaa536e708ec458f5217eb2a73c774745b6a4fec83a6419fa7962530ea"
	Oct 27 18:59:59 addons-589824 kubelet[1300]: I1027 18:59:59.331935    1300 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c995cfaa536e708ec458f5217eb2a73c774745b6a4fec83a6419fa7962530ea"} err="failed to get container status \"5c995cfaa536e708ec458f5217eb2a73c774745b6a4fec83a6419fa7962530ea\": rpc error: code = NotFound desc = could not find container \"5c995cfaa536e708ec458f5217eb2a73c774745b6a4fec83a6419fa7962530ea\": container with ID starting with 5c995cfaa536e708ec458f5217eb2a73c774745b6a4fec83a6419fa7962530ea not found: ID does not exist"
	Oct 27 19:00:00 addons-589824 kubelet[1300]: I1027 19:00:00.567451    1300 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="006cb30b-6c67-4c7e-a263-3060e06b2ad3" path="/var/lib/kubelet/pods/006cb30b-6c67-4c7e-a263-3060e06b2ad3/volumes"
	Oct 27 19:00:06 addons-589824 kubelet[1300]: I1027 19:00:06.593512    1300 scope.go:117] "RemoveContainer" containerID="dda819cbed8b5d8da77fef7960cc23c85869ffeb2771c264275736ed6db1045c"
	Oct 27 19:00:06 addons-589824 kubelet[1300]: I1027 19:00:06.602501    1300 scope.go:117] "RemoveContainer" containerID="072d53ba0abeb562cbffc04ca88daa8e7a807337b8488f82c748efe7d79ab323"
	Oct 27 19:00:06 addons-589824 kubelet[1300]: I1027 19:00:06.610779    1300 scope.go:117] "RemoveContainer" containerID="2f694202ba767a270fc55276859f621db36fd73fe58e1bdc44d0ef279d311038"
	Oct 27 19:00:42 addons-589824 kubelet[1300]: I1027 19:00:42.565229    1300 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-6nrwh" secret="" err="secret \"gcp-auth\" not found"
	Oct 27 19:00:46 addons-589824 kubelet[1300]: I1027 19:00:46.566903    1300 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-5m5rl" secret="" err="secret \"gcp-auth\" not found"
	Oct 27 19:01:11 addons-589824 kubelet[1300]: I1027 19:01:11.565421    1300 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-62t66" secret="" err="secret \"gcp-auth\" not found"
	Oct 27 19:01:35 addons-589824 kubelet[1300]: I1027 19:01:35.220500    1300 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-creds-764b6fb674-bmdlm" podStartSLOduration=260.717306485 podStartE2EDuration="4m22.220473221s" podCreationTimestamp="2025-10-27 18:57:13 +0000 UTC" firstStartedPulling="2025-10-27 19:00:08.589020139 +0000 UTC m=+182.111341467" lastFinishedPulling="2025-10-27 19:00:10.092186862 +0000 UTC m=+183.614508203" observedRunningTime="2025-10-27 19:00:10.387728246 +0000 UTC m=+183.910049593" watchObservedRunningTime="2025-10-27 19:01:35.220473221 +0000 UTC m=+268.742794570"
	Oct 27 19:01:35 addons-589824 kubelet[1300]: I1027 19:01:35.319146    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/a473b59d-9656-47e6-b6c9-cecfa591f489-gcp-creds\") pod \"hello-world-app-5d498dc89-kz4mk\" (UID: \"a473b59d-9656-47e6-b6c9-cecfa591f489\") " pod="default/hello-world-app-5d498dc89-kz4mk"
	Oct 27 19:01:35 addons-589824 kubelet[1300]: I1027 19:01:35.319225    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ct6fs\" (UniqueName: \"kubernetes.io/projected/a473b59d-9656-47e6-b6c9-cecfa591f489-kube-api-access-ct6fs\") pod \"hello-world-app-5d498dc89-kz4mk\" (UID: \"a473b59d-9656-47e6-b6c9-cecfa591f489\") " pod="default/hello-world-app-5d498dc89-kz4mk"
	
	
	==> storage-provisioner [ba1ddd191addfbafb743bfd31989a110bd5b0f58f7479075c129e528745e7798] <==
	W1027 19:01:12.529860       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:01:14.533688       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:01:14.537847       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:01:16.540904       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:01:16.545110       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:01:18.548964       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:01:18.553089       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:01:20.556920       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:01:20.562279       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:01:22.565584       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:01:22.570320       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:01:24.573947       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:01:24.577627       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:01:26.581284       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:01:26.586988       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:01:28.590054       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:01:28.594256       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:01:30.597713       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:01:30.602096       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:01:32.604839       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:01:32.608903       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:01:34.612704       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:01:34.617385       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:01:36.621331       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:01:36.625907       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
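Two patterns in the captured logs above are worth separating from the failure under test. The kube-scheduler "Failed to watch ... is forbidden" errors are all timestamped during startup (18:57:03-05) and stop once "Caches are synced" is logged, the usual transient while RBAC for system:kube-scheduler propagates. The storage-provisioner warnings repeat roughly every 2s because it keeps reading a v1 Endpoints object, most likely its leader-election lock; as the warning itself says, v1 Endpoints is deprecated in v1.33+ in favor of discovery.k8s.io/v1 EndpointSlice. A small illustrative check, assuming the test context is still available (these queries are not part of the test run):

    # Compare the deprecated objects with their EndpointSlice replacements
    kubectl --context addons-589824 -n kube-system get endpoints
    kubectl --context addons-589824 -n kube-system get endpointslices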
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-589824 -n addons-589824
helpers_test.go:269: (dbg) Run:  kubectl --context addons-589824 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-j8h7h ingress-nginx-admission-patch-l7t7k
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-589824 describe pod ingress-nginx-admission-create-j8h7h ingress-nginx-admission-patch-l7t7k
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-589824 describe pod ingress-nginx-admission-create-j8h7h ingress-nginx-admission-patch-l7t7k: exit status 1 (71.406925ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-j8h7h" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-l7t7k" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-589824 describe pod ingress-nginx-admission-create-j8h7h ingress-nginx-admission-patch-l7t7k: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-589824 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-589824 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (258.809164ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1027 19:01:38.017210  372305 out.go:360] Setting OutFile to fd 1 ...
	I1027 19:01:38.017494  372305 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:01:38.017508  372305 out.go:374] Setting ErrFile to fd 2...
	I1027 19:01:38.017515  372305 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:01:38.017735  372305 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-352833/.minikube/bin
	I1027 19:01:38.017997  372305 mustload.go:65] Loading cluster: addons-589824
	I1027 19:01:38.018372  372305 config.go:182] Loaded profile config "addons-589824": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:01:38.018392  372305 addons.go:606] checking whether the cluster is paused
	I1027 19:01:38.018486  372305 config.go:182] Loaded profile config "addons-589824": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:01:38.018504  372305 host.go:66] Checking if "addons-589824" exists ...
	I1027 19:01:38.018891  372305 cli_runner.go:164] Run: docker container inspect addons-589824 --format={{.State.Status}}
	I1027 19:01:38.037250  372305 ssh_runner.go:195] Run: systemctl --version
	I1027 19:01:38.037311  372305 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-589824
	I1027 19:01:38.055528  372305 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/addons-589824/id_rsa Username:docker}
	I1027 19:01:38.156542  372305 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 19:01:38.156632  372305 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 19:01:38.189149  372305 cri.go:89] found id: "464923ded63802256e17c8a60292e99ea88f070a83988965c20ffcea1a4c7455"
	I1027 19:01:38.189197  372305 cri.go:89] found id: "0a17a4745cc1a6104ea6432d9fd60dac6e6abe764b5d1330d69426fa0b74a6ab"
	I1027 19:01:38.189201  372305 cri.go:89] found id: "a30f678907200483df6ff7630d767bc8daa14ce81d7f9088b61ad45ee3d0afab"
	I1027 19:01:38.189204  372305 cri.go:89] found id: "db7343377b38897cf4a8cf603f6e486663fecd5587924e1ed818db6d54bdcce6"
	I1027 19:01:38.189208  372305 cri.go:89] found id: "71e53e748e01fc8c91ffa4fb8b7865bea26bcbe65dcba958949295c6f0037da7"
	I1027 19:01:38.189213  372305 cri.go:89] found id: "56024f3c5df317e559a2fc01d91706e2a21e755612591d33569756c8b235a739"
	I1027 19:01:38.189216  372305 cri.go:89] found id: "ef768854ff28223563c69a32d2834fab10262b7e6a6963c625600582d59b9e51"
	I1027 19:01:38.189220  372305 cri.go:89] found id: "76e187a2847661d9eb59daefd89617bc458e7238cd87c5b6b4e6c6f1884d4826"
	I1027 19:01:38.189223  372305 cri.go:89] found id: "0c23d9067a021958f6e78dae17e3e314bb8f01a59a277d6d231a1c91ac243402"
	I1027 19:01:38.189236  372305 cri.go:89] found id: "6feb37f12d4a362a4be9862cfb4d525092b27f5c8806b5fe7f3e6992e40865b1"
	I1027 19:01:38.189239  372305 cri.go:89] found id: "2dc898f8fa5b3f56f21afaa0584bf9b0ee67ad474e08c141d382bf6352ffb103"
	I1027 19:01:38.189242  372305 cri.go:89] found id: "27f1c94c3f5736bca109359ef14c6315dca30f3a92e432a313912785f638d339"
	I1027 19:01:38.189244  372305 cri.go:89] found id: "b7494b1ab076bec5211fe9aa45d869fd06dce709b51652f81a21756c0087c5dc"
	I1027 19:01:38.189247  372305 cri.go:89] found id: "2f642c7cbe9094287b843be457ec991af2d6a4e3a7c89d0cef2628b88a0df390"
	I1027 19:01:38.189250  372305 cri.go:89] found id: "ca7a93241189c56d1808a8b7fb428d8057429bed2f6554b65716f5aeecd49b88"
	I1027 19:01:38.189256  372305 cri.go:89] found id: "2095fff76306861533792ed7f54dec0997d67f3656557a857ff7af3b00429cda"
	I1027 19:01:38.189262  372305 cri.go:89] found id: "eede6880efbc9e505b955efd78f6cc85e44d1edb5f142fe3df44034a4341a14f"
	I1027 19:01:38.189266  372305 cri.go:89] found id: "ba1ddd191addfbafb743bfd31989a110bd5b0f58f7479075c129e528745e7798"
	I1027 19:01:38.189268  372305 cri.go:89] found id: "abbe027d3dc3b813b338a56e8cabab82e03eb9b112b7b850abb79fefe6d06ad7"
	I1027 19:01:38.189270  372305 cri.go:89] found id: "12e10d7e88fff07d51f12a561be95b0933cdc57cc59e0f478fe8964c53f1806b"
	I1027 19:01:38.189273  372305 cri.go:89] found id: "6d05a2b6be1fb2b8475a215eb50681a592a20257978b9da0091741666c9fa5c6"
	I1027 19:01:38.189275  372305 cri.go:89] found id: "c02f8fc8e6a7392b824780b7cf27bac4f0cee905aafadcc2295bf2775ce85316"
	I1027 19:01:38.189278  372305 cri.go:89] found id: "95468d8526baeb9ed07c582a77c3593017052fb17f3ce84741a67f91794b7400"
	I1027 19:01:38.189280  372305 cri.go:89] found id: "81cd0a11514aba345e443fd708bb0a4b65a29f336aec8643a57037ceeda8aefe"
	I1027 19:01:38.189282  372305 cri.go:89] found id: "f25d173d59b5ba978f27e915fc30ff6e02ab5bba952c2af598b464a59edc1987"
	I1027 19:01:38.189285  372305 cri.go:89] found id: ""
	I1027 19:01:38.189344  372305 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 19:01:38.204575  372305 out.go:203] 
	W1027 19:01:38.205814  372305 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T19:01:38Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T19:01:38Z" level=error msg="open /run/runc: no such file or directory"
	
	W1027 19:01:38.205852  372305 out.go:285] * 
	* 
	W1027 19:01:38.210180  372305 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1027 19:01:38.211741  372305 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-amd64 -p addons-589824 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-589824 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-589824 addons disable ingress --alsologtostderr -v=1: exit status 11 (263.860446ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1027 19:01:38.277455  372384 out.go:360] Setting OutFile to fd 1 ...
	I1027 19:01:38.277729  372384 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:01:38.277739  372384 out.go:374] Setting ErrFile to fd 2...
	I1027 19:01:38.277744  372384 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:01:38.277939  372384 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-352833/.minikube/bin
	I1027 19:01:38.278270  372384 mustload.go:65] Loading cluster: addons-589824
	I1027 19:01:38.278658  372384 config.go:182] Loaded profile config "addons-589824": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:01:38.278677  372384 addons.go:606] checking whether the cluster is paused
	I1027 19:01:38.278758  372384 config.go:182] Loaded profile config "addons-589824": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:01:38.278776  372384 host.go:66] Checking if "addons-589824" exists ...
	I1027 19:01:38.279245  372384 cli_runner.go:164] Run: docker container inspect addons-589824 --format={{.State.Status}}
	I1027 19:01:38.298410  372384 ssh_runner.go:195] Run: systemctl --version
	I1027 19:01:38.298476  372384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-589824
	I1027 19:01:38.317716  372384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/addons-589824/id_rsa Username:docker}
	I1027 19:01:38.419332  372384 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 19:01:38.419450  372384 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 19:01:38.451921  372384 cri.go:89] found id: "464923ded63802256e17c8a60292e99ea88f070a83988965c20ffcea1a4c7455"
	I1027 19:01:38.451949  372384 cri.go:89] found id: "0a17a4745cc1a6104ea6432d9fd60dac6e6abe764b5d1330d69426fa0b74a6ab"
	I1027 19:01:38.451954  372384 cri.go:89] found id: "a30f678907200483df6ff7630d767bc8daa14ce81d7f9088b61ad45ee3d0afab"
	I1027 19:01:38.451958  372384 cri.go:89] found id: "db7343377b38897cf4a8cf603f6e486663fecd5587924e1ed818db6d54bdcce6"
	I1027 19:01:38.451961  372384 cri.go:89] found id: "71e53e748e01fc8c91ffa4fb8b7865bea26bcbe65dcba958949295c6f0037da7"
	I1027 19:01:38.451964  372384 cri.go:89] found id: "56024f3c5df317e559a2fc01d91706e2a21e755612591d33569756c8b235a739"
	I1027 19:01:38.451967  372384 cri.go:89] found id: "ef768854ff28223563c69a32d2834fab10262b7e6a6963c625600582d59b9e51"
	I1027 19:01:38.451969  372384 cri.go:89] found id: "76e187a2847661d9eb59daefd89617bc458e7238cd87c5b6b4e6c6f1884d4826"
	I1027 19:01:38.451972  372384 cri.go:89] found id: "0c23d9067a021958f6e78dae17e3e314bb8f01a59a277d6d231a1c91ac243402"
	I1027 19:01:38.451981  372384 cri.go:89] found id: "6feb37f12d4a362a4be9862cfb4d525092b27f5c8806b5fe7f3e6992e40865b1"
	I1027 19:01:38.451984  372384 cri.go:89] found id: "2dc898f8fa5b3f56f21afaa0584bf9b0ee67ad474e08c141d382bf6352ffb103"
	I1027 19:01:38.451998  372384 cri.go:89] found id: "27f1c94c3f5736bca109359ef14c6315dca30f3a92e432a313912785f638d339"
	I1027 19:01:38.452000  372384 cri.go:89] found id: "b7494b1ab076bec5211fe9aa45d869fd06dce709b51652f81a21756c0087c5dc"
	I1027 19:01:38.452003  372384 cri.go:89] found id: "2f642c7cbe9094287b843be457ec991af2d6a4e3a7c89d0cef2628b88a0df390"
	I1027 19:01:38.452005  372384 cri.go:89] found id: "ca7a93241189c56d1808a8b7fb428d8057429bed2f6554b65716f5aeecd49b88"
	I1027 19:01:38.452016  372384 cri.go:89] found id: "2095fff76306861533792ed7f54dec0997d67f3656557a857ff7af3b00429cda"
	I1027 19:01:38.452024  372384 cri.go:89] found id: "eede6880efbc9e505b955efd78f6cc85e44d1edb5f142fe3df44034a4341a14f"
	I1027 19:01:38.452028  372384 cri.go:89] found id: "ba1ddd191addfbafb743bfd31989a110bd5b0f58f7479075c129e528745e7798"
	I1027 19:01:38.452031  372384 cri.go:89] found id: "abbe027d3dc3b813b338a56e8cabab82e03eb9b112b7b850abb79fefe6d06ad7"
	I1027 19:01:38.452033  372384 cri.go:89] found id: "12e10d7e88fff07d51f12a561be95b0933cdc57cc59e0f478fe8964c53f1806b"
	I1027 19:01:38.452036  372384 cri.go:89] found id: "6d05a2b6be1fb2b8475a215eb50681a592a20257978b9da0091741666c9fa5c6"
	I1027 19:01:38.452039  372384 cri.go:89] found id: "c02f8fc8e6a7392b824780b7cf27bac4f0cee905aafadcc2295bf2775ce85316"
	I1027 19:01:38.452048  372384 cri.go:89] found id: "95468d8526baeb9ed07c582a77c3593017052fb17f3ce84741a67f91794b7400"
	I1027 19:01:38.452054  372384 cri.go:89] found id: "81cd0a11514aba345e443fd708bb0a4b65a29f336aec8643a57037ceeda8aefe"
	I1027 19:01:38.452066  372384 cri.go:89] found id: "f25d173d59b5ba978f27e915fc30ff6e02ab5bba952c2af598b464a59edc1987"
	I1027 19:01:38.452071  372384 cri.go:89] found id: ""
	I1027 19:01:38.452116  372384 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 19:01:38.468036  372384 out.go:203] 
	W1027 19:01:38.469353  372384 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T19:01:38Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T19:01:38Z" level=error msg="open /run/runc: no such file or directory"
	
	W1027 19:01:38.469381  372384 out.go:285] * 
	* 
	W1027 19:01:38.473790  372384 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1027 19:01:38.475506  372384 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-amd64 -p addons-589824 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (148.91s)
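Note on the shared failure mode: every "addons disable" invocation in this report exits with status 11 (MK_ADDON_DISABLE_PAUSED) before any addon teardown happens. The stderr traces show the sequence: minikube lists the kube-system containers via crictl, then verifies whether the cluster is paused with "sudo runc list -f json", and on this crio node that command fails because /run/runc is absent, so the pre-check itself aborts the disable. A minimal sketch for reproducing the underlying error by hand, assuming the addons-589824 profile is still running (illustrative commands, not part of the test run):

    # Run the same state check minikube performs, inside the node
    minikube ssh -p addons-589824 -- sudo runc list -f json
    # expected: level=error msg="open /run/runc: no such file or directory"

    # The containers themselves are still visible through the CRI, as in the trace
    minikube ssh -p addons-589824 -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system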

TestAddons/parallel/InspektorGadget (6.26s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-vwv62" [caece82b-8cd8-4061-adfa-dcf7d5660841] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003667749s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-589824 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-589824 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (259.380277ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1027 18:59:29.548516  369285 out.go:360] Setting OutFile to fd 1 ...
	I1027 18:59:29.548759  369285 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 18:59:29.548767  369285 out.go:374] Setting ErrFile to fd 2...
	I1027 18:59:29.548772  369285 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 18:59:29.549176  369285 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-352833/.minikube/bin
	I1027 18:59:29.549705  369285 mustload.go:65] Loading cluster: addons-589824
	I1027 18:59:29.550780  369285 config.go:182] Loaded profile config "addons-589824": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 18:59:29.550805  369285 addons.go:606] checking whether the cluster is paused
	I1027 18:59:29.550935  369285 config.go:182] Loaded profile config "addons-589824": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 18:59:29.550957  369285 host.go:66] Checking if "addons-589824" exists ...
	I1027 18:59:29.551346  369285 cli_runner.go:164] Run: docker container inspect addons-589824 --format={{.State.Status}}
	I1027 18:59:29.570340  369285 ssh_runner.go:195] Run: systemctl --version
	I1027 18:59:29.570396  369285 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-589824
	I1027 18:59:29.590264  369285 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/addons-589824/id_rsa Username:docker}
	I1027 18:59:29.691255  369285 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 18:59:29.691378  369285 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 18:59:29.721589  369285 cri.go:89] found id: "0a17a4745cc1a6104ea6432d9fd60dac6e6abe764b5d1330d69426fa0b74a6ab"
	I1027 18:59:29.721609  369285 cri.go:89] found id: "a30f678907200483df6ff7630d767bc8daa14ce81d7f9088b61ad45ee3d0afab"
	I1027 18:59:29.721613  369285 cri.go:89] found id: "db7343377b38897cf4a8cf603f6e486663fecd5587924e1ed818db6d54bdcce6"
	I1027 18:59:29.721616  369285 cri.go:89] found id: "71e53e748e01fc8c91ffa4fb8b7865bea26bcbe65dcba958949295c6f0037da7"
	I1027 18:59:29.721618  369285 cri.go:89] found id: "56024f3c5df317e559a2fc01d91706e2a21e755612591d33569756c8b235a739"
	I1027 18:59:29.721653  369285 cri.go:89] found id: "ef768854ff28223563c69a32d2834fab10262b7e6a6963c625600582d59b9e51"
	I1027 18:59:29.721660  369285 cri.go:89] found id: "76e187a2847661d9eb59daefd89617bc458e7238cd87c5b6b4e6c6f1884d4826"
	I1027 18:59:29.721663  369285 cri.go:89] found id: "0c23d9067a021958f6e78dae17e3e314bb8f01a59a277d6d231a1c91ac243402"
	I1027 18:59:29.721665  369285 cri.go:89] found id: "6feb37f12d4a362a4be9862cfb4d525092b27f5c8806b5fe7f3e6992e40865b1"
	I1027 18:59:29.721672  369285 cri.go:89] found id: "2dc898f8fa5b3f56f21afaa0584bf9b0ee67ad474e08c141d382bf6352ffb103"
	I1027 18:59:29.721677  369285 cri.go:89] found id: "27f1c94c3f5736bca109359ef14c6315dca30f3a92e432a313912785f638d339"
	I1027 18:59:29.721680  369285 cri.go:89] found id: "b7494b1ab076bec5211fe9aa45d869fd06dce709b51652f81a21756c0087c5dc"
	I1027 18:59:29.721683  369285 cri.go:89] found id: "2f642c7cbe9094287b843be457ec991af2d6a4e3a7c89d0cef2628b88a0df390"
	I1027 18:59:29.721685  369285 cri.go:89] found id: "ca7a93241189c56d1808a8b7fb428d8057429bed2f6554b65716f5aeecd49b88"
	I1027 18:59:29.721688  369285 cri.go:89] found id: "2095fff76306861533792ed7f54dec0997d67f3656557a857ff7af3b00429cda"
	I1027 18:59:29.721700  369285 cri.go:89] found id: "eede6880efbc9e505b955efd78f6cc85e44d1edb5f142fe3df44034a4341a14f"
	I1027 18:59:29.721709  369285 cri.go:89] found id: "ba1ddd191addfbafb743bfd31989a110bd5b0f58f7479075c129e528745e7798"
	I1027 18:59:29.721715  369285 cri.go:89] found id: "abbe027d3dc3b813b338a56e8cabab82e03eb9b112b7b850abb79fefe6d06ad7"
	I1027 18:59:29.721719  369285 cri.go:89] found id: "12e10d7e88fff07d51f12a561be95b0933cdc57cc59e0f478fe8964c53f1806b"
	I1027 18:59:29.721723  369285 cri.go:89] found id: "6d05a2b6be1fb2b8475a215eb50681a592a20257978b9da0091741666c9fa5c6"
	I1027 18:59:29.721730  369285 cri.go:89] found id: "c02f8fc8e6a7392b824780b7cf27bac4f0cee905aafadcc2295bf2775ce85316"
	I1027 18:59:29.721734  369285 cri.go:89] found id: "95468d8526baeb9ed07c582a77c3593017052fb17f3ce84741a67f91794b7400"
	I1027 18:59:29.721738  369285 cri.go:89] found id: "81cd0a11514aba345e443fd708bb0a4b65a29f336aec8643a57037ceeda8aefe"
	I1027 18:59:29.721745  369285 cri.go:89] found id: "f25d173d59b5ba978f27e915fc30ff6e02ab5bba952c2af598b464a59edc1987"
	I1027 18:59:29.721749  369285 cri.go:89] found id: ""
	I1027 18:59:29.721793  369285 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 18:59:29.737210  369285 out.go:203] 
	W1027 18:59:29.738660  369285 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T18:59:29Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T18:59:29Z" level=error msg="open /run/runc: no such file or directory"
	
	W1027 18:59:29.738682  369285 out.go:285] * 
	* 
	W1027 18:59:29.742679  369285 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1027 18:59:29.744112  369285 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-amd64 -p addons-589824 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (6.26s)

TestAddons/parallel/MetricsServer (5.32s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 3.768135ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-6mqmx" [1a22ca13-4aaa-4ac6-b5ad-df2b9ba87dfc] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003767587s
addons_test.go:463: (dbg) Run:  kubectl --context addons-589824 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-589824 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-589824 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (251.239704ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1027 18:59:14.698323  367959 out.go:360] Setting OutFile to fd 1 ...
	I1027 18:59:14.698433  367959 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 18:59:14.698438  367959 out.go:374] Setting ErrFile to fd 2...
	I1027 18:59:14.698442  367959 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 18:59:14.698628  367959 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-352833/.minikube/bin
	I1027 18:59:14.698892  367959 mustload.go:65] Loading cluster: addons-589824
	I1027 18:59:14.699244  367959 config.go:182] Loaded profile config "addons-589824": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 18:59:14.699261  367959 addons.go:606] checking whether the cluster is paused
	I1027 18:59:14.699343  367959 config.go:182] Loaded profile config "addons-589824": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 18:59:14.699362  367959 host.go:66] Checking if "addons-589824" exists ...
	I1027 18:59:14.699732  367959 cli_runner.go:164] Run: docker container inspect addons-589824 --format={{.State.Status}}
	I1027 18:59:14.718062  367959 ssh_runner.go:195] Run: systemctl --version
	I1027 18:59:14.718116  367959 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-589824
	I1027 18:59:14.736616  367959 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/addons-589824/id_rsa Username:docker}
	I1027 18:59:14.836168  367959 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 18:59:14.836262  367959 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 18:59:14.865694  367959 cri.go:89] found id: "0a17a4745cc1a6104ea6432d9fd60dac6e6abe764b5d1330d69426fa0b74a6ab"
	I1027 18:59:14.865715  367959 cri.go:89] found id: "a30f678907200483df6ff7630d767bc8daa14ce81d7f9088b61ad45ee3d0afab"
	I1027 18:59:14.865719  367959 cri.go:89] found id: "db7343377b38897cf4a8cf603f6e486663fecd5587924e1ed818db6d54bdcce6"
	I1027 18:59:14.865722  367959 cri.go:89] found id: "71e53e748e01fc8c91ffa4fb8b7865bea26bcbe65dcba958949295c6f0037da7"
	I1027 18:59:14.865724  367959 cri.go:89] found id: "56024f3c5df317e559a2fc01d91706e2a21e755612591d33569756c8b235a739"
	I1027 18:59:14.865727  367959 cri.go:89] found id: "ef768854ff28223563c69a32d2834fab10262b7e6a6963c625600582d59b9e51"
	I1027 18:59:14.865730  367959 cri.go:89] found id: "76e187a2847661d9eb59daefd89617bc458e7238cd87c5b6b4e6c6f1884d4826"
	I1027 18:59:14.865732  367959 cri.go:89] found id: "0c23d9067a021958f6e78dae17e3e314bb8f01a59a277d6d231a1c91ac243402"
	I1027 18:59:14.865735  367959 cri.go:89] found id: "6feb37f12d4a362a4be9862cfb4d525092b27f5c8806b5fe7f3e6992e40865b1"
	I1027 18:59:14.865740  367959 cri.go:89] found id: "2dc898f8fa5b3f56f21afaa0584bf9b0ee67ad474e08c141d382bf6352ffb103"
	I1027 18:59:14.865747  367959 cri.go:89] found id: "27f1c94c3f5736bca109359ef14c6315dca30f3a92e432a313912785f638d339"
	I1027 18:59:14.865754  367959 cri.go:89] found id: "b7494b1ab076bec5211fe9aa45d869fd06dce709b51652f81a21756c0087c5dc"
	I1027 18:59:14.865757  367959 cri.go:89] found id: "2f642c7cbe9094287b843be457ec991af2d6a4e3a7c89d0cef2628b88a0df390"
	I1027 18:59:14.865759  367959 cri.go:89] found id: "ca7a93241189c56d1808a8b7fb428d8057429bed2f6554b65716f5aeecd49b88"
	I1027 18:59:14.865762  367959 cri.go:89] found id: "2095fff76306861533792ed7f54dec0997d67f3656557a857ff7af3b00429cda"
	I1027 18:59:14.865766  367959 cri.go:89] found id: "eede6880efbc9e505b955efd78f6cc85e44d1edb5f142fe3df44034a4341a14f"
	I1027 18:59:14.865769  367959 cri.go:89] found id: "ba1ddd191addfbafb743bfd31989a110bd5b0f58f7479075c129e528745e7798"
	I1027 18:59:14.865773  367959 cri.go:89] found id: "abbe027d3dc3b813b338a56e8cabab82e03eb9b112b7b850abb79fefe6d06ad7"
	I1027 18:59:14.865775  367959 cri.go:89] found id: "12e10d7e88fff07d51f12a561be95b0933cdc57cc59e0f478fe8964c53f1806b"
	I1027 18:59:14.865777  367959 cri.go:89] found id: "6d05a2b6be1fb2b8475a215eb50681a592a20257978b9da0091741666c9fa5c6"
	I1027 18:59:14.865782  367959 cri.go:89] found id: "c02f8fc8e6a7392b824780b7cf27bac4f0cee905aafadcc2295bf2775ce85316"
	I1027 18:59:14.865784  367959 cri.go:89] found id: "95468d8526baeb9ed07c582a77c3593017052fb17f3ce84741a67f91794b7400"
	I1027 18:59:14.865787  367959 cri.go:89] found id: "81cd0a11514aba345e443fd708bb0a4b65a29f336aec8643a57037ceeda8aefe"
	I1027 18:59:14.865790  367959 cri.go:89] found id: "f25d173d59b5ba978f27e915fc30ff6e02ab5bba952c2af598b464a59edc1987"
	I1027 18:59:14.865792  367959 cri.go:89] found id: ""
	I1027 18:59:14.865832  367959 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 18:59:14.880631  367959 out.go:203] 
	W1027 18:59:14.882037  367959 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T18:59:14Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T18:59:14Z" level=error msg="open /run/runc: no such file or directory"
	
	W1027 18:59:14.882056  367959 out.go:285] * 
	* 
	W1027 18:59:14.886040  367959 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1027 18:59:14.887417  367959 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-amd64 -p addons-589824 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.32s)

TestAddons/parallel/CSI (32.29s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1027 18:59:27.903939  356415 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1027 18:59:27.907551  356415 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1027 18:59:27.907583  356415 kapi.go:107] duration metric: took 3.649016ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 3.662622ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-589824 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-589824 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-589824 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-589824 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-589824 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-589824 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [c08aa01e-95a9-4bda-bb06-921b4564eaa1] Pending
helpers_test.go:352: "task-pv-pod" [c08aa01e-95a9-4bda-bb06-921b4564eaa1] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [c08aa01e-95a9-4bda-bb06-921b4564eaa1] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 10.004426768s
addons_test.go:572: (dbg) Run:  kubectl --context addons-589824 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-589824 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-589824 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-589824 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-589824 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-589824 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-589824 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-589824 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-589824 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-589824 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-589824 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-589824 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-589824 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-589824 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-589824 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [006cb30b-6c67-4c7e-a263-3060e06b2ad3] Pending
helpers_test.go:352: "task-pv-pod-restore" [006cb30b-6c67-4c7e-a263-3060e06b2ad3] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [006cb30b-6c67-4c7e-a263-3060e06b2ad3] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004778534s
addons_test.go:614: (dbg) Run:  kubectl --context addons-589824 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-589824 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-589824 delete volumesnapshot new-snapshot-demo
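The sequence above is the standard CSI snapshot/restore pattern: a VolumeSnapshot is cut from the bound hpvc claim, then hpvc-restore names that snapshot as its dataSource. A minimal sketch of what the applied manifests typically look like; the actual testdata/csi-hostpath-driver files are not reproduced in this report, so the class names and size below are assumptions:

    # snapshot.yaml (sketch; snapshot class name assumed)
    apiVersion: snapshot.storage.k8s.io/v1
    kind: VolumeSnapshot
    metadata:
      name: new-snapshot-demo
    spec:
      volumeSnapshotClassName: csi-hostpath-snapclass
      source:
        persistentVolumeClaimName: hpvc

    # pvc-restore.yaml (sketch; storage class and size assumed)
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: hpvc-restore
    spec:
      storageClassName: csi-hostpath-sc
      dataSource:
        name: new-snapshot-demo
        kind: VolumeSnapshot
        apiGroup: snapshot.storage.k8s.io
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi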
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-589824 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-589824 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (255.660769ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1027 18:59:59.729690  370155 out.go:360] Setting OutFile to fd 1 ...
	I1027 18:59:59.729994  370155 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 18:59:59.730005  370155 out.go:374] Setting ErrFile to fd 2...
	I1027 18:59:59.730009  370155 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 18:59:59.730263  370155 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-352833/.minikube/bin
	I1027 18:59:59.730609  370155 mustload.go:65] Loading cluster: addons-589824
	I1027 18:59:59.731026  370155 config.go:182] Loaded profile config "addons-589824": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 18:59:59.731047  370155 addons.go:606] checking whether the cluster is paused
	I1027 18:59:59.731168  370155 config.go:182] Loaded profile config "addons-589824": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 18:59:59.731188  370155 host.go:66] Checking if "addons-589824" exists ...
	I1027 18:59:59.731671  370155 cli_runner.go:164] Run: docker container inspect addons-589824 --format={{.State.Status}}
	I1027 18:59:59.750631  370155 ssh_runner.go:195] Run: systemctl --version
	I1027 18:59:59.750695  370155 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-589824
	I1027 18:59:59.768658  370155 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/addons-589824/id_rsa Username:docker}
	I1027 18:59:59.869119  370155 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 18:59:59.869244  370155 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 18:59:59.899656  370155 cri.go:89] found id: "0a17a4745cc1a6104ea6432d9fd60dac6e6abe764b5d1330d69426fa0b74a6ab"
	I1027 18:59:59.899680  370155 cri.go:89] found id: "a30f678907200483df6ff7630d767bc8daa14ce81d7f9088b61ad45ee3d0afab"
	I1027 18:59:59.899683  370155 cri.go:89] found id: "db7343377b38897cf4a8cf603f6e486663fecd5587924e1ed818db6d54bdcce6"
	I1027 18:59:59.899687  370155 cri.go:89] found id: "71e53e748e01fc8c91ffa4fb8b7865bea26bcbe65dcba958949295c6f0037da7"
	I1027 18:59:59.899691  370155 cri.go:89] found id: "56024f3c5df317e559a2fc01d91706e2a21e755612591d33569756c8b235a739"
	I1027 18:59:59.899695  370155 cri.go:89] found id: "ef768854ff28223563c69a32d2834fab10262b7e6a6963c625600582d59b9e51"
	I1027 18:59:59.899698  370155 cri.go:89] found id: "76e187a2847661d9eb59daefd89617bc458e7238cd87c5b6b4e6c6f1884d4826"
	I1027 18:59:59.899701  370155 cri.go:89] found id: "0c23d9067a021958f6e78dae17e3e314bb8f01a59a277d6d231a1c91ac243402"
	I1027 18:59:59.899704  370155 cri.go:89] found id: "6feb37f12d4a362a4be9862cfb4d525092b27f5c8806b5fe7f3e6992e40865b1"
	I1027 18:59:59.899709  370155 cri.go:89] found id: "2dc898f8fa5b3f56f21afaa0584bf9b0ee67ad474e08c141d382bf6352ffb103"
	I1027 18:59:59.899712  370155 cri.go:89] found id: "27f1c94c3f5736bca109359ef14c6315dca30f3a92e432a313912785f638d339"
	I1027 18:59:59.899714  370155 cri.go:89] found id: "b7494b1ab076bec5211fe9aa45d869fd06dce709b51652f81a21756c0087c5dc"
	I1027 18:59:59.899716  370155 cri.go:89] found id: "2f642c7cbe9094287b843be457ec991af2d6a4e3a7c89d0cef2628b88a0df390"
	I1027 18:59:59.899728  370155 cri.go:89] found id: "ca7a93241189c56d1808a8b7fb428d8057429bed2f6554b65716f5aeecd49b88"
	I1027 18:59:59.899734  370155 cri.go:89] found id: "2095fff76306861533792ed7f54dec0997d67f3656557a857ff7af3b00429cda"
	I1027 18:59:59.899738  370155 cri.go:89] found id: "eede6880efbc9e505b955efd78f6cc85e44d1edb5f142fe3df44034a4341a14f"
	I1027 18:59:59.899744  370155 cri.go:89] found id: "ba1ddd191addfbafb743bfd31989a110bd5b0f58f7479075c129e528745e7798"
	I1027 18:59:59.899748  370155 cri.go:89] found id: "abbe027d3dc3b813b338a56e8cabab82e03eb9b112b7b850abb79fefe6d06ad7"
	I1027 18:59:59.899751  370155 cri.go:89] found id: "12e10d7e88fff07d51f12a561be95b0933cdc57cc59e0f478fe8964c53f1806b"
	I1027 18:59:59.899753  370155 cri.go:89] found id: "6d05a2b6be1fb2b8475a215eb50681a592a20257978b9da0091741666c9fa5c6"
	I1027 18:59:59.899755  370155 cri.go:89] found id: "c02f8fc8e6a7392b824780b7cf27bac4f0cee905aafadcc2295bf2775ce85316"
	I1027 18:59:59.899758  370155 cri.go:89] found id: "95468d8526baeb9ed07c582a77c3593017052fb17f3ce84741a67f91794b7400"
	I1027 18:59:59.899761  370155 cri.go:89] found id: "81cd0a11514aba345e443fd708bb0a4b65a29f336aec8643a57037ceeda8aefe"
	I1027 18:59:59.899763  370155 cri.go:89] found id: "f25d173d59b5ba978f27e915fc30ff6e02ab5bba952c2af598b464a59edc1987"
	I1027 18:59:59.899766  370155 cri.go:89] found id: ""
	I1027 18:59:59.899807  370155 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 18:59:59.915314  370155 out.go:203] 
	W1027 18:59:59.916891  370155 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T18:59:59Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T18:59:59Z" level=error msg="open /run/runc: no such file or directory"
	
	W1027 18:59:59.916919  370155 out.go:285] * 
	* 
	W1027 18:59:59.920932  370155 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1027 18:59:59.922364  370155 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-amd64 -p addons-589824 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
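The exit status 11 here (and on every addons enable/disable call in this run) comes from the pre-flight paused check, not from the addon itself: the stderr above shows minikube enumerating the kube-system containers with crictl over SSH and then running sudo runc list -f json on the node, which fails because /run/runc does not exist on this crio node. A sketch of that probe, written as if run directly on the node (the real harness goes through the ssh_runner); package and function names are hypothetical:

	// clusterPaused mirrors the failing probe above: enumerate kube-system
	// containers with crictl, then ask runc for container states. On this
	// node `runc list` exits 1 ("open /run/runc: no such file or directory"),
	// which surfaces as MK_ADDON_DISABLE_PAUSED / exit status 11.
	package pausecheck
	
	import (
		"fmt"
		"os/exec"
	)
	
	func clusterPaused() (bool, error) {
		// Step 1: container IDs in the kube-system namespace (the cri.go lines above).
		ids, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			return false, fmt.Errorf("crictl ps: %w", err)
		}
		// Step 2: the call that fails in this run; runc reads its state from
		// /run/runc by default, which this crio setup does not populate.
		states, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
		if err != nil {
			return false, fmt.Errorf("list paused: runc: %w", err)
		}
		_ = ids
		_ = states // the real code parses the JSON and checks for status "paused"
		return false, nil
	}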
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-589824 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-589824 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (260.609336ms)

-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1027 18:59:59.987167  370218 out.go:360] Setting OutFile to fd 1 ...
	I1027 18:59:59.987420  370218 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 18:59:59.987428  370218 out.go:374] Setting ErrFile to fd 2...
	I1027 18:59:59.987433  370218 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 18:59:59.987666  370218 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-352833/.minikube/bin
	I1027 18:59:59.987934  370218 mustload.go:65] Loading cluster: addons-589824
	I1027 18:59:59.988297  370218 config.go:182] Loaded profile config "addons-589824": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 18:59:59.988314  370218 addons.go:606] checking whether the cluster is paused
	I1027 18:59:59.988395  370218 config.go:182] Loaded profile config "addons-589824": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 18:59:59.988412  370218 host.go:66] Checking if "addons-589824" exists ...
	I1027 18:59:59.988796  370218 cli_runner.go:164] Run: docker container inspect addons-589824 --format={{.State.Status}}
	I1027 19:00:00.007912  370218 ssh_runner.go:195] Run: systemctl --version
	I1027 19:00:00.007977  370218 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-589824
	I1027 19:00:00.027085  370218 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/addons-589824/id_rsa Username:docker}
	I1027 19:00:00.128194  370218 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 19:00:00.128299  370218 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 19:00:00.159343  370218 cri.go:89] found id: "0a17a4745cc1a6104ea6432d9fd60dac6e6abe764b5d1330d69426fa0b74a6ab"
	I1027 19:00:00.159366  370218 cri.go:89] found id: "a30f678907200483df6ff7630d767bc8daa14ce81d7f9088b61ad45ee3d0afab"
	I1027 19:00:00.159370  370218 cri.go:89] found id: "db7343377b38897cf4a8cf603f6e486663fecd5587924e1ed818db6d54bdcce6"
	I1027 19:00:00.159373  370218 cri.go:89] found id: "71e53e748e01fc8c91ffa4fb8b7865bea26bcbe65dcba958949295c6f0037da7"
	I1027 19:00:00.159376  370218 cri.go:89] found id: "56024f3c5df317e559a2fc01d91706e2a21e755612591d33569756c8b235a739"
	I1027 19:00:00.159379  370218 cri.go:89] found id: "ef768854ff28223563c69a32d2834fab10262b7e6a6963c625600582d59b9e51"
	I1027 19:00:00.159382  370218 cri.go:89] found id: "76e187a2847661d9eb59daefd89617bc458e7238cd87c5b6b4e6c6f1884d4826"
	I1027 19:00:00.159384  370218 cri.go:89] found id: "0c23d9067a021958f6e78dae17e3e314bb8f01a59a277d6d231a1c91ac243402"
	I1027 19:00:00.159387  370218 cri.go:89] found id: "6feb37f12d4a362a4be9862cfb4d525092b27f5c8806b5fe7f3e6992e40865b1"
	I1027 19:00:00.159393  370218 cri.go:89] found id: "2dc898f8fa5b3f56f21afaa0584bf9b0ee67ad474e08c141d382bf6352ffb103"
	I1027 19:00:00.159395  370218 cri.go:89] found id: "27f1c94c3f5736bca109359ef14c6315dca30f3a92e432a313912785f638d339"
	I1027 19:00:00.159398  370218 cri.go:89] found id: "b7494b1ab076bec5211fe9aa45d869fd06dce709b51652f81a21756c0087c5dc"
	I1027 19:00:00.159400  370218 cri.go:89] found id: "2f642c7cbe9094287b843be457ec991af2d6a4e3a7c89d0cef2628b88a0df390"
	I1027 19:00:00.159403  370218 cri.go:89] found id: "ca7a93241189c56d1808a8b7fb428d8057429bed2f6554b65716f5aeecd49b88"
	I1027 19:00:00.159407  370218 cri.go:89] found id: "2095fff76306861533792ed7f54dec0997d67f3656557a857ff7af3b00429cda"
	I1027 19:00:00.159416  370218 cri.go:89] found id: "eede6880efbc9e505b955efd78f6cc85e44d1edb5f142fe3df44034a4341a14f"
	I1027 19:00:00.159420  370218 cri.go:89] found id: "ba1ddd191addfbafb743bfd31989a110bd5b0f58f7479075c129e528745e7798"
	I1027 19:00:00.159426  370218 cri.go:89] found id: "abbe027d3dc3b813b338a56e8cabab82e03eb9b112b7b850abb79fefe6d06ad7"
	I1027 19:00:00.159430  370218 cri.go:89] found id: "12e10d7e88fff07d51f12a561be95b0933cdc57cc59e0f478fe8964c53f1806b"
	I1027 19:00:00.159434  370218 cri.go:89] found id: "6d05a2b6be1fb2b8475a215eb50681a592a20257978b9da0091741666c9fa5c6"
	I1027 19:00:00.159438  370218 cri.go:89] found id: "c02f8fc8e6a7392b824780b7cf27bac4f0cee905aafadcc2295bf2775ce85316"
	I1027 19:00:00.159442  370218 cri.go:89] found id: "95468d8526baeb9ed07c582a77c3593017052fb17f3ce84741a67f91794b7400"
	I1027 19:00:00.159447  370218 cri.go:89] found id: "81cd0a11514aba345e443fd708bb0a4b65a29f336aec8643a57037ceeda8aefe"
	I1027 19:00:00.159453  370218 cri.go:89] found id: "f25d173d59b5ba978f27e915fc30ff6e02ab5bba952c2af598b464a59edc1987"
	I1027 19:00:00.159456  370218 cri.go:89] found id: ""
	I1027 19:00:00.159500  370218 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 19:00:00.175430  370218 out.go:203] 
	W1027 19:00:00.176893  370218 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T19:00:00Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T19:00:00Z" level=error msg="open /run/runc: no such file or directory"
	
	W1027 19:00:00.176915  370218 out.go:285] * 
	* 
	W1027 19:00:00.180990  370218 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1027 19:00:00.182721  370218 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-amd64 -p addons-589824 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (32.29s)

TestAddons/parallel/Headlamp (2.96s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-589824 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable headlamp -p addons-589824 --alsologtostderr -v=1: exit status 11 (278.78981ms)

-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1027 18:59:09.640501  366354 out.go:360] Setting OutFile to fd 1 ...
	I1027 18:59:09.640844  366354 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 18:59:09.640857  366354 out.go:374] Setting ErrFile to fd 2...
	I1027 18:59:09.640864  366354 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 18:59:09.641194  366354 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-352833/.minikube/bin
	I1027 18:59:09.641571  366354 mustload.go:65] Loading cluster: addons-589824
	I1027 18:59:09.641943  366354 config.go:182] Loaded profile config "addons-589824": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 18:59:09.641961  366354 addons.go:606] checking whether the cluster is paused
	I1027 18:59:09.642040  366354 config.go:182] Loaded profile config "addons-589824": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 18:59:09.642052  366354 host.go:66] Checking if "addons-589824" exists ...
	I1027 18:59:09.642527  366354 cli_runner.go:164] Run: docker container inspect addons-589824 --format={{.State.Status}}
	I1027 18:59:09.662241  366354 ssh_runner.go:195] Run: systemctl --version
	I1027 18:59:09.662317  366354 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-589824
	I1027 18:59:09.681751  366354 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/addons-589824/id_rsa Username:docker}
	I1027 18:59:09.783503  366354 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 18:59:09.783580  366354 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 18:59:09.819910  366354 cri.go:89] found id: "0a17a4745cc1a6104ea6432d9fd60dac6e6abe764b5d1330d69426fa0b74a6ab"
	I1027 18:59:09.819938  366354 cri.go:89] found id: "a30f678907200483df6ff7630d767bc8daa14ce81d7f9088b61ad45ee3d0afab"
	I1027 18:59:09.819945  366354 cri.go:89] found id: "db7343377b38897cf4a8cf603f6e486663fecd5587924e1ed818db6d54bdcce6"
	I1027 18:59:09.819951  366354 cri.go:89] found id: "71e53e748e01fc8c91ffa4fb8b7865bea26bcbe65dcba958949295c6f0037da7"
	I1027 18:59:09.819955  366354 cri.go:89] found id: "56024f3c5df317e559a2fc01d91706e2a21e755612591d33569756c8b235a739"
	I1027 18:59:09.819962  366354 cri.go:89] found id: "ef768854ff28223563c69a32d2834fab10262b7e6a6963c625600582d59b9e51"
	I1027 18:59:09.819967  366354 cri.go:89] found id: "76e187a2847661d9eb59daefd89617bc458e7238cd87c5b6b4e6c6f1884d4826"
	I1027 18:59:09.819971  366354 cri.go:89] found id: "0c23d9067a021958f6e78dae17e3e314bb8f01a59a277d6d231a1c91ac243402"
	I1027 18:59:09.819975  366354 cri.go:89] found id: "6feb37f12d4a362a4be9862cfb4d525092b27f5c8806b5fe7f3e6992e40865b1"
	I1027 18:59:09.819990  366354 cri.go:89] found id: "2dc898f8fa5b3f56f21afaa0584bf9b0ee67ad474e08c141d382bf6352ffb103"
	I1027 18:59:09.819998  366354 cri.go:89] found id: "27f1c94c3f5736bca109359ef14c6315dca30f3a92e432a313912785f638d339"
	I1027 18:59:09.820002  366354 cri.go:89] found id: "b7494b1ab076bec5211fe9aa45d869fd06dce709b51652f81a21756c0087c5dc"
	I1027 18:59:09.820010  366354 cri.go:89] found id: "2f642c7cbe9094287b843be457ec991af2d6a4e3a7c89d0cef2628b88a0df390"
	I1027 18:59:09.820015  366354 cri.go:89] found id: "ca7a93241189c56d1808a8b7fb428d8057429bed2f6554b65716f5aeecd49b88"
	I1027 18:59:09.820022  366354 cri.go:89] found id: "2095fff76306861533792ed7f54dec0997d67f3656557a857ff7af3b00429cda"
	I1027 18:59:09.820028  366354 cri.go:89] found id: "eede6880efbc9e505b955efd78f6cc85e44d1edb5f142fe3df44034a4341a14f"
	I1027 18:59:09.820035  366354 cri.go:89] found id: "ba1ddd191addfbafb743bfd31989a110bd5b0f58f7479075c129e528745e7798"
	I1027 18:59:09.820042  366354 cri.go:89] found id: "abbe027d3dc3b813b338a56e8cabab82e03eb9b112b7b850abb79fefe6d06ad7"
	I1027 18:59:09.820046  366354 cri.go:89] found id: "12e10d7e88fff07d51f12a561be95b0933cdc57cc59e0f478fe8964c53f1806b"
	I1027 18:59:09.820050  366354 cri.go:89] found id: "6d05a2b6be1fb2b8475a215eb50681a592a20257978b9da0091741666c9fa5c6"
	I1027 18:59:09.820054  366354 cri.go:89] found id: "c02f8fc8e6a7392b824780b7cf27bac4f0cee905aafadcc2295bf2775ce85316"
	I1027 18:59:09.820058  366354 cri.go:89] found id: "95468d8526baeb9ed07c582a77c3593017052fb17f3ce84741a67f91794b7400"
	I1027 18:59:09.820062  366354 cri.go:89] found id: "81cd0a11514aba345e443fd708bb0a4b65a29f336aec8643a57037ceeda8aefe"
	I1027 18:59:09.820066  366354 cri.go:89] found id: "f25d173d59b5ba978f27e915fc30ff6e02ab5bba952c2af598b464a59edc1987"
	I1027 18:59:09.820078  366354 cri.go:89] found id: ""
	I1027 18:59:09.820130  366354 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 18:59:09.838024  366354 out.go:203] 
	W1027 18:59:09.839420  366354 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T18:59:09Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T18:59:09Z" level=error msg="open /run/runc: no such file or directory"
	
	W1027 18:59:09.839446  366354 out.go:285] * 
	* 
	W1027 18:59:09.844150  366354 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1027 18:59:09.845864  366354 out.go:203] 

** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-amd64 addons enable headlamp -p addons-589824 --alsologtostderr -v=1": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-589824
helpers_test.go:243: (dbg) docker inspect addons-589824:

-- stdout --
	[
	    {
	        "Id": "5e8c54cb73f3e55728ce78fff23ac7684832dac9f004ce7ccac5dd5b0c7d3b97",
	        "Created": "2025-10-27T18:56:51.416282482Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 358388,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-27T18:56:51.459857133Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/5e8c54cb73f3e55728ce78fff23ac7684832dac9f004ce7ccac5dd5b0c7d3b97/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5e8c54cb73f3e55728ce78fff23ac7684832dac9f004ce7ccac5dd5b0c7d3b97/hostname",
	        "HostsPath": "/var/lib/docker/containers/5e8c54cb73f3e55728ce78fff23ac7684832dac9f004ce7ccac5dd5b0c7d3b97/hosts",
	        "LogPath": "/var/lib/docker/containers/5e8c54cb73f3e55728ce78fff23ac7684832dac9f004ce7ccac5dd5b0c7d3b97/5e8c54cb73f3e55728ce78fff23ac7684832dac9f004ce7ccac5dd5b0c7d3b97-json.log",
	        "Name": "/addons-589824",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-589824:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-589824",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5e8c54cb73f3e55728ce78fff23ac7684832dac9f004ce7ccac5dd5b0c7d3b97",
	                "LowerDir": "/var/lib/docker/overlay2/7a1c62e1076931169f4e0035676ea65cefb8158f580ae1df1de805bd9d2f5b0e-init/diff:/var/lib/docker/overlay2/71b61ec94610a35f2d924dec358052d4c154c36b3fe219802f60246ca2dc7f45/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7a1c62e1076931169f4e0035676ea65cefb8158f580ae1df1de805bd9d2f5b0e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7a1c62e1076931169f4e0035676ea65cefb8158f580ae1df1de805bd9d2f5b0e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7a1c62e1076931169f4e0035676ea65cefb8158f580ae1df1de805bd9d2f5b0e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-589824",
	                "Source": "/var/lib/docker/volumes/addons-589824/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-589824",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-589824",
	                "name.minikube.sigs.k8s.io": "addons-589824",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1734ac03b580fc3c16a76cfde6d1b73cbf9f1cc3cf72fde094a751e347b7a8f2",
	            "SandboxKey": "/var/run/docker/netns/1734ac03b580",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33144"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33143"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-589824": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "da:dd:e6:c9:41:47",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c1d8cd130a9fb2cf4b671833f0a9d4c3a761289bf1eb7fb6eccc22d089789656",
	                    "EndpointID": "a98e1ca98456234c857bf29aa3881b3e59fdeea16a1f3a385e5d07683786423f",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-589824",
	                        "5e8c54cb73f3"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
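The docker inspect dump above is also how the harness finds the node's SSH endpoint: the cli_runner lines earlier in this test apply a Go template to NetworkSettings.Ports to pull the host port bound to 22/tcp (33140 here, matching the sshutil line). A small sketch of that lookup, with hypothetical package and function names:

	// sshHostPort extracts the host port mapped to the container's 22/tcp,
	// using the same --format template the cli_runner lines above show.
	package sshport
	
	import (
		"os/exec"
		"strings"
	)
	
	func sshHostPort(container string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", "-f",
			`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
			container).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil // "33140" for addons-589824
	}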
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-589824 -n addons-589824
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-589824 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-589824 logs -n 25: (1.285910394s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-515117 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-515117   │ jenkins │ v1.37.0 │ 27 Oct 25 18:56 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 27 Oct 25 18:56 UTC │ 27 Oct 25 18:56 UTC │
	│ delete  │ -p download-only-515117                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-515117   │ jenkins │ v1.37.0 │ 27 Oct 25 18:56 UTC │ 27 Oct 25 18:56 UTC │
	│ start   │ -o=json --download-only -p download-only-339078 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-339078   │ jenkins │ v1.37.0 │ 27 Oct 25 18:56 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 27 Oct 25 18:56 UTC │ 27 Oct 25 18:56 UTC │
	│ delete  │ -p download-only-339078                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-339078   │ jenkins │ v1.37.0 │ 27 Oct 25 18:56 UTC │ 27 Oct 25 18:56 UTC │
	│ delete  │ -p download-only-515117                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-515117   │ jenkins │ v1.37.0 │ 27 Oct 25 18:56 UTC │ 27 Oct 25 18:56 UTC │
	│ delete  │ -p download-only-339078                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-339078   │ jenkins │ v1.37.0 │ 27 Oct 25 18:56 UTC │ 27 Oct 25 18:56 UTC │
	│ start   │ --download-only -p download-docker-738250 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-738250 │ jenkins │ v1.37.0 │ 27 Oct 25 18:56 UTC │                     │
	│ delete  │ -p download-docker-738250                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-738250 │ jenkins │ v1.37.0 │ 27 Oct 25 18:56 UTC │ 27 Oct 25 18:56 UTC │
	│ start   │ --download-only -p binary-mirror-394940 --alsologtostderr --binary-mirror http://127.0.0.1:39569 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-394940   │ jenkins │ v1.37.0 │ 27 Oct 25 18:56 UTC │                     │
	│ delete  │ -p binary-mirror-394940                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-394940   │ jenkins │ v1.37.0 │ 27 Oct 25 18:56 UTC │ 27 Oct 25 18:56 UTC │
	│ addons  │ enable dashboard -p addons-589824                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-589824          │ jenkins │ v1.37.0 │ 27 Oct 25 18:56 UTC │                     │
	│ addons  │ disable dashboard -p addons-589824                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-589824          │ jenkins │ v1.37.0 │ 27 Oct 25 18:56 UTC │                     │
	│ start   │ -p addons-589824 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-589824          │ jenkins │ v1.37.0 │ 27 Oct 25 18:56 UTC │ 27 Oct 25 18:59 UTC │
	│ addons  │ addons-589824 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-589824          │ jenkins │ v1.37.0 │ 27 Oct 25 18:59 UTC │                     │
	│ addons  │ addons-589824 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-589824          │ jenkins │ v1.37.0 │ 27 Oct 25 18:59 UTC │                     │
	│ addons  │ enable headlamp -p addons-589824 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-589824          │ jenkins │ v1.37.0 │ 27 Oct 25 18:59 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 18:56:27
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 18:56:27.976251  357750 out.go:360] Setting OutFile to fd 1 ...
	I1027 18:56:27.976510  357750 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 18:56:27.976519  357750 out.go:374] Setting ErrFile to fd 2...
	I1027 18:56:27.976523  357750 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 18:56:27.976745  357750 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-352833/.minikube/bin
	I1027 18:56:27.977380  357750 out.go:368] Setting JSON to false
	I1027 18:56:27.978365  357750 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5937,"bootTime":1761585451,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1027 18:56:27.978492  357750 start.go:141] virtualization: kvm guest
	I1027 18:56:27.980773  357750 out.go:179] * [addons-589824] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1027 18:56:27.982595  357750 notify.go:220] Checking for updates...
	I1027 18:56:27.982657  357750 out.go:179]   - MINIKUBE_LOCATION=21801
	I1027 18:56:27.984498  357750 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 18:56:27.986301  357750 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21801-352833/kubeconfig
	I1027 18:56:27.988002  357750 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21801-352833/.minikube
	I1027 18:56:27.989590  357750 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1027 18:56:27.991298  357750 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 18:56:27.992936  357750 driver.go:421] Setting default libvirt URI to qemu:///system
	I1027 18:56:28.019056  357750 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1027 18:56:28.019217  357750 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 18:56:28.081197  357750 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-10-27 18:56:28.069443711 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 18:56:28.081316  357750 docker.go:318] overlay module found
	I1027 18:56:28.083328  357750 out.go:179] * Using the docker driver based on user configuration
	I1027 18:56:28.084803  357750 start.go:305] selected driver: docker
	I1027 18:56:28.084825  357750 start.go:925] validating driver "docker" against <nil>
	I1027 18:56:28.084840  357750 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 18:56:28.085479  357750 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 18:56:28.142806  357750 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-10-27 18:56:28.131806595 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 18:56:28.143012  357750 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1027 18:56:28.143307  357750 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 18:56:28.145181  357750 out.go:179] * Using Docker driver with root privileges
	I1027 18:56:28.146426  357750 cni.go:84] Creating CNI manager for ""
	I1027 18:56:28.146526  357750 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 18:56:28.146543  357750 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1027 18:56:28.146629  357750 start.go:349] cluster config:
	{Name:addons-589824 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-589824 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 18:56:28.148061  357750 out.go:179] * Starting "addons-589824" primary control-plane node in "addons-589824" cluster
	I1027 18:56:28.149190  357750 cache.go:123] Beginning downloading kic base image for docker with crio
	I1027 18:56:28.150597  357750 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1027 18:56:28.151677  357750 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 18:56:28.151752  357750 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21801-352833/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1027 18:56:28.151768  357750 cache.go:58] Caching tarball of preloaded images
	I1027 18:56:28.151807  357750 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1027 18:56:28.151888  357750 preload.go:233] Found /home/jenkins/minikube-integration/21801-352833/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1027 18:56:28.151901  357750 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1027 18:56:28.152336  357750 profile.go:143] Saving config to /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/addons-589824/config.json ...
	I1027 18:56:28.152375  357750 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/addons-589824/config.json: {Name:mk83a19f7e07d3485c6fbc0c6bc6309f2d56d02c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
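The generated cluster config is also persisted as JSON at the config.json path shown above. A quick sanity check of what was written — assuming the Go struct fields marshal under the same names they print with in the dump:

    jq '.Driver, .KubernetesConfig.KubernetesVersion, .Nodes' \
      /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/addons-589824/config.json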
	I1027 18:56:28.170858  357750 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1027 18:56:28.171022  357750 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory
	I1027 18:56:28.171043  357750 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory, skipping pull
	I1027 18:56:28.171050  357750 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in cache, skipping pull
	I1027 18:56:28.171057  357750 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 as a tarball
	I1027 18:56:28.171064  357750 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from local cache
	I1027 18:56:40.152523  357750 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from cached tarball
	I1027 18:56:40.152564  357750 cache.go:232] Successfully downloaded all kic artifacts
	I1027 18:56:40.152636  357750 start.go:360] acquireMachinesLock for addons-589824: {Name:mk5322ac57c0e3174bcd3aab61f07a516429abf5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 18:56:40.152775  357750 start.go:364] duration metric: took 108.825µs to acquireMachinesLock for "addons-589824"
	I1027 18:56:40.152811  357750 start.go:93] Provisioning new machine with config: &{Name:addons-589824 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-589824 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 18:56:40.152927  357750 start.go:125] createHost starting for "" (driver="docker")
	I1027 18:56:40.155179  357750 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1027 18:56:40.155479  357750 start.go:159] libmachine.API.Create for "addons-589824" (driver="docker")
	I1027 18:56:40.155521  357750 client.go:168] LocalClient.Create starting
	I1027 18:56:40.155691  357750 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca.pem
	I1027 18:56:40.271089  357750 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/cert.pem
	I1027 18:56:40.549554  357750 cli_runner.go:164] Run: docker network inspect addons-589824 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1027 18:56:40.568432  357750 cli_runner.go:211] docker network inspect addons-589824 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1027 18:56:40.568619  357750 network_create.go:284] running [docker network inspect addons-589824] to gather additional debugging logs...
	I1027 18:56:40.568660  357750 cli_runner.go:164] Run: docker network inspect addons-589824
	W1027 18:56:40.587026  357750 cli_runner.go:211] docker network inspect addons-589824 returned with exit code 1
	I1027 18:56:40.587063  357750 network_create.go:287] error running [docker network inspect addons-589824]: docker network inspect addons-589824: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-589824 not found
	I1027 18:56:40.587097  357750 network_create.go:289] output of [docker network inspect addons-589824]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-589824 not found
	
	** /stderr **
	I1027 18:56:40.587261  357750 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 18:56:40.606459  357750 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002018e80}
	I1027 18:56:40.606499  357750 network_create.go:124] attempt to create docker network addons-589824 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1027 18:56:40.606549  357750 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-589824 addons-589824
	I1027 18:56:40.666682  357750 network_create.go:108] docker network addons-589824 192.168.49.0/24 created
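The dedicated bridge network now exists on the first free private subnet the scan found (192.168.49.0/24, gateway 192.168.49.1). As a manual check — not something the test itself runs — the IPAM settings and minikube labels can be read back with:

    docker network inspect addons-589824 \
      --format '{{(index .IPAM.Config 0).Subnet}} gw={{(index .IPAM.Config 0).Gateway}} {{.Labels}}'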
	I1027 18:56:40.666738  357750 kic.go:121] calculated static IP "192.168.49.2" for the "addons-589824" container
	I1027 18:56:40.666843  357750 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1027 18:56:40.683809  357750 cli_runner.go:164] Run: docker volume create addons-589824 --label name.minikube.sigs.k8s.io=addons-589824 --label created_by.minikube.sigs.k8s.io=true
	I1027 18:56:40.704325  357750 oci.go:103] Successfully created a docker volume addons-589824
	I1027 18:56:40.704419  357750 cli_runner.go:164] Run: docker run --rm --name addons-589824-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-589824 --entrypoint /usr/bin/test -v addons-589824:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1027 18:56:46.988977  357750 cli_runner.go:217] Completed: docker run --rm --name addons-589824-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-589824 --entrypoint /usr/bin/test -v addons-589824:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib: (6.284496545s)
	I1027 18:56:46.989016  357750 oci.go:107] Successfully prepared a docker volume addons-589824
	I1027 18:56:46.989049  357750 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 18:56:46.989077  357750 kic.go:194] Starting extracting preloaded images to volume ...
	I1027 18:56:46.989155  357750 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21801-352833/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-589824:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1027 18:56:51.340410  357750 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21801-352833/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-589824:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.351201594s)
	I1027 18:56:51.340459  357750 kic.go:203] duration metric: took 4.351378042s to extract preloaded images to volume ...
	W1027 18:56:51.340557  357750 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1027 18:56:51.340590  357750 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1027 18:56:51.340634  357750 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1027 18:56:51.399273  357750 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-589824 --name addons-589824 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-589824 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-589824 --network addons-589824 --ip 192.168.49.2 --volume addons-589824:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1027 18:56:51.685323  357750 cli_runner.go:164] Run: docker container inspect addons-589824 --format={{.State.Running}}
	I1027 18:56:51.704229  357750 cli_runner.go:164] Run: docker container inspect addons-589824 --format={{.State.Status}}
	I1027 18:56:51.723106  357750 cli_runner.go:164] Run: docker exec addons-589824 stat /var/lib/dpkg/alternatives/iptables
	I1027 18:56:51.775121  357750 oci.go:144] the created container "addons-589824" has a running status.
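Note the run command above publishes every guest port on an ephemeral localhost port (--publish=127.0.0.1::22 and friends); that is where the SSH port 33140 seen further down comes from. The mapping can be read back at any time:

    docker port addons-589824 22/tcp    # prints 127.0.0.1:33140 in this run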
	I1027 18:56:51.775161  357750 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21801-352833/.minikube/machines/addons-589824/id_rsa...
	I1027 18:56:52.482837  357750 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21801-352833/.minikube/machines/addons-589824/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1027 18:56:52.509363  357750 cli_runner.go:164] Run: docker container inspect addons-589824 --format={{.State.Status}}
	I1027 18:56:52.528091  357750 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1027 18:56:52.528115  357750 kic_runner.go:114] Args: [docker exec --privileged addons-589824 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1027 18:56:52.579991  357750 cli_runner.go:164] Run: docker container inspect addons-589824 --format={{.State.Status}}
	I1027 18:56:52.598419  357750 machine.go:93] provisionDockerMachine start ...
	I1027 18:56:52.598547  357750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-589824
	I1027 18:56:52.617245  357750 main.go:141] libmachine: Using SSH client type: native
	I1027 18:56:52.617589  357750 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33140 <nil> <nil>}
	I1027 18:56:52.617610  357750 main.go:141] libmachine: About to run SSH command:
	hostname
	I1027 18:56:52.760584  357750 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-589824
	
	I1027 18:56:52.760616  357750 ubuntu.go:182] provisioning hostname "addons-589824"
	I1027 18:56:52.760684  357750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-589824
	I1027 18:56:52.779752  357750 main.go:141] libmachine: Using SSH client type: native
	I1027 18:56:52.780051  357750 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33140 <nil> <nil>}
	I1027 18:56:52.780074  357750 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-589824 && echo "addons-589824" | sudo tee /etc/hostname
	I1027 18:56:52.933129  357750 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-589824
	
	I1027 18:56:52.933224  357750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-589824
	I1027 18:56:52.951396  357750 main.go:141] libmachine: Using SSH client type: native
	I1027 18:56:52.951622  357750 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33140 <nil> <nil>}
	I1027 18:56:52.951640  357750 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-589824' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-589824/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-589824' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1027 18:56:53.094170  357750 main.go:141] libmachine: SSH cmd err, output: <nil>: 
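The script above pins the hostname in the guest's /etc/hosts: an existing 127.0.1.1 entry is rewritten in place, otherwise one is appended, so the file ends up containing a line like:

    127.0.1.1 addons-589824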
	I1027 18:56:53.094204  357750 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21801-352833/.minikube CaCertPath:/home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21801-352833/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21801-352833/.minikube}
	I1027 18:56:53.094308  357750 ubuntu.go:190] setting up certificates
	I1027 18:56:53.094327  357750 provision.go:84] configureAuth start
	I1027 18:56:53.094397  357750 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-589824
	I1027 18:56:53.113130  357750 provision.go:143] copyHostCerts
	I1027 18:56:53.113230  357750 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21801-352833/.minikube/ca.pem (1078 bytes)
	I1027 18:56:53.113362  357750 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21801-352833/.minikube/cert.pem (1123 bytes)
	I1027 18:56:53.113425  357750 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21801-352833/.minikube/key.pem (1679 bytes)
	I1027 18:56:53.113481  357750 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21801-352833/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca-key.pem org=jenkins.addons-589824 san=[127.0.0.1 192.168.49.2 addons-589824 localhost minikube]
	I1027 18:56:53.306978  357750 provision.go:177] copyRemoteCerts
	I1027 18:56:53.307052  357750 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1027 18:56:53.307091  357750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-589824
	I1027 18:56:53.326763  357750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/addons-589824/id_rsa Username:docker}
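Each sshutil line spells out a complete client configuration (IP, port, key path, user), so the session can be reproduced by hand when a run needs debugging; host-key checking has to be relaxed because the container's host key is freshly generated:

    ssh -o StrictHostKeyChecking=no -p 33140 \
        -i /home/jenkins/minikube-integration/21801-352833/.minikube/machines/addons-589824/id_rsa \
        docker@127.0.0.1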
	I1027 18:56:53.430205  357750 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1027 18:56:53.450230  357750 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1027 18:56:53.467768  357750 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1027 18:56:53.486406  357750 provision.go:87] duration metric: took 392.059607ms to configureAuth
	I1027 18:56:53.486438  357750 ubuntu.go:206] setting minikube options for container-runtime
	I1027 18:56:53.486604  357750 config.go:182] Loaded profile config "addons-589824": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 18:56:53.486704  357750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-589824
	I1027 18:56:53.504933  357750 main.go:141] libmachine: Using SSH client type: native
	I1027 18:56:53.505191  357750 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33140 <nil> <nil>}
	I1027 18:56:53.505211  357750 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1027 18:56:53.762641  357750 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1027 18:56:53.762670  357750 machine.go:96] duration metric: took 1.164205012s to provisionDockerMachine
	I1027 18:56:53.762684  357750 client.go:171] duration metric: took 13.607151259s to LocalClient.Create
	I1027 18:56:53.762709  357750 start.go:167] duration metric: took 13.607231373s to libmachine.API.Create "addons-589824"
	I1027 18:56:53.762719  357750 start.go:293] postStartSetup for "addons-589824" (driver="docker")
	I1027 18:56:53.762731  357750 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1027 18:56:53.762790  357750 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1027 18:56:53.762830  357750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-589824
	I1027 18:56:53.781365  357750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/addons-589824/id_rsa Username:docker}
	I1027 18:56:53.885330  357750 ssh_runner.go:195] Run: cat /etc/os-release
	I1027 18:56:53.889373  357750 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1027 18:56:53.889404  357750 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1027 18:56:53.889417  357750 filesync.go:126] Scanning /home/jenkins/minikube-integration/21801-352833/.minikube/addons for local assets ...
	I1027 18:56:53.889476  357750 filesync.go:126] Scanning /home/jenkins/minikube-integration/21801-352833/.minikube/files for local assets ...
	I1027 18:56:53.889499  357750 start.go:296] duration metric: took 126.774101ms for postStartSetup
	I1027 18:56:53.889796  357750 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-589824
	I1027 18:56:53.908118  357750 profile.go:143] Saving config to /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/addons-589824/config.json ...
	I1027 18:56:53.908437  357750 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 18:56:53.908484  357750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-589824
	I1027 18:56:53.926041  357750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/addons-589824/id_rsa Username:docker}
	I1027 18:56:54.024696  357750 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1027 18:56:54.029638  357750 start.go:128] duration metric: took 13.876689788s to createHost
	I1027 18:56:54.029730  357750 start.go:83] releasing machines lock for "addons-589824", held for 13.876878809s
	I1027 18:56:54.029835  357750 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-589824
	I1027 18:56:54.047873  357750 ssh_runner.go:195] Run: cat /version.json
	I1027 18:56:54.047905  357750 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1027 18:56:54.047923  357750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-589824
	I1027 18:56:54.048001  357750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-589824
	I1027 18:56:54.067393  357750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/addons-589824/id_rsa Username:docker}
	I1027 18:56:54.067666  357750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/addons-589824/id_rsa Username:docker}
	I1027 18:56:54.219891  357750 ssh_runner.go:195] Run: systemctl --version
	I1027 18:56:54.227001  357750 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1027 18:56:54.265699  357750 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1027 18:56:54.270727  357750 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1027 18:56:54.270808  357750 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1027 18:56:54.299409  357750 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1027 18:56:54.299437  357750 start.go:495] detecting cgroup driver to use...
	I1027 18:56:54.299475  357750 detect.go:190] detected "systemd" cgroup driver on host os
	I1027 18:56:54.299535  357750 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1027 18:56:54.319330  357750 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1027 18:56:54.332407  357750 docker.go:218] disabling cri-docker service (if available) ...
	I1027 18:56:54.332468  357750 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1027 18:56:54.349634  357750 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1027 18:56:54.368222  357750 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1027 18:56:54.451890  357750 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1027 18:56:54.544707  357750 docker.go:234] disabling docker service ...
	I1027 18:56:54.544772  357750 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1027 18:56:54.564677  357750 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1027 18:56:54.578425  357750 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1027 18:56:54.664330  357750 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1027 18:56:54.751537  357750 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1027 18:56:54.765429  357750 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1027 18:56:54.780905  357750 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1027 18:56:54.780984  357750 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 18:56:54.792531  357750 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1027 18:56:54.792606  357750 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 18:56:54.802483  357750 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 18:56:54.812394  357750 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 18:56:54.822074  357750 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1027 18:56:54.831168  357750 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 18:56:54.840833  357750 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 18:56:54.855842  357750 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
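Piecing together the sed edits above (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl), /etc/crio/crio.conf.d/02-crio.conf should now contain a fragment along these lines; the file itself is never printed in the log, so this is a reconstruction:

    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]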
	I1027 18:56:54.865391  357750 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1027 18:56:54.873511  357750 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1027 18:56:54.881518  357750 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 18:56:54.961828  357750 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1027 18:56:55.073756  357750 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1027 18:56:55.073828  357750 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1027 18:56:55.078174  357750 start.go:563] Will wait 60s for crictl version
	I1027 18:56:55.078228  357750 ssh_runner.go:195] Run: which crictl
	I1027 18:56:55.082359  357750 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1027 18:56:55.110435  357750 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1027 18:56:55.110530  357750 ssh_runner.go:195] Run: crio --version
	I1027 18:56:55.139360  357750 ssh_runner.go:195] Run: crio --version
	I1027 18:56:55.169621  357750 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1027 18:56:55.171067  357750 cli_runner.go:164] Run: docker network inspect addons-589824 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 18:56:55.189273  357750 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1027 18:56:55.193853  357750 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 18:56:55.205241  357750 kubeadm.go:883] updating cluster {Name:addons-589824 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-589824 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1027 18:56:55.205421  357750 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 18:56:55.205479  357750 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 18:56:55.237795  357750 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 18:56:55.237819  357750 crio.go:433] Images already preloaded, skipping extraction
	I1027 18:56:55.237866  357750 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 18:56:55.265648  357750 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 18:56:55.265671  357750 cache_images.go:85] Images are preloaded, skipping loading
	I1027 18:56:55.265680  357750 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1027 18:56:55.265769  357750 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-589824 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-589824 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1027 18:56:55.265839  357750 ssh_runner.go:195] Run: crio config
	I1027 18:56:55.315863  357750 cni.go:84] Creating CNI manager for ""
	I1027 18:56:55.315894  357750 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 18:56:55.315923  357750 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1027 18:56:55.315955  357750 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-589824 NodeName:addons-589824 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1027 18:56:55.316131  357750 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-589824"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
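	These four documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are written to /var/tmp/minikube/kubeadm.yaml.new a few lines below and fed to kubeadm init at the end of this log. Assuming a kubeadm new enough to carry the subcommand, the rendered file can also be sanity-checked on its own:

    kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml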
	
	I1027 18:56:55.316251  357750 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1027 18:56:55.325119  357750 binaries.go:44] Found k8s binaries, skipping transfer
	I1027 18:56:55.325209  357750 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1027 18:56:55.333671  357750 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1027 18:56:55.347688  357750 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1027 18:56:55.365313  357750 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1027 18:56:55.379065  357750 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1027 18:56:55.383214  357750 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
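Together with the earlier host.minikube.internal edit, the guest's /etc/hosts now resolves both convenience names minikube relies on:

    192.168.49.1	host.minikube.internal
    192.168.49.2	control-plane.minikube.internal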
	I1027 18:56:55.393678  357750 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 18:56:55.475622  357750 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 18:56:55.500012  357750 certs.go:69] Setting up /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/addons-589824 for IP: 192.168.49.2
	I1027 18:56:55.500050  357750 certs.go:195] generating shared ca certs ...
	I1027 18:56:55.500071  357750 certs.go:227] acquiring lock for ca certs: {Name:mk4bdbca32068f6f817fc35fdc496e961dc3e0d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 18:56:55.500243  357750 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21801-352833/.minikube/ca.key
	I1027 18:56:55.715980  357750 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21801-352833/.minikube/ca.crt ...
	I1027 18:56:55.716019  357750 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-352833/.minikube/ca.crt: {Name:mk44f63d199fa400a2827298fa03b78f2ed37f0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 18:56:55.716256  357750 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21801-352833/.minikube/ca.key ...
	I1027 18:56:55.716276  357750 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-352833/.minikube/ca.key: {Name:mk77897f052d08f6c3cf1811127f99888464704d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 18:56:55.716368  357750 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21801-352833/.minikube/proxy-client-ca.key
	I1027 18:56:55.825508  357750 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21801-352833/.minikube/proxy-client-ca.crt ...
	I1027 18:56:55.825543  357750 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-352833/.minikube/proxy-client-ca.crt: {Name:mkccbad3f1bcadbd55a94e0cd6d1d1c31beab8ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 18:56:55.825726  357750 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21801-352833/.minikube/proxy-client-ca.key ...
	I1027 18:56:55.825738  357750 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-352833/.minikube/proxy-client-ca.key: {Name:mk02f870bfeb39e7048e30d37d8283191317e991 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 18:56:55.825805  357750 certs.go:257] generating profile certs ...
	I1027 18:56:55.825868  357750 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/addons-589824/client.key
	I1027 18:56:55.825882  357750 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/addons-589824/client.crt with IP's: []
	I1027 18:56:55.977322  357750 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/addons-589824/client.crt ...
	I1027 18:56:55.977358  357750 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/addons-589824/client.crt: {Name:mk11bcab359d1a2cac5f29bcc03417bf021ca8fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 18:56:55.977541  357750 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/addons-589824/client.key ...
	I1027 18:56:55.977553  357750 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/addons-589824/client.key: {Name:mkc8659cd46457b56bd99c551ba501ba5e96a71c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 18:56:55.977625  357750 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/addons-589824/apiserver.key.750c5106
	I1027 18:56:55.977644  357750 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/addons-589824/apiserver.crt.750c5106 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1027 18:56:56.079289  357750 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/addons-589824/apiserver.crt.750c5106 ...
	I1027 18:56:56.079323  357750 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/addons-589824/apiserver.crt.750c5106: {Name:mk38b7d109dac7bba4e8ea89f6c34772ad93a1c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 18:56:56.079494  357750 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/addons-589824/apiserver.key.750c5106 ...
	I1027 18:56:56.079510  357750 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/addons-589824/apiserver.key.750c5106: {Name:mk70a14b973b8c7b46c2933f10da41c1a6cbb51e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 18:56:56.079584  357750 certs.go:382] copying /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/addons-589824/apiserver.crt.750c5106 -> /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/addons-589824/apiserver.crt
	I1027 18:56:56.079680  357750 certs.go:386] copying /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/addons-589824/apiserver.key.750c5106 -> /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/addons-589824/apiserver.key
	I1027 18:56:56.079729  357750 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/addons-589824/proxy-client.key
	I1027 18:56:56.079748  357750 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/addons-589824/proxy-client.crt with IP's: []
	I1027 18:56:56.389885  357750 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/addons-589824/proxy-client.crt ...
	I1027 18:56:56.389923  357750 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/addons-589824/proxy-client.crt: {Name:mka57fa39da97889933f822557c0bf7e18955f0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 18:56:56.390114  357750 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/addons-589824/proxy-client.key ...
	I1027 18:56:56.390130  357750 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/addons-589824/proxy-client.key: {Name:mke2e38d668075c4ade04ae6e6ee0f95aced8745 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
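The apiserver serving certificate generated above embeds the SANs from the crypto.go line: the in-cluster service VIP 10.96.0.1, loopback, 10.0.0.1, and the node IP 192.168.49.2. They can be confirmed after the fact (the -ext flag assumes OpenSSL 1.1.1 or newer):

    openssl x509 -noout -ext subjectAltName \
      -in /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/addons-589824/apiserver.crt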
	I1027 18:56:56.390338  357750 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca-key.pem (1675 bytes)
	I1027 18:56:56.390375  357750 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca.pem (1078 bytes)
	I1027 18:56:56.390402  357750 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/cert.pem (1123 bytes)
	I1027 18:56:56.390428  357750 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/key.pem (1679 bytes)
	I1027 18:56:56.391043  357750 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1027 18:56:56.410858  357750 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1027 18:56:56.429623  357750 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1027 18:56:56.448487  357750 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1027 18:56:56.467164  357750 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/addons-589824/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1027 18:56:56.486472  357750 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/addons-589824/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1027 18:56:56.505284  357750 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/addons-589824/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1027 18:56:56.524365  357750 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/addons-589824/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1027 18:56:56.543324  357750 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1027 18:56:56.564260  357750 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1027 18:56:56.578035  357750 ssh_runner.go:195] Run: openssl version
	I1027 18:56:56.584770  357750 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1027 18:56:56.596606  357750 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1027 18:56:56.600637  357750 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 27 18:56 /usr/share/ca-certificates/minikubeCA.pem
	I1027 18:56:56.600717  357750 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1027 18:56:56.634788  357750 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
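b5213941 is the subject hash computed by the openssl x509 -hash call two lines up; the symlink gives OpenSSL's hashed-directory CA lookup a name it can find. The generic pattern, equivalent to what the two Run lines do inside the guest:

    cert=/usr/share/ca-certificates/minikubeCA.pem
    sudo ln -fs "$cert" "/etc/ssl/certs/$(openssl x509 -hash -noout -in "$cert").0"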
	I1027 18:56:56.644348  357750 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1027 18:56:56.648324  357750 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1027 18:56:56.648380  357750 kubeadm.go:400] StartCluster: {Name:addons-589824 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-589824 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 18:56:56.648446  357750 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 18:56:56.648509  357750 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 18:56:56.677863  357750 cri.go:89] found id: ""
	I1027 18:56:56.677976  357750 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1027 18:56:56.686751  357750 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1027 18:56:56.695694  357750 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1027 18:56:56.695757  357750 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1027 18:56:56.704372  357750 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1027 18:56:56.704402  357750 kubeadm.go:157] found existing configuration files:
	
	I1027 18:56:56.704453  357750 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1027 18:56:56.712983  357750 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1027 18:56:56.713048  357750 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1027 18:56:56.721724  357750 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1027 18:56:56.730011  357750 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1027 18:56:56.730077  357750 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1027 18:56:56.738490  357750 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1027 18:56:56.747048  357750 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1027 18:56:56.747104  357750 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1027 18:56:56.755384  357750 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1027 18:56:56.763784  357750 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1027 18:56:56.763835  357750 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
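The grep/rm pairs above are minikube's stale-config cleanup: each of the four kubeconfig files that does not point at https://control-plane.minikube.internal:8443 is removed so that kubeadm init regenerates it (here all four are simply absent). The same loop, sketched in Go:

    package main

    import (
        "os"
        "os/exec"
    )

    func main() {
        const endpoint = "https://control-plane.minikube.internal:8443"
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            // grep exits non-zero when the endpoint is absent (or the file is
            // missing, as in this run); either way the stale file is removed.
            if err := exec.Command("grep", endpoint, f).Run(); err != nil {
                os.Remove(f)
            }
        }
    }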
	I1027 18:56:56.771819  357750 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1027 18:56:56.811666  357750 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1027 18:56:56.811750  357750 kubeadm.go:318] [preflight] Running pre-flight checks
	I1027 18:56:56.833868  357750 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1027 18:56:56.833957  357750 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1042-gcp
	I1027 18:56:56.834009  357750 kubeadm.go:318] OS: Linux
	I1027 18:56:56.834103  357750 kubeadm.go:318] CGROUPS_CPU: enabled
	I1027 18:56:56.834193  357750 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1027 18:56:56.834250  357750 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1027 18:56:56.834327  357750 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1027 18:56:56.834398  357750 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1027 18:56:56.834473  357750 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1027 18:56:56.834524  357750 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1027 18:56:56.834560  357750 kubeadm.go:318] CGROUPS_IO: enabled
	I1027 18:56:56.908261  357750 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1027 18:56:56.908413  357750 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1027 18:56:56.908569  357750 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1027 18:56:56.917843  357750 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1027 18:56:56.921762  357750 out.go:252]   - Generating certificates and keys ...
	I1027 18:56:56.921898  357750 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1027 18:56:56.922001  357750 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1027 18:56:57.231482  357750 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1027 18:56:57.386011  357750 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1027 18:56:57.669283  357750 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1027 18:56:57.820597  357750 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1027 18:56:58.074441  357750 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1027 18:56:58.074598  357750 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-589824 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1027 18:56:58.183627  357750 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1027 18:56:58.183838  357750 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-589824 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1027 18:56:58.753158  357750 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1027 18:56:59.117691  357750 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1027 18:56:59.312307  357750 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1027 18:56:59.312393  357750 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1027 18:56:59.809792  357750 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1027 18:57:00.239622  357750 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1027 18:57:00.446767  357750 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1027 18:57:00.597313  357750 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1027 18:57:00.790239  357750 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1027 18:57:00.790695  357750 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1027 18:57:00.794923  357750 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1027 18:57:00.796525  357750 out.go:252]   - Booting up control plane ...
	I1027 18:57:00.796633  357750 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1027 18:57:00.796764  357750 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1027 18:57:00.797320  357750 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1027 18:57:00.811724  357750 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1027 18:57:00.811902  357750 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1027 18:57:00.820281  357750 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1027 18:57:00.820428  357750 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1027 18:57:00.820494  357750 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1027 18:57:00.923742  357750 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1027 18:57:00.923919  357750 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1027 18:57:01.425534  357750 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 501.935238ms
	I1027 18:57:01.429591  357750 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1027 18:57:01.429755  357750 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1027 18:57:01.429895  357750 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1027 18:57:01.430018  357750 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1027 18:57:02.938314  357750 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.508653425s
	I1027 18:57:03.884339  357750 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.451954649s
	I1027 18:57:05.933058  357750 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.503402444s
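The three control-plane-check probes above poll kube-apiserver's /livez and the controller-manager's and scheduler's local health endpoints until each answers 200 OK. A minimal Go sketch of that kind of poll, assuming self-signed local endpoints (certificate verification is skipped here for illustration; kubeadm itself pins the cluster CA):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitHealthy polls an HTTPS health endpoint, as the control-plane-check
    // lines do, until it returns 200 OK or the timeout elapses.
    func waitHealthy(url string, timeout time.Duration) error {
        client := &http.Client{
            // Local self-signed endpoints; a real client would pin the CA.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
            Timeout:   2 * time.Second,
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if resp, err := client.Get(url); err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("%s not healthy after %v", url, timeout)
    }

    func main() {
        fmt.Println(waitHealthy("https://127.0.0.1:10259/livez", 4*time.Minute))
    }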
	I1027 18:57:05.947434  357750 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1027 18:57:05.964173  357750 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1027 18:57:05.976615  357750 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1027 18:57:05.976918  357750 kubeadm.go:318] [mark-control-plane] Marking the node addons-589824 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1027 18:57:05.986438  357750 kubeadm.go:318] [bootstrap-token] Using token: ll4eiv.hma7u1nr1623ia8e
	I1027 18:57:05.987933  357750 out.go:252]   - Configuring RBAC rules ...
	I1027 18:57:05.988086  357750 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1027 18:57:05.992476  357750 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1027 18:57:05.999042  357750 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1027 18:57:06.002113  357750 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1027 18:57:06.006323  357750 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1027 18:57:06.009518  357750 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1027 18:57:06.339713  357750 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1027 18:57:06.760751  357750 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1027 18:57:07.338899  357750 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1027 18:57:07.339766  357750 kubeadm.go:318] 
	I1027 18:57:07.339863  357750 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1027 18:57:07.339875  357750 kubeadm.go:318] 
	I1027 18:57:07.339991  357750 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1027 18:57:07.340015  357750 kubeadm.go:318] 
	I1027 18:57:07.340072  357750 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1027 18:57:07.340198  357750 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1027 18:57:07.340265  357750 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1027 18:57:07.340272  357750 kubeadm.go:318] 
	I1027 18:57:07.340339  357750 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1027 18:57:07.340345  357750 kubeadm.go:318] 
	I1027 18:57:07.340390  357750 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1027 18:57:07.340396  357750 kubeadm.go:318] 
	I1027 18:57:07.340439  357750 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1027 18:57:07.340505  357750 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1027 18:57:07.340564  357750 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1027 18:57:07.340570  357750 kubeadm.go:318] 
	I1027 18:57:07.340648  357750 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1027 18:57:07.340717  357750 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1027 18:57:07.340723  357750 kubeadm.go:318] 
	I1027 18:57:07.340793  357750 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token ll4eiv.hma7u1nr1623ia8e \
	I1027 18:57:07.340884  357750 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:ab29e81999671591f366788f5ae9ffb132789ebc71f7c0efdaecd38575a5ab6a \
	I1027 18:57:07.340904  357750 kubeadm.go:318] 	--control-plane 
	I1027 18:57:07.340922  357750 kubeadm.go:318] 
	I1027 18:57:07.341025  357750 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1027 18:57:07.341035  357750 kubeadm.go:318] 
	I1027 18:57:07.341142  357750 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token ll4eiv.hma7u1nr1623ia8e \
	I1027 18:57:07.341276  357750 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:ab29e81999671591f366788f5ae9ffb132789ebc71f7c0efdaecd38575a5ab6a 
	I1027 18:57:07.343780  357750 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1042-gcp\n", err: exit status 1
	I1027 18:57:07.343917  357750 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
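The --discovery-token-ca-cert-hash printed in the join commands above is kubeadm's public-key pin: the SHA-256 of the cluster CA certificate's DER-encoded Subject Public Key Info, prefixed with "sha256:". A minimal Go sketch that recomputes it from a CA certificate (the /etc/kubernetes/pki/ca.crt path follows kubeadm convention; this run keeps its certs under /var/lib/minikube/certs, per the [certs] line above):

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/hex"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            panic("no PEM block in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // The pin is SHA-256 over the DER-encoded SubjectPublicKeyInfo.
        spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
        if err != nil {
            panic(err)
        }
        sum := sha256.Sum256(spki)
        fmt.Printf("sha256:%s\n", hex.EncodeToString(sum[:]))
    }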
	I1027 18:57:07.343952  357750 cni.go:84] Creating CNI manager for ""
	I1027 18:57:07.343965  357750 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 18:57:07.346085  357750 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1027 18:57:07.347565  357750 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1027 18:57:07.352448  357750 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1027 18:57:07.352468  357750 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1027 18:57:07.366292  357750 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1027 18:57:07.575829  357750 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1027 18:57:07.575906  357750 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 18:57:07.575924  357750 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-589824 minikube.k8s.io/updated_at=2025_10_27T18_57_07_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6a09c31caecd3126cc8337ca95e52380a56f5a0f minikube.k8s.io/name=addons-589824 minikube.k8s.io/primary=true
	I1027 18:57:07.665657  357750 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 18:57:07.665674  357750 ops.go:34] apiserver oom_adj: -16
	I1027 18:57:08.165792  357750 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 18:57:08.666386  357750 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 18:57:09.165850  357750 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 18:57:09.666299  357750 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 18:57:10.166632  357750 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 18:57:10.666391  357750 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 18:57:11.166719  357750 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 18:57:11.666054  357750 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 18:57:12.166540  357750 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 18:57:12.232521  357750 kubeadm.go:1113] duration metric: took 4.656676265s to wait for elevateKubeSystemPrivileges
	I1027 18:57:12.232546  357750 kubeadm.go:402] duration metric: took 15.584173488s to StartCluster
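The burst of "kubectl get sa default" runs between 18:57:07 and 18:57:12 is the elevateKubeSystemPrivileges wait measured above: minikube polls until the default ServiceAccount exists before it creates the minikube-rbac ClusterRoleBinding. A sketch of that poll loop, with the timeout chosen for illustration:

    package main

    import (
        "os/exec"
        "time"
    )

    // waitForDefaultSA re-runs "kubectl get sa default" roughly every 500ms,
    // matching the cadence in the log, until the default ServiceAccount
    // exists or the deadline passes.
    func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) bool {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            cmd := exec.Command(kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
            if cmd.Run() == nil {
                return true
            }
            time.Sleep(500 * time.Millisecond)
        }
        return false
    }

    func main() {
        waitForDefaultSA("/var/lib/minikube/binaries/v1.34.1/kubectl",
            "/var/lib/minikube/kubeconfig", 5*time.Minute)
    }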
	I1027 18:57:12.232563  357750 settings.go:142] acquiring lock: {Name:mk8304c2106bf310642e0949fc0266ccb50f2f95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 18:57:12.232689  357750 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21801-352833/kubeconfig
	I1027 18:57:12.233238  357750 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-352833/kubeconfig: {Name:mk24cbe512a6907c874f3fb7a85390a8f9fd2b31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 18:57:12.233491  357750 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 18:57:12.233507  357750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1027 18:57:12.233597  357750 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1027 18:57:12.233710  357750 config.go:182] Loaded profile config "addons-589824": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 18:57:12.233763  357750 addons.go:69] Setting default-storageclass=true in profile "addons-589824"
	I1027 18:57:12.233774  357750 addons.go:69] Setting gcp-auth=true in profile "addons-589824"
	I1027 18:57:12.233776  357750 addons.go:69] Setting yakd=true in profile "addons-589824"
	I1027 18:57:12.233786  357750 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-589824"
	I1027 18:57:12.233792  357750 mustload.go:65] Loading cluster: addons-589824
	I1027 18:57:12.233797  357750 addons.go:238] Setting addon yakd=true in "addons-589824"
	I1027 18:57:12.233814  357750 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-589824"
	I1027 18:57:12.233843  357750 addons.go:69] Setting ingress-dns=true in profile "addons-589824"
	I1027 18:57:12.233834  357750 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-589824"
	I1027 18:57:12.233866  357750 addons.go:69] Setting inspektor-gadget=true in profile "addons-589824"
	I1027 18:57:12.233881  357750 addons.go:238] Setting addon inspektor-gadget=true in "addons-589824"
	I1027 18:57:12.233898  357750 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-589824"
	I1027 18:57:12.233908  357750 host.go:66] Checking if "addons-589824" exists ...
	I1027 18:57:12.233933  357750 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-589824"
	I1027 18:57:12.233959  357750 host.go:66] Checking if "addons-589824" exists ...
	I1027 18:57:12.233973  357750 config.go:182] Loaded profile config "addons-589824": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 18:57:12.233981  357750 addons.go:69] Setting cloud-spanner=true in profile "addons-589824"
	I1027 18:57:12.233997  357750 addons.go:238] Setting addon cloud-spanner=true in "addons-589824"
	I1027 18:57:12.234022  357750 host.go:66] Checking if "addons-589824" exists ...
	I1027 18:57:12.234240  357750 cli_runner.go:164] Run: docker container inspect addons-589824 --format={{.State.Status}}
	I1027 18:57:12.234281  357750 cli_runner.go:164] Run: docker container inspect addons-589824 --format={{.State.Status}}
	I1027 18:57:12.234409  357750 cli_runner.go:164] Run: docker container inspect addons-589824 --format={{.State.Status}}
	I1027 18:57:12.234487  357750 addons.go:69] Setting registry-creds=true in profile "addons-589824"
	I1027 18:57:12.234511  357750 addons.go:238] Setting addon registry-creds=true in "addons-589824"
	I1027 18:57:12.234546  357750 cli_runner.go:164] Run: docker container inspect addons-589824 --format={{.State.Status}}
	I1027 18:57:12.234562  357750 addons.go:69] Setting metrics-server=true in profile "addons-589824"
	I1027 18:57:12.234579  357750 addons.go:69] Setting registry=true in profile "addons-589824"
	I1027 18:57:12.234593  357750 addons.go:238] Setting addon registry=true in "addons-589824"
	I1027 18:57:12.234603  357750 addons.go:238] Setting addon metrics-server=true in "addons-589824"
	I1027 18:57:12.234613  357750 host.go:66] Checking if "addons-589824" exists ...
	I1027 18:57:12.234623  357750 host.go:66] Checking if "addons-589824" exists ...
	I1027 18:57:12.234743  357750 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-589824"
	I1027 18:57:12.234750  357750 addons.go:69] Setting volcano=true in profile "addons-589824"
	I1027 18:57:12.234779  357750 addons.go:69] Setting storage-provisioner=true in profile "addons-589824"
	I1027 18:57:12.234798  357750 addons.go:238] Setting addon storage-provisioner=true in "addons-589824"
	I1027 18:57:12.234827  357750 host.go:66] Checking if "addons-589824" exists ...
	I1027 18:57:12.234850  357750 addons.go:238] Setting addon volcano=true in "addons-589824"
	I1027 18:57:12.234909  357750 host.go:66] Checking if "addons-589824" exists ...
	I1027 18:57:12.235098  357750 cli_runner.go:164] Run: docker container inspect addons-589824 --format={{.State.Status}}
	I1027 18:57:12.235105  357750 cli_runner.go:164] Run: docker container inspect addons-589824 --format={{.State.Status}}
	I1027 18:57:12.234770  357750 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-589824"
	I1027 18:57:12.233833  357750 host.go:66] Checking if "addons-589824" exists ...
	I1027 18:57:12.236816  357750 cli_runner.go:164] Run: docker container inspect addons-589824 --format={{.State.Status}}
	I1027 18:57:12.233976  357750 host.go:66] Checking if "addons-589824" exists ...
	I1027 18:57:12.234551  357750 host.go:66] Checking if "addons-589824" exists ...
	I1027 18:57:12.237479  357750 cli_runner.go:164] Run: docker container inspect addons-589824 --format={{.State.Status}}
	I1027 18:57:12.237615  357750 cli_runner.go:164] Run: docker container inspect addons-589824 --format={{.State.Status}}
	I1027 18:57:12.234565  357750 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-589824"
	I1027 18:57:12.237701  357750 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-589824"
	I1027 18:57:12.237743  357750 host.go:66] Checking if "addons-589824" exists ...
	I1027 18:57:12.237790  357750 addons.go:69] Setting volumesnapshots=true in profile "addons-589824"
	I1027 18:57:12.237813  357750 addons.go:238] Setting addon volumesnapshots=true in "addons-589824"
	I1027 18:57:12.237842  357750 host.go:66] Checking if "addons-589824" exists ...
	I1027 18:57:12.234546  357750 cli_runner.go:164] Run: docker container inspect addons-589824 --format={{.State.Status}}
	I1027 18:57:12.233859  357750 addons.go:238] Setting addon ingress-dns=true in "addons-589824"
	I1027 18:57:12.238086  357750 host.go:66] Checking if "addons-589824" exists ...
	I1027 18:57:12.238299  357750 cli_runner.go:164] Run: docker container inspect addons-589824 --format={{.State.Status}}
	I1027 18:57:12.238417  357750 cli_runner.go:164] Run: docker container inspect addons-589824 --format={{.State.Status}}
	I1027 18:57:12.239853  357750 cli_runner.go:164] Run: docker container inspect addons-589824 --format={{.State.Status}}
	I1027 18:57:12.241412  357750 cli_runner.go:164] Run: docker container inspect addons-589824 --format={{.State.Status}}
	I1027 18:57:12.242559  357750 out.go:179] * Verifying Kubernetes components...
	I1027 18:57:12.233767  357750 addons.go:69] Setting ingress=true in profile "addons-589824"
	I1027 18:57:12.243395  357750 addons.go:238] Setting addon ingress=true in "addons-589824"
	I1027 18:57:12.243478  357750 host.go:66] Checking if "addons-589824" exists ...
	I1027 18:57:12.244500  357750 cli_runner.go:164] Run: docker container inspect addons-589824 --format={{.State.Status}}
	I1027 18:57:12.248856  357750 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 18:57:12.258320  357750 cli_runner.go:164] Run: docker container inspect addons-589824 --format={{.State.Status}}
	I1027 18:57:12.262972  357750 cli_runner.go:164] Run: docker container inspect addons-589824 --format={{.State.Status}}
	I1027 18:57:12.283314  357750 host.go:66] Checking if "addons-589824" exists ...
	I1027 18:57:12.295619  357750 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1027 18:57:12.296422  357750 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	I1027 18:57:12.297005  357750 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1027 18:57:12.297025  357750 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1027 18:57:12.297104  357750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-589824
	I1027 18:57:12.297692  357750 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1027 18:57:12.297709  357750 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1027 18:57:12.297808  357750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-589824
	I1027 18:57:12.298613  357750 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1027 18:57:12.301858  357750 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1027 18:57:12.304476  357750 addons.go:238] Setting addon default-storageclass=true in "addons-589824"
	I1027 18:57:12.304537  357750 host.go:66] Checking if "addons-589824" exists ...
	I1027 18:57:12.305127  357750 cli_runner.go:164] Run: docker container inspect addons-589824 --format={{.State.Status}}
	W1027 18:57:12.305446  357750 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1027 18:57:12.306042  357750 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1027 18:57:12.306236  357750 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1027 18:57:12.306260  357750 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1027 18:57:12.306331  357750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-589824
	I1027 18:57:12.308318  357750 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1027 18:57:12.308393  357750 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1027 18:57:12.309943  357750 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1027 18:57:12.309966  357750 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1027 18:57:12.310035  357750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-589824
	I1027 18:57:12.311441  357750 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1027 18:57:12.314358  357750 out.go:179]   - Using image docker.io/registry:3.0.0
	I1027 18:57:12.315960  357750 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1027 18:57:12.320880  357750 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1027 18:57:12.320952  357750 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1027 18:57:12.325327  357750 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1027 18:57:12.325354  357750 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1027 18:57:12.325426  357750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-589824
	I1027 18:57:12.330206  357750 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1027 18:57:12.334567  357750 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1027 18:57:12.337443  357750 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1027 18:57:12.339740  357750 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-589824"
	I1027 18:57:12.339796  357750 host.go:66] Checking if "addons-589824" exists ...
	I1027 18:57:12.340297  357750 cli_runner.go:164] Run: docker container inspect addons-589824 --format={{.State.Status}}
	I1027 18:57:12.342051  357750 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1027 18:57:12.342074  357750 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1027 18:57:12.342151  357750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-589824
	I1027 18:57:12.343584  357750 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1027 18:57:12.343603  357750 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1027 18:57:12.343667  357750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-589824
	I1027 18:57:12.344506  357750 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1027 18:57:12.346025  357750 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1027 18:57:12.346042  357750 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1027 18:57:12.346101  357750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-589824
	I1027 18:57:12.348080  357750 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1027 18:57:12.349632  357750 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1027 18:57:12.351863  357750 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1027 18:57:12.353323  357750 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1027 18:57:12.353354  357750 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1027 18:57:12.353431  357750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-589824
	I1027 18:57:12.361798  357750 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1027 18:57:12.366319  357750 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1027 18:57:12.369247  357750 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1027 18:57:12.369286  357750 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1027 18:57:12.369365  357750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-589824
	I1027 18:57:12.371323  357750 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1027 18:57:12.371352  357750 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1027 18:57:12.371436  357750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-589824
	I1027 18:57:12.385506  357750 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 18:57:12.387925  357750 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 18:57:12.387953  357750 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1027 18:57:12.388026  357750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-589824
	I1027 18:57:12.395178  357750 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1027 18:57:12.396344  357750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/addons-589824/id_rsa Username:docker}
	I1027 18:57:12.396523  357750 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1027 18:57:12.396543  357750 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1027 18:57:12.396620  357750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-589824
	I1027 18:57:12.403855  357750 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1027 18:57:12.404464  357750 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1027 18:57:12.404626  357750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-589824
	I1027 18:57:12.407940  357750 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
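The sed pipeline above rewrites the coredns ConfigMap in flight: it inserts a log directive before the errors line and a hosts block ahead of the forward stanza, so host.minikube.internal resolves to the gateway IP 192.168.49.1. Reconstructed from those sed expressions (surrounding lines as in a stock Corefile), the patched fragment should read roughly:

    log
    errors
    ...
    hosts {
       192.168.49.1 host.minikube.internal
       fallthrough
    }
    forward . /etc/resolv.conf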
	I1027 18:57:12.410552  357750 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1027 18:57:12.411659  357750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/addons-589824/id_rsa Username:docker}
	I1027 18:57:12.413951  357750 out.go:179]   - Using image docker.io/busybox:stable
	I1027 18:57:12.415427  357750 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1027 18:57:12.415445  357750 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1027 18:57:12.415525  357750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-589824
	I1027 18:57:12.424827  357750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/addons-589824/id_rsa Username:docker}
	I1027 18:57:12.425472  357750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/addons-589824/id_rsa Username:docker}
	I1027 18:57:12.426453  357750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/addons-589824/id_rsa Username:docker}
	I1027 18:57:12.427157  357750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/addons-589824/id_rsa Username:docker}
	I1027 18:57:12.427824  357750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/addons-589824/id_rsa Username:docker}
	I1027 18:57:12.430254  357750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/addons-589824/id_rsa Username:docker}
	I1027 18:57:12.433151  357750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/addons-589824/id_rsa Username:docker}
	I1027 18:57:12.435341  357750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/addons-589824/id_rsa Username:docker}
	I1027 18:57:12.437790  357750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/addons-589824/id_rsa Username:docker}
	W1027 18:57:12.446277  357750 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1027 18:57:12.446458  357750 retry.go:31] will retry after 316.324147ms: ssh: handshake failed: EOF
	I1027 18:57:12.446378  357750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/addons-589824/id_rsa Username:docker}
	I1027 18:57:12.457231  357750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/addons-589824/id_rsa Username:docker}
	W1027 18:57:12.463308  357750 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1027 18:57:12.463404  357750 retry.go:31] will retry after 233.328096ms: ssh: handshake failed: EOF
	I1027 18:57:12.467761  357750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/addons-589824/id_rsa Username:docker}
	I1027 18:57:12.473735  357750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/addons-589824/id_rsa Username:docker}
	W1027 18:57:12.477829  357750 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1027 18:57:12.477858  357750 retry.go:31] will retry after 200.746442ms: ssh: handshake failed: EOF
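Each of the repeated docker container inspect -f calls above resolves the host port Docker published for the container's 22/tcp endpoint, which is why every subsequent ssh client dials 127.0.0.1:33140. A sketch of the same lookup using the exact template from the log, shelling out to the docker CLI (error handling kept minimal):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // hostSSHPort returns the host port Docker publishes for the container's
    // 22/tcp endpoint, using the Go template seen in the cli_runner lines.
    func hostSSHPort(container string) (string, error) {
        tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
        out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        port, err := hostSSHPort("addons-589824")
        if err != nil {
            panic(err)
        }
        fmt.Println("ssh on 127.0.0.1:" + port) // 33140 in this run
    }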
	I1027 18:57:12.521921  357750 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 18:57:12.576307  357750 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1027 18:57:12.576343  357750 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1027 18:57:12.576638  357750 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1027 18:57:12.576661  357750 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1027 18:57:12.590988  357750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1027 18:57:12.596742  357750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 18:57:12.598902  357750 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1027 18:57:12.598929  357750 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1027 18:57:12.600733  357750 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1027 18:57:12.600758  357750 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1027 18:57:12.605339  357750 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1027 18:57:12.605432  357750 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1027 18:57:12.632114  357750 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1027 18:57:12.632154  357750 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1027 18:57:12.635358  357750 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1027 18:57:12.635386  357750 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1027 18:57:12.638751  357750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1027 18:57:12.640759  357750 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1027 18:57:12.640783  357750 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1027 18:57:12.644031  357750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1027 18:57:12.645007  357750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1027 18:57:12.649734  357750 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1027 18:57:12.649760  357750 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1027 18:57:12.663467  357750 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1027 18:57:12.663502  357750 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1027 18:57:12.666781  357750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1027 18:57:12.672294  357750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 18:57:12.672940  357750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1027 18:57:12.675362  357750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1027 18:57:12.688262  357750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1027 18:57:12.700648  357750 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1027 18:57:12.700694  357750 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1027 18:57:12.702874  357750 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1027 18:57:12.702966  357750 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1027 18:57:12.740693  357750 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1027 18:57:12.740724  357750 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1027 18:57:12.758058  357750 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1027 18:57:12.758154  357750 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1027 18:57:12.776771  357750 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1027 18:57:12.776887  357750 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1027 18:57:12.797989  357750 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1027 18:57:12.800179  357750 node_ready.go:35] waiting up to 6m0s for node "addons-589824" to be "Ready" ...
	I1027 18:57:12.806349  357750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1027 18:57:12.826392  357750 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1027 18:57:12.826426  357750 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1027 18:57:12.889726  357750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1027 18:57:12.913075  357750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1027 18:57:12.922814  357750 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1027 18:57:12.922841  357750 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1027 18:57:12.982744  357750 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1027 18:57:12.982775  357750 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1027 18:57:13.017618  357750 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1027 18:57:13.017647  357750 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1027 18:57:13.043806  357750 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1027 18:57:13.043839  357750 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1027 18:57:13.084632  357750 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1027 18:57:13.084725  357750 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1027 18:57:13.124708  357750 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1027 18:57:13.124748  357750 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1027 18:57:13.163085  357750 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1027 18:57:13.163122  357750 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1027 18:57:13.200594  357750 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1027 18:57:13.200636  357750 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1027 18:57:13.229807  357750 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1027 18:57:13.229837  357750 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1027 18:57:13.255243  357750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1027 18:57:13.284691  357750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1027 18:57:13.303994  357750 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-589824" context rescaled to 1 replicas
	I1027 18:57:13.621741  357750 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.024955931s)
	I1027 18:57:13.623669  357750 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (1.032623239s)
	I1027 18:57:13.885197  357750 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.218373268s)
	I1027 18:57:13.885242  357750 addons.go:479] Verifying addon ingress=true in "addons-589824"
	I1027 18:57:13.885307  357750 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.212971719s)
	I1027 18:57:13.885378  357750 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.209993765s)
	I1027 18:57:13.885411  357750 addons.go:479] Verifying addon registry=true in "addons-589824"
	I1027 18:57:13.885490  357750 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.197192096s)
	I1027 18:57:13.885520  357750 addons.go:479] Verifying addon metrics-server=true in "addons-589824"
	I1027 18:57:13.885346  357750 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.212373152s)
	W1027 18:57:13.885347  357750 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:13.885633  357750 retry.go:31] will retry after 305.173504ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:13.887770  357750 out.go:179] * Verifying ingress addon...
	I1027 18:57:13.887804  357750 out.go:179] * Verifying registry addon...
	I1027 18:57:13.889787  357750 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1027 18:57:13.889822  357750 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1027 18:57:13.892795  357750 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1027 18:57:13.892904  357750 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1027 18:57:13.892923  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:14.191837  357750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 18:57:14.317794  357750 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.511395496s)
	W1027 18:57:14.317847  357750 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1027 18:57:14.317875  357750 retry.go:31] will retry after 180.995068ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
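
The csi-hostpath-snapshotclass failure is an ordering problem rather than a bad manifest: the VolumeSnapshotClass object is submitted in the same batch as the CRD that defines it, and the API server has not finished registering the new kind when the object arrives, hence "ensure CRDs are installed first". The deterministic fix is to apply the CRDs, wait for their Established condition, and only then apply the custom resources; the retries below eventually converge on the same outcome. A sketch of the explicit ordering, shelling out to kubectl the way the log does (file paths are taken from the log; the wait step is the standard kubectl idiom, not a command minikube runs here):

    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    func run(args ...string) error {
        cmd := exec.Command("kubectl", args...)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        return cmd.Run()
    }

    func main() {
        // 1. Register the CRD first.
        if err := run("apply", "-f",
            "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml"); err != nil {
            log.Fatal(err)
        }
        // 2. Block until the API server is actually serving the new type.
        if err := run("wait", "--for", "condition=established", "--timeout=60s",
            "crd/volumesnapshotclasses.snapshot.storage.k8s.io"); err != nil {
            log.Fatal(err)
        }
        // 3. Only now can a VolumeSnapshotClass be created.
        if err := run("apply", "-f",
            "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml"); err != nil {
            log.Fatal(err)
        }
    }

kubectl wait --for condition=established is the documented way to block until a CRD is served, which removes the race without a retry loop.
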
	I1027 18:57:14.317905  357750 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.428148396s)
	I1027 18:57:14.317987  357750 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.404881701s)
	I1027 18:57:14.318406  357750 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.063111192s)
	I1027 18:57:14.318438  357750 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-589824"
	I1027 18:57:14.318765  357750 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.034028321s)
	I1027 18:57:14.320352  357750 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-589824 service yakd-dashboard -n yakd-dashboard
	
	I1027 18:57:14.320445  357750 out.go:179] * Verifying csi-hostpath-driver addon...
	I1027 18:57:14.322990  357750 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1027 18:57:14.330398  357750 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1027 18:57:14.330517  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1027 18:57:14.335248  357750 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class csi-hostpath-sc as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "csi-hostpath-sc": the object has been modified; please apply your changes to the latest version and try again]
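
The storage-provisioner-rancher warning above is an optimistic-concurrency conflict: two addon routines update the same StorageClass, the losing update carries a stale resourceVersion, and the server rejects it with "the object has been modified". client-go ships a helper for exactly this case, retry.RetryOnConflict, which re-reads the object and re-applies the mutation on every conflict. A sketch under the assumption of an already-built clientset (the package and function names are illustrative; this is the standard pattern, not minikube's code):

    package storageutil

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/util/retry"
    )

    // markNonDefault clears the default-class annotation on a StorageClass,
    // re-reading and retrying whenever the server reports a version conflict.
    func markNonDefault(ctx context.Context, cs kubernetes.Interface, name string) error {
        return retry.RetryOnConflict(retry.DefaultRetry, func() error {
            sc, err := cs.StorageV1().StorageClasses().Get(ctx, name, metav1.GetOptions{})
            if err != nil {
                return err
            }
            if sc.Annotations == nil {
                sc.Annotations = map[string]string{}
            }
            sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "false"
            _, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
            return err // a Conflict here makes RetryOnConflict go around again
        })
    }

Patching just the annotation would sidestep the conflict entirely, but the read-modify-write-with-retry form shown here is the common client-go idiom.
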
	I1027 18:57:14.394203  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:14.394306  357750 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1027 18:57:14.394325  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:14.499596  357750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	W1027 18:57:14.803404  357750 node_ready.go:57] node "addons-589824" has "Ready":"False" status (will retry)
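
The node_ready check that keeps reporting "Ready":"False" gates everything else in this phase: workload pods cannot run until the kubelet flips the node's Ready condition to True (typically once the CNI is up), which is consistent with every selector below sitting in Pending. The condition lookup itself is small; an illustrative sketch, not minikube's source:

    package nodeutil

    import (
        corev1 "k8s.io/api/core/v1"
    )

    // nodeReady reports whether a node's Ready condition is True, the same
    // condition node_ready.go is polling for in the lines above.
    func nodeReady(node *corev1.Node) bool {
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false // no Ready condition reported yet: treat as not ready
    }
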
	I1027 18:57:14.826980  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1027 18:57:14.865618  357750 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:14.865656  357750 retry.go:31] will retry after 211.067145ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:14.893599  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:14.893810  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:15.077024  357750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 18:57:15.326361  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:15.392689  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:15.392756  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:15.827405  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:15.893589  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:15.893772  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:16.326327  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:16.393309  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:16.393309  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1027 18:57:16.803936  357750 node_ready.go:57] node "addons-589824" has "Ready":"False" status (will retry)
	I1027 18:57:16.826536  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:16.927095  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:16.927407  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:17.036467  357750 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.53680921s)
	I1027 18:57:17.036563  357750 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.959497191s)
	W1027 18:57:17.036606  357750 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:17.036635  357750 retry.go:31] will retry after 790.979447ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:17.327341  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:17.428402  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:17.428452  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:17.827179  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:17.828200  357750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 18:57:17.892888  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:17.893075  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:18.327499  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:18.392768  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:18.392848  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1027 18:57:18.393435  357750 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:18.393461  357750 retry.go:31] will retry after 991.470073ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:18.826722  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:18.893711  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:18.893979  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1027 18:57:19.302903  357750 node_ready.go:57] node "addons-589824" has "Ready":"False" status (will retry)
	I1027 18:57:19.328526  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:19.385611  357750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 18:57:19.392910  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:19.392984  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:19.827452  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:19.893010  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:19.893077  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:19.902429  357750 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1027 18:57:19.902517  357750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-589824
	I1027 18:57:19.923575  357750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/addons-589824/id_rsa Username:docker}
	W1027 18:57:19.956481  357750 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:19.956525  357750 retry.go:31] will retry after 1.650834557s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:20.032158  357750 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1027 18:57:20.045081  357750 addons.go:238] Setting addon gcp-auth=true in "addons-589824"
	I1027 18:57:20.045151  357750 host.go:66] Checking if "addons-589824" exists ...
	I1027 18:57:20.045672  357750 cli_runner.go:164] Run: docker container inspect addons-589824 --format={{.State.Status}}
	I1027 18:57:20.064402  357750 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1027 18:57:20.064462  357750 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-589824
	I1027 18:57:20.083148  357750 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/addons-589824/id_rsa Username:docker}
	I1027 18:57:20.183714  357750 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1027 18:57:20.185108  357750 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1027 18:57:20.186324  357750 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1027 18:57:20.186342  357750 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1027 18:57:20.201032  357750 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1027 18:57:20.201072  357750 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1027 18:57:20.215117  357750 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1027 18:57:20.215159  357750 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1027 18:57:20.229544  357750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1027 18:57:20.327531  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:20.393454  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:20.393530  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:20.553465  357750 addons.go:479] Verifying addon gcp-auth=true in "addons-589824"
	I1027 18:57:20.554686  357750 out.go:179] * Verifying gcp-auth addon...
	I1027 18:57:20.557676  357750 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1027 18:57:20.560786  357750 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1027 18:57:20.560810  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:20.826368  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:20.893436  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:20.893596  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:21.061467  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1027 18:57:21.303672  357750 node_ready.go:57] node "addons-589824" has "Ready":"False" status (will retry)
	I1027 18:57:21.326753  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:21.394089  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:21.394216  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:21.561856  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:21.607899  357750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 18:57:21.826862  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:21.893731  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:21.893762  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:22.061682  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1027 18:57:22.172966  357750 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:22.173003  357750 retry.go:31] will retry after 1.702668474s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:22.326642  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:22.393584  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:22.393743  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:22.560728  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:22.826515  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:22.893513  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:22.893694  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:23.060890  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1027 18:57:23.304062  357750 node_ready.go:57] node "addons-589824" has "Ready":"False" status (will retry)
	I1027 18:57:23.326126  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:23.393246  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:23.393250  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:23.561309  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:23.826902  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:23.875927  357750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 18:57:23.893829  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:23.893849  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:24.060909  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:24.326466  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:24.393536  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:24.393674  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1027 18:57:24.445779  357750 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:24.445813  357750 retry.go:31] will retry after 2.853721544s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:24.560702  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:24.826571  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:24.893347  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:24.893546  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:25.061595  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:25.326667  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:25.393652  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:25.393734  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:25.561947  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1027 18:57:25.803972  357750 node_ready.go:57] node "addons-589824" has "Ready":"False" status (will retry)
	I1027 18:57:25.827550  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:25.893485  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:25.893722  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:26.061575  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:26.326734  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:26.393831  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:26.393991  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:26.560565  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:26.826927  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:26.892798  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:26.892850  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:27.060764  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:27.299955  357750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 18:57:27.327189  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:27.393319  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:27.393465  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:27.562907  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:27.827155  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1027 18:57:27.866554  357750 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:27.866598  357750 retry.go:31] will retry after 2.412375323s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:27.893638  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:27.893749  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:28.060887  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1027 18:57:28.303812  357750 node_ready.go:57] node "addons-589824" has "Ready":"False" status (will retry)
	I1027 18:57:28.326548  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:28.393454  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:28.393704  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:28.561545  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:28.826479  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:28.893365  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:28.893575  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:29.061703  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:29.326771  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:29.393653  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:29.393905  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:29.561389  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:29.827004  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:29.892783  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:29.892856  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:30.060518  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:30.279801  357750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 18:57:30.326039  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:30.392721  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:30.392887  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:30.560856  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1027 18:57:30.803556  357750 node_ready.go:57] node "addons-589824" has "Ready":"False" status (will retry)
	I1027 18:57:30.826615  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1027 18:57:30.844100  357750 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:30.844150  357750 retry.go:31] will retry after 8.393736916s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:30.893225  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:30.893271  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:31.061047  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:31.326257  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:31.393307  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:31.393374  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:31.561284  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:31.827160  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:31.893436  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:31.893493  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:32.061229  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:32.326355  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:32.393316  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:32.393391  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:32.561274  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:32.826972  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:32.893188  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:32.893374  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:33.061419  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1027 18:57:33.303322  357750 node_ready.go:57] node "addons-589824" has "Ready":"False" status (will retry)
	I1027 18:57:33.326191  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:33.392934  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:33.393147  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:33.560930  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:33.826182  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:33.893115  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:33.893226  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:34.061254  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:34.326189  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:34.393082  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:34.393392  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:34.561353  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:34.827061  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:34.892955  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:34.893085  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:35.061100  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1027 18:57:35.304240  357750 node_ready.go:57] node "addons-589824" has "Ready":"False" status (will retry)
	I1027 18:57:35.326369  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:35.393356  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:35.393426  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:35.562231  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:35.826644  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:35.893797  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:35.894043  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:36.060795  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:36.326929  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:36.392898  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:36.393144  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:36.561235  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:36.826022  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:36.893061  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:36.893128  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:37.061055  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:37.326853  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:37.393983  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:37.394046  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:37.561462  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1027 18:57:37.803225  357750 node_ready.go:57] node "addons-589824" has "Ready":"False" status (will retry)
	I1027 18:57:37.826737  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:37.893658  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:37.893810  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:38.061150  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:38.326997  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:38.392750  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:38.392796  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:38.560708  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:38.826707  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:38.894646  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:38.894780  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:39.061504  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:39.238827  357750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 18:57:39.326452  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:39.393669  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:39.393882  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:39.560996  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1027 18:57:39.807425  357750 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:39.807473  357750 retry.go:31] will retry after 9.722408552s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:39.826295  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:39.893344  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:39.893480  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:40.061449  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1027 18:57:40.303453  357750 node_ready.go:57] node "addons-589824" has "Ready":"False" status (will retry)
	I1027 18:57:40.326300  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:40.393478  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:40.393741  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:40.561811  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:40.826790  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:40.893346  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:40.893388  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:41.061562  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:41.326553  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:41.393796  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:41.394004  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:41.560905  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:41.826556  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:41.893471  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:41.893629  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:42.061579  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1027 18:57:42.303797  357750 node_ready.go:57] node "addons-589824" has "Ready":"False" status (will retry)
	I1027 18:57:42.327007  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:42.392881  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:42.393023  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:42.561003  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:42.826815  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:42.894128  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:42.894269  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:43.061503  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:43.326703  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:43.393565  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:43.393683  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:43.561479  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:43.826545  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:43.893460  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:43.893533  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:44.061609  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:44.326486  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:44.393716  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:44.393993  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:44.560700  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1027 18:57:44.803580  357750 node_ready.go:57] node "addons-589824" has "Ready":"False" status (will retry)
	I1027 18:57:44.826346  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:44.893672  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:44.894589  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:45.060806  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:45.326102  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:45.392980  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:45.393145  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:45.561882  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:45.826109  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:45.892939  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:45.893187  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:46.060900  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:46.326947  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:46.393060  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:46.393331  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:46.561441  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1027 18:57:46.803697  357750 node_ready.go:57] node "addons-589824" has "Ready":"False" status (will retry)
	I1027 18:57:46.826700  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:46.893855  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:46.893926  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:47.061110  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:47.326658  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:47.393687  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:47.393813  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:47.560709  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:47.827378  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:47.893454  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:47.893637  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:48.060810  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:48.326715  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:48.394046  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:48.394252  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:48.561244  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:48.826349  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:48.893295  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:48.893360  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:49.061046  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1027 18:57:49.303198  357750 node_ready.go:57] node "addons-589824" has "Ready":"False" status (will retry)
	I1027 18:57:49.325946  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:49.393199  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:49.393343  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:49.530578  357750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 18:57:49.560948  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:49.827612  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:49.893879  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:49.893948  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:50.061420  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1027 18:57:50.094428  357750 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:50.094465  357750 retry.go:31] will retry after 8.260223514s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
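
The retry.go:31 lines show the failing apply being re-run after jittered delays (9.72s, then 8.26s, then 25.89s later in this log). A minimal sketch of that retry-with-backoff pattern, assuming nothing about minikube's actual implementation:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retry runs fn up to maxAttempts times, sleeping a jittered delay between
// failed attempts, in the spirit of the "will retry after ..." lines above.
func retry(maxAttempts int, base time.Duration, fn func() error) error {
	var err error
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		if err = fn(); err == nil {
			return nil
		}
		if attempt == maxAttempts {
			break
		}
		delay := base + time.Duration(rand.Int63n(int64(base))) // jitter in [base, 2*base)
		fmt.Printf("will retry after %s: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	_ = retry(3, 5*time.Second, func() error {
		return fmt.Errorf("apply failed") // stand-in for the failing kubectl apply
	})
}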
	I1027 18:57:50.326537  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:50.393505  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:50.393555  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:50.561501  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:50.826875  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:50.892896  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:50.893039  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:51.061225  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1027 18:57:51.303343  357750 node_ready.go:57] node "addons-589824" has "Ready":"False" status (will retry)
	I1027 18:57:51.326318  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:51.393233  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:51.393403  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:51.561550  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:51.826157  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:51.893017  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:51.893223  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:52.061032  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:52.326127  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:52.392908  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:52.393023  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:52.561376  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:52.825974  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:52.893076  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:52.893243  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:53.061431  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:53.303329  357750 node_ready.go:49] node "addons-589824" is "Ready"
	I1027 18:57:53.303372  357750 node_ready.go:38] duration metric: took 40.503152177s for node "addons-589824" to be "Ready" ...
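
At this point the node_ready.go warnings flip from "Ready":"False" to Ready, meaning the node's Ready condition now reports status True. A hedged client-go sketch of that check, with the kubeconfig path and node name taken from the log and the polling wrapper omitted:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "addons-589824", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			fmt.Printf("node %q has \"Ready\":%q\n", node.Name, c.Status)
		}
	}
}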
	I1027 18:57:53.303396  357750 api_server.go:52] waiting for apiserver process to appear ...
	I1027 18:57:53.303472  357750 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 18:57:53.319020  357750 api_server.go:72] duration metric: took 41.085489885s to wait for apiserver process to appear ...
	I1027 18:57:53.319050  357750 api_server.go:88] waiting for apiserver healthz status ...
	I1027 18:57:53.319082  357750 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1027 18:57:53.325074  357750 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
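
The healthz gate is a plain HTTPS GET against the apiserver; a 200 response with body "ok" ends the wait. A standalone sketch of that probe (InsecureSkipVerify is an assumption kept for brevity; the real check authenticates with the cluster's client certificates):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get("https://192.168.49.2:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d: %s\n", resp.StatusCode, body) // a healthy apiserver prints "200: ok"
}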
	I1027 18:57:53.326191  357750 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1027 18:57:53.326211  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:53.326278  357750 api_server.go:141] control plane version: v1.34.1
	I1027 18:57:53.326307  357750 api_server.go:131] duration metric: took 7.249289ms to wait for apiserver health ...
	I1027 18:57:53.326322  357750 system_pods.go:43] waiting for kube-system pods to appear ...
	I1027 18:57:53.331066  357750 system_pods.go:59] 20 kube-system pods found
	I1027 18:57:53.331107  357750 system_pods.go:61] "amd-gpu-device-plugin-6nrwh" [5a9374bd-7f34-436b-aed2-97c869cd1032] Pending
	I1027 18:57:53.331121  357750 system_pods.go:61] "coredns-66bc5c9577-lz5j4" [fe4fbd50-09cd-482f-b62e-9b5926b57e54] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 18:57:53.331145  357750 system_pods.go:61] "csi-hostpath-attacher-0" [534becd1-bea4-43a8-8269-447c5ea9deb6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1027 18:57:53.331154  357750 system_pods.go:61] "csi-hostpath-resizer-0" [38610a93-addc-4526-b959-7aa8963d68e7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1027 18:57:53.331160  357750 system_pods.go:61] "csi-hostpathplugin-jlszq" [3c831b0a-7336-491d-9c07-f8fb8692e0bf] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1027 18:57:53.331168  357750 system_pods.go:61] "etcd-addons-589824" [60f2cd63-082d-4ce7-9c01-6d7f6be68d2d] Running
	I1027 18:57:53.331173  357750 system_pods.go:61] "kindnet-4rz7d" [6c4e893b-3105-4baa-a073-e2364d1724cb] Running
	I1027 18:57:53.331176  357750 system_pods.go:61] "kube-apiserver-addons-589824" [9637af46-c973-4a7e-ad3d-7d9685db10fd] Running
	I1027 18:57:53.331180  357750 system_pods.go:61] "kube-controller-manager-addons-589824" [0339aca8-8d04-47ae-8947-9f8e7d261bc3] Running
	I1027 18:57:53.331189  357750 system_pods.go:61] "kube-ingress-dns-minikube" [fb9d7bfe-33a0-427f-a31b-c37973e40580] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1027 18:57:53.331192  357750 system_pods.go:61] "kube-proxy-77bv8" [8cdca916-4b76-4778-9aca-fd1e93ae4ed3] Running
	I1027 18:57:53.331196  357750 system_pods.go:61] "kube-scheduler-addons-589824" [2812a900-927e-4bed-9f4c-5f69d59f14b2] Running
	I1027 18:57:53.331201  357750 system_pods.go:61] "metrics-server-85b7d694d7-6mqmx" [1a22ca13-4aaa-4ac6-b5ad-df2b9ba87dfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1027 18:57:53.331210  357750 system_pods.go:61] "nvidia-device-plugin-daemonset-5m5rl" [911fc5e9-aa0b-494e-8eff-0c513d2b6625] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1027 18:57:53.331218  357750 system_pods.go:61] "registry-6b586f9694-bvh6h" [3922e9b1-ef70-4fce-b650-f88d2755f9ab] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1027 18:57:53.331224  357750 system_pods.go:61] "registry-creds-764b6fb674-bmdlm" [a18b1d31-61dd-4c8e-864d-c77043f43d5c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1027 18:57:53.331236  357750 system_pods.go:61] "registry-proxy-62t66" [05d41077-cfc6-442d-baee-0103823e1b16] Pending
	I1027 18:57:53.331240  357750 system_pods.go:61] "snapshot-controller-7d9fbc56b8-jx9vc" [05d8492e-9dd2-485b-a457-dc9625bb6a31] Pending
	I1027 18:57:53.331245  357750 system_pods.go:61] "snapshot-controller-7d9fbc56b8-m2794" [fc10956f-4e9a-4732-aacf-d844aab7d64a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1027 18:57:53.331249  357750 system_pods.go:61] "storage-provisioner" [b33a6bd4-fbbc-4726-a6e9-0a5a03e9f7ad] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 18:57:53.331256  357750 system_pods.go:74] duration metric: took 4.922725ms to wait for pod list to return data ...
	I1027 18:57:53.331267  357750 default_sa.go:34] waiting for default service account to be created ...
	I1027 18:57:53.333529  357750 default_sa.go:45] found service account: "default"
	I1027 18:57:53.333552  357750 default_sa.go:55] duration metric: took 2.279416ms for default service account to be created ...
	I1027 18:57:53.333562  357750 system_pods.go:116] waiting for k8s-apps to be running ...
	I1027 18:57:53.336887  357750 system_pods.go:86] 20 kube-system pods found
	I1027 18:57:53.336915  357750 system_pods.go:89] "amd-gpu-device-plugin-6nrwh" [5a9374bd-7f34-436b-aed2-97c869cd1032] Pending
	I1027 18:57:53.336923  357750 system_pods.go:89] "coredns-66bc5c9577-lz5j4" [fe4fbd50-09cd-482f-b62e-9b5926b57e54] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 18:57:53.336928  357750 system_pods.go:89] "csi-hostpath-attacher-0" [534becd1-bea4-43a8-8269-447c5ea9deb6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1027 18:57:53.336935  357750 system_pods.go:89] "csi-hostpath-resizer-0" [38610a93-addc-4526-b959-7aa8963d68e7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1027 18:57:53.336943  357750 system_pods.go:89] "csi-hostpathplugin-jlszq" [3c831b0a-7336-491d-9c07-f8fb8692e0bf] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1027 18:57:53.336947  357750 system_pods.go:89] "etcd-addons-589824" [60f2cd63-082d-4ce7-9c01-6d7f6be68d2d] Running
	I1027 18:57:53.336951  357750 system_pods.go:89] "kindnet-4rz7d" [6c4e893b-3105-4baa-a073-e2364d1724cb] Running
	I1027 18:57:53.336955  357750 system_pods.go:89] "kube-apiserver-addons-589824" [9637af46-c973-4a7e-ad3d-7d9685db10fd] Running
	I1027 18:57:53.336958  357750 system_pods.go:89] "kube-controller-manager-addons-589824" [0339aca8-8d04-47ae-8947-9f8e7d261bc3] Running
	I1027 18:57:53.336963  357750 system_pods.go:89] "kube-ingress-dns-minikube" [fb9d7bfe-33a0-427f-a31b-c37973e40580] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1027 18:57:53.336973  357750 system_pods.go:89] "kube-proxy-77bv8" [8cdca916-4b76-4778-9aca-fd1e93ae4ed3] Running
	I1027 18:57:53.336978  357750 system_pods.go:89] "kube-scheduler-addons-589824" [2812a900-927e-4bed-9f4c-5f69d59f14b2] Running
	I1027 18:57:53.336982  357750 system_pods.go:89] "metrics-server-85b7d694d7-6mqmx" [1a22ca13-4aaa-4ac6-b5ad-df2b9ba87dfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1027 18:57:53.336990  357750 system_pods.go:89] "nvidia-device-plugin-daemonset-5m5rl" [911fc5e9-aa0b-494e-8eff-0c513d2b6625] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1027 18:57:53.336997  357750 system_pods.go:89] "registry-6b586f9694-bvh6h" [3922e9b1-ef70-4fce-b650-f88d2755f9ab] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1027 18:57:53.337007  357750 system_pods.go:89] "registry-creds-764b6fb674-bmdlm" [a18b1d31-61dd-4c8e-864d-c77043f43d5c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1027 18:57:53.337011  357750 system_pods.go:89] "registry-proxy-62t66" [05d41077-cfc6-442d-baee-0103823e1b16] Pending
	I1027 18:57:53.337017  357750 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jx9vc" [05d8492e-9dd2-485b-a457-dc9625bb6a31] Pending
	I1027 18:57:53.337022  357750 system_pods.go:89] "snapshot-controller-7d9fbc56b8-m2794" [fc10956f-4e9a-4732-aacf-d844aab7d64a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1027 18:57:53.337028  357750 system_pods.go:89] "storage-provisioner" [b33a6bd4-fbbc-4726-a6e9-0a5a03e9f7ad] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 18:57:53.337043  357750 retry.go:31] will retry after 277.164995ms: missing components: kube-dns
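
Both the kapi.go waits and the "missing components: kube-dns" retries reduce to listing kube-system pods by a label selector and inspecting each pod's phase. A minimal client-go sketch of one such poll, with the selector taken from the log:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{
		LabelSelector: "kubernetes.io/minikube-addons=registry",
	})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s: %s\n", p.Name, p.Status.Phase) // caller polls until every phase is Running
	}
}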
	I1027 18:57:53.393789  357750 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1027 18:57:53.393816  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:53.393838  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:53.564797  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:53.665287  357750 system_pods.go:86] 20 kube-system pods found
	I1027 18:57:53.665331  357750 system_pods.go:89] "amd-gpu-device-plugin-6nrwh" [5a9374bd-7f34-436b-aed2-97c869cd1032] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1027 18:57:53.665342  357750 system_pods.go:89] "coredns-66bc5c9577-lz5j4" [fe4fbd50-09cd-482f-b62e-9b5926b57e54] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 18:57:53.665353  357750 system_pods.go:89] "csi-hostpath-attacher-0" [534becd1-bea4-43a8-8269-447c5ea9deb6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1027 18:57:53.665364  357750 system_pods.go:89] "csi-hostpath-resizer-0" [38610a93-addc-4526-b959-7aa8963d68e7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1027 18:57:53.665384  357750 system_pods.go:89] "csi-hostpathplugin-jlszq" [3c831b0a-7336-491d-9c07-f8fb8692e0bf] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1027 18:57:53.665391  357750 system_pods.go:89] "etcd-addons-589824" [60f2cd63-082d-4ce7-9c01-6d7f6be68d2d] Running
	I1027 18:57:53.665397  357750 system_pods.go:89] "kindnet-4rz7d" [6c4e893b-3105-4baa-a073-e2364d1724cb] Running
	I1027 18:57:53.665402  357750 system_pods.go:89] "kube-apiserver-addons-589824" [9637af46-c973-4a7e-ad3d-7d9685db10fd] Running
	I1027 18:57:53.665408  357750 system_pods.go:89] "kube-controller-manager-addons-589824" [0339aca8-8d04-47ae-8947-9f8e7d261bc3] Running
	I1027 18:57:53.665416  357750 system_pods.go:89] "kube-ingress-dns-minikube" [fb9d7bfe-33a0-427f-a31b-c37973e40580] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1027 18:57:53.665420  357750 system_pods.go:89] "kube-proxy-77bv8" [8cdca916-4b76-4778-9aca-fd1e93ae4ed3] Running
	I1027 18:57:53.665427  357750 system_pods.go:89] "kube-scheduler-addons-589824" [2812a900-927e-4bed-9f4c-5f69d59f14b2] Running
	I1027 18:57:53.665445  357750 system_pods.go:89] "metrics-server-85b7d694d7-6mqmx" [1a22ca13-4aaa-4ac6-b5ad-df2b9ba87dfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1027 18:57:53.665457  357750 system_pods.go:89] "nvidia-device-plugin-daemonset-5m5rl" [911fc5e9-aa0b-494e-8eff-0c513d2b6625] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1027 18:57:53.665474  357750 system_pods.go:89] "registry-6b586f9694-bvh6h" [3922e9b1-ef70-4fce-b650-f88d2755f9ab] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1027 18:57:53.665482  357750 system_pods.go:89] "registry-creds-764b6fb674-bmdlm" [a18b1d31-61dd-4c8e-864d-c77043f43d5c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1027 18:57:53.665490  357750 system_pods.go:89] "registry-proxy-62t66" [05d41077-cfc6-442d-baee-0103823e1b16] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1027 18:57:53.665500  357750 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jx9vc" [05d8492e-9dd2-485b-a457-dc9625bb6a31] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1027 18:57:53.665509  357750 system_pods.go:89] "snapshot-controller-7d9fbc56b8-m2794" [fc10956f-4e9a-4732-aacf-d844aab7d64a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1027 18:57:53.665517  357750 system_pods.go:89] "storage-provisioner" [b33a6bd4-fbbc-4726-a6e9-0a5a03e9f7ad] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 18:57:53.665545  357750 retry.go:31] will retry after 352.458417ms: missing components: kube-dns
	I1027 18:57:53.827376  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:53.927472  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:53.927509  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:54.022625  357750 system_pods.go:86] 20 kube-system pods found
	I1027 18:57:54.022660  357750 system_pods.go:89] "amd-gpu-device-plugin-6nrwh" [5a9374bd-7f34-436b-aed2-97c869cd1032] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1027 18:57:54.022666  357750 system_pods.go:89] "coredns-66bc5c9577-lz5j4" [fe4fbd50-09cd-482f-b62e-9b5926b57e54] Running
	I1027 18:57:54.022674  357750 system_pods.go:89] "csi-hostpath-attacher-0" [534becd1-bea4-43a8-8269-447c5ea9deb6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1027 18:57:54.022679  357750 system_pods.go:89] "csi-hostpath-resizer-0" [38610a93-addc-4526-b959-7aa8963d68e7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1027 18:57:54.022685  357750 system_pods.go:89] "csi-hostpathplugin-jlszq" [3c831b0a-7336-491d-9c07-f8fb8692e0bf] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1027 18:57:54.022689  357750 system_pods.go:89] "etcd-addons-589824" [60f2cd63-082d-4ce7-9c01-6d7f6be68d2d] Running
	I1027 18:57:54.022695  357750 system_pods.go:89] "kindnet-4rz7d" [6c4e893b-3105-4baa-a073-e2364d1724cb] Running
	I1027 18:57:54.022699  357750 system_pods.go:89] "kube-apiserver-addons-589824" [9637af46-c973-4a7e-ad3d-7d9685db10fd] Running
	I1027 18:57:54.022704  357750 system_pods.go:89] "kube-controller-manager-addons-589824" [0339aca8-8d04-47ae-8947-9f8e7d261bc3] Running
	I1027 18:57:54.022710  357750 system_pods.go:89] "kube-ingress-dns-minikube" [fb9d7bfe-33a0-427f-a31b-c37973e40580] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1027 18:57:54.022713  357750 system_pods.go:89] "kube-proxy-77bv8" [8cdca916-4b76-4778-9aca-fd1e93ae4ed3] Running
	I1027 18:57:54.022717  357750 system_pods.go:89] "kube-scheduler-addons-589824" [2812a900-927e-4bed-9f4c-5f69d59f14b2] Running
	I1027 18:57:54.022721  357750 system_pods.go:89] "metrics-server-85b7d694d7-6mqmx" [1a22ca13-4aaa-4ac6-b5ad-df2b9ba87dfc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1027 18:57:54.022728  357750 system_pods.go:89] "nvidia-device-plugin-daemonset-5m5rl" [911fc5e9-aa0b-494e-8eff-0c513d2b6625] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1027 18:57:54.022735  357750 system_pods.go:89] "registry-6b586f9694-bvh6h" [3922e9b1-ef70-4fce-b650-f88d2755f9ab] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1027 18:57:54.022740  357750 system_pods.go:89] "registry-creds-764b6fb674-bmdlm" [a18b1d31-61dd-4c8e-864d-c77043f43d5c] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1027 18:57:54.022748  357750 system_pods.go:89] "registry-proxy-62t66" [05d41077-cfc6-442d-baee-0103823e1b16] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1027 18:57:54.022757  357750 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jx9vc" [05d8492e-9dd2-485b-a457-dc9625bb6a31] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1027 18:57:54.022762  357750 system_pods.go:89] "snapshot-controller-7d9fbc56b8-m2794" [fc10956f-4e9a-4732-aacf-d844aab7d64a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1027 18:57:54.022766  357750 system_pods.go:89] "storage-provisioner" [b33a6bd4-fbbc-4726-a6e9-0a5a03e9f7ad] Running
	I1027 18:57:54.022775  357750 system_pods.go:126] duration metric: took 689.206974ms to wait for k8s-apps to be running ...
	I1027 18:57:54.022786  357750 system_svc.go:44] waiting for kubelet service to be running ....
	I1027 18:57:54.022835  357750 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 18:57:54.037575  357750 system_svc.go:56] duration metric: took 14.777169ms WaitForService to wait for kubelet
	I1027 18:57:54.037605  357750 kubeadm.go:586] duration metric: took 41.804080273s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
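
The kubelet check above shells out to systemctl, whose --quiet mode signals liveness purely through the exit code. A simplified standalone equivalent of the command in the log:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Exit code 0 means the unit is active; any non-zero code means it is not.
	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
	fmt.Println("kubelet active:", err == nil)
}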
	I1027 18:57:54.037622  357750 node_conditions.go:102] verifying NodePressure condition ...
	I1027 18:57:54.040731  357750 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1027 18:57:54.040756  357750 node_conditions.go:123] node cpu capacity is 8
	I1027 18:57:54.040770  357750 node_conditions.go:105] duration metric: took 3.142389ms to run NodePressure ...
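
The NodePressure verification reads the node's reported capacity (cpu 8, ephemeral-storage 304681132Ki here) and its pressure conditions. A hedged sketch of that readout, reusing the client setup from the earlier sketches:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "addons-589824", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("cpu capacity:", node.Status.Capacity.Cpu().String())
	fmt.Println("ephemeral storage:", node.Status.Capacity.StorageEphemeral().String())
	for _, c := range node.Status.Conditions {
		switch c.Type {
		case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
			fmt.Printf("%s=%s\n", c.Type, c.Status) // all should be False on a healthy node
		}
	}
}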
	I1027 18:57:54.040782  357750 start.go:241] waiting for startup goroutines ...
	I1027 18:57:54.061523  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:54.326511  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:54.393951  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:54.394529  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:54.562824  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:54.828011  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:54.894087  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:54.894290  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:55.061964  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:55.327848  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:55.394629  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:55.394661  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:55.562713  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:55.827930  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:55.894370  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:55.894415  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:56.061593  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:56.327208  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:56.393669  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:56.393695  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:56.561772  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:56.827707  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:56.893877  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:56.893912  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:57.061841  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:57.327645  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:57.393700  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:57.394279  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:57.561555  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:57.827874  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:57.894264  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:57.894295  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:58.062644  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:58.327716  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:58.355596  357750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 18:57:58.393350  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:58.393459  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:58.561366  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:58.827849  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:58.894049  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:58.894777  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1027 18:57:59.034377  357750 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:59.034418  357750 retry.go:31] will retry after 25.886247674s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:57:59.062004  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:59.327258  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:59.394058  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:59.395723  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:57:59.562770  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:57:59.827849  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:57:59.893988  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:57:59.894430  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:00.061769  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:00.327527  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:00.394077  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:00.394185  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:00.561702  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:00.827126  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:00.893687  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:00.893720  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:01.110300  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:01.327058  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:01.427216  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:01.427324  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:01.561216  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:01.826895  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:01.893876  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:01.894023  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:02.060887  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:02.326627  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:02.394329  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:02.394453  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:02.561681  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:02.827570  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:02.893462  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:02.893550  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:03.061901  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:03.327485  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:03.428457  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:03.428483  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:03.561375  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:03.827303  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:03.893459  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:03.893646  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:04.061603  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:04.327734  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:04.393954  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:04.394009  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:04.561635  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:04.827335  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:04.893364  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:04.893478  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:05.061610  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:05.327350  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:05.394183  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:05.394598  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:05.561900  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:05.829247  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:05.893272  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:05.893302  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:06.061803  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:06.327999  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:06.394268  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:06.394274  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:06.561641  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:06.862940  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:07.017039  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:07.017193  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:07.144404  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:07.350090  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:07.393262  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:07.393697  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:07.562802  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:07.827652  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:07.894427  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:07.894474  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:08.063531  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:08.327269  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:08.393200  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:08.393271  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:08.561225  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:08.826569  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:08.893504  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:08.893585  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:09.062347  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:09.327439  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:09.394281  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:09.394608  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:09.562376  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:09.827162  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:09.893356  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:09.893616  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:10.061252  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:10.326716  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:10.393947  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:10.394102  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:10.561294  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:10.900254  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:10.900278  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:10.900453  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:11.061790  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:11.327482  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:11.428581  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:11.428614  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:11.561556  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:11.827856  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:11.894177  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:11.894284  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:12.061717  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:12.328187  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:12.392898  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:12.393044  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:12.560794  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:12.827940  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:12.893731  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:12.893887  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:13.061605  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:13.328879  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:13.396257  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:13.397328  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:13.562112  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:13.828817  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:13.894968  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:13.896332  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:14.062439  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:14.327173  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:14.393781  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:14.394261  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:14.561955  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:14.827534  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:14.894316  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:14.894597  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:15.062060  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:15.326747  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:15.393953  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:15.394175  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:15.562165  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:15.827276  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:15.894219  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:15.894296  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:16.061366  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:16.326997  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:16.393416  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:16.393595  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:16.561817  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:16.826914  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:16.894158  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:16.894276  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:17.061699  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:17.327719  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:17.394604  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:17.395202  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:17.561384  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:17.827630  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:17.893839  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:17.893861  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:18.061685  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:18.327693  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:18.393810  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:18.393828  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:18.561152  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:18.826383  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:18.893663  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:18.893709  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:19.061877  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:19.327523  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:19.393460  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:19.393459  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:19.562444  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:19.828795  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:19.893478  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:19.893593  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:20.062107  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:20.327420  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:20.393109  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:20.393172  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:20.561598  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:20.827734  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:20.894085  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:20.894128  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:21.061478  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:21.326806  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:21.394444  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:21.394497  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:21.561762  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:21.827743  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:21.893566  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:21.893622  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:22.061629  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:22.327676  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:22.394205  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:22.394336  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:22.561388  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:22.827248  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:22.893992  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:22.894067  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:23.061608  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:23.327681  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:23.428760  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:23.428856  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:23.560698  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:23.827792  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:23.928241  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:23.928440  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:24.062216  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:24.326597  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:24.393663  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:24.393831  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:24.561724  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:24.827349  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:24.921032  357750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1027 18:58:24.927917  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:24.928098  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:25.061328  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:25.329831  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:25.394301  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:25.395021  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:25.561105  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1027 18:58:25.637380  357750 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1027 18:58:25.637422  357750 retry.go:31] will retry after 32.598528911s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
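The stderr above is the root cause of the inspektor-gadget failures recorded in this report: the rendered ig-crd.yaml reached kubectl without the two top-level fields every Kubernetes manifest must carry. For comparison, a structurally valid CustomResourceDefinition starts like the illustrative sketch below (all names are placeholders, not the actual inspektor-gadget CRD):

	apiVersion: apiextensions.k8s.io/v1
	kind: CustomResourceDefinition
	metadata:
	  name: examples.example.io   # placeholder; the real name lives in ig-crd.yaml
	spec:
	  group: example.io
	  names:
	    kind: Example
	    plural: examples
	  scope: Namespaced
	  versions:
	  - name: v1
	    served: true
	    storage: true
	    schema:
	      openAPIV3Schema:
	        type: object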
	I1027 18:58:25.827007  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:25.928241  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:25.928257  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:26.061032  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:26.326549  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:26.393518  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1027 18:58:26.393576  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:26.562498  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:26.827861  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:26.895439  357750 kapi.go:107] duration metric: took 1m13.005603908s to wait for kubernetes.io/minikube-addons=registry ...
	I1027 18:58:26.895557  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:27.063452  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:27.328323  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:27.393865  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:27.561098  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:27.826838  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:27.893595  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:28.061732  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:28.326965  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:28.393030  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:28.561609  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:28.827481  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:28.893676  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:29.062386  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:29.326999  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:29.393230  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:29.561952  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:29.827119  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:29.893577  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:30.076698  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:30.327180  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:30.393391  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:30.561569  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:30.827361  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:30.893321  357750 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1027 18:58:31.061418  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:31.327524  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:31.394605  357750 kapi.go:107] duration metric: took 1m17.504813395s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1027 18:58:31.561758  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:31.827696  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:32.061491  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:32.328391  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:32.561642  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:32.827598  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:33.061283  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:33.326862  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:33.561326  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:33.827204  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:34.062093  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:34.326575  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:34.561991  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:34.827165  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:35.061330  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:35.327059  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:35.561430  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1027 18:58:35.827436  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:36.063191  357750 kapi.go:107] duration metric: took 1m15.505512885s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1027 18:58:36.064991  357750 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-589824 cluster.
	I1027 18:58:36.066542  357750 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1027 18:58:36.068014  357750 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1027 18:58:36.327008  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:36.826491  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:37.327757  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:37.828484  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:38.328004  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:38.827672  357750 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1027 18:58:39.327684  357750 kapi.go:107] duration metric: took 1m25.004692059s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
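Every kapi.go:96 line above comes from the same pattern: list pods by label selector, log the current phase, and poll again until the Ready condition holds or the timeout fires. A minimal client-go sketch of that loop, assuming the standard wait and kubernetes packages (an illustration, not minikube's actual kapi.go; the kubeconfig path is the in-VM one quoted elsewhere in this log):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod's Ready condition is True.
	func podReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	// waitForPodsReady polls every interval until all pods matching selector
	// in ns are Ready, or timeout elapses.
	func waitForPodsReady(client kubernetes.Interface, ns, selector string, interval, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(context.Background(), interval, timeout, true,
			func(ctx context.Context) (bool, error) {
				pods, err := client.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
				if err != nil || len(pods.Items) == 0 {
					return false, nil // transient error or nothing scheduled yet: keep polling
				}
				for i := range pods.Items {
					if !podReady(&pods.Items[i]) {
						fmt.Printf("waiting for pod %q, current state: %s\n", selector, pods.Items[i].Status.Phase)
						return false, nil
					}
				}
				return true, nil
			})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
		if err := waitForPodsReady(client, "kube-system",
			"kubernetes.io/minikube-addons=csi-hostpath-driver",
			500*time.Millisecond, 6*time.Minute); err != nil {
			panic(err)
		}
	}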
	I1027 18:58:58.236834  357750 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1027 18:58:58.798080  357750 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1027 18:58:58.798248  357750 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
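The "will retry after 32.598528911s" line earlier shows the shape of minikube's retry helper: sleep a jittered, growing interval between attempts rather than hammering the apiserver. A minimal sketch of that idea, assuming capped exponential backoff (illustrative; not the actual retry.go):

	package main

	import (
		"fmt"
		"math/rand"
		"os/exec"
		"time"
	)

	// retryRun reruns name+args until it succeeds or attempts are exhausted,
	// sleeping a capped, jittered exponential backoff between tries.
	func retryRun(name string, args []string, attempts int) error {
		backoff := 5 * time.Second
		var err error
		for i := 0; i < attempts; i++ {
			if err = exec.Command(name, args...).Run(); err == nil {
				return nil
			}
			sleep := backoff + time.Duration(rand.Int63n(int64(backoff))) // jitter avoids retry lockstep
			fmt.Printf("apply failed, will retry after %s: %v\n", sleep, err)
			time.Sleep(sleep)
			if backoff < 2*time.Minute {
				backoff *= 2
			}
		}
		return err
	}

	func main() {
		// hypothetical invocation mirroring the failed apply in the log
		args := []string{"apply", "--force", "-f", "/etc/kubernetes/addons/ig-crd.yaml"}
		if err := retryRun("kubectl", args, 3); err != nil {
			fmt.Println("giving up:", err)
		}
	}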
	I1027 18:58:58.800851  357750 out.go:179] * Enabled addons: storage-provisioner, nvidia-device-plugin, registry-creds, amd-gpu-device-plugin, ingress-dns, metrics-server, cloud-spanner, yakd, default-storageclass, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1027 18:58:58.802363  357750 addons.go:514] duration metric: took 1m46.56875005s for enable addons: enabled=[storage-provisioner nvidia-device-plugin registry-creds amd-gpu-device-plugin ingress-dns metrics-server cloud-spanner yakd default-storageclass volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1027 18:58:58.802415  357750 start.go:246] waiting for cluster config update ...
	I1027 18:58:58.802450  357750 start.go:255] writing updated cluster config ...
	I1027 18:58:58.802809  357750 ssh_runner.go:195] Run: rm -f paused
	I1027 18:58:58.807317  357750 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 18:58:58.811086  357750 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-lz5j4" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 18:58:58.817707  357750 pod_ready.go:94] pod "coredns-66bc5c9577-lz5j4" is "Ready"
	I1027 18:58:58.817732  357750 pod_ready.go:86] duration metric: took 6.618901ms for pod "coredns-66bc5c9577-lz5j4" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 18:58:58.819708  357750 pod_ready.go:83] waiting for pod "etcd-addons-589824" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 18:58:58.823737  357750 pod_ready.go:94] pod "etcd-addons-589824" is "Ready"
	I1027 18:58:58.823775  357750 pod_ready.go:86] duration metric: took 4.040563ms for pod "etcd-addons-589824" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 18:58:58.825811  357750 pod_ready.go:83] waiting for pod "kube-apiserver-addons-589824" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 18:58:58.829755  357750 pod_ready.go:94] pod "kube-apiserver-addons-589824" is "Ready"
	I1027 18:58:58.829783  357750 pod_ready.go:86] duration metric: took 3.94738ms for pod "kube-apiserver-addons-589824" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 18:58:58.831775  357750 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-589824" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 18:58:59.211764  357750 pod_ready.go:94] pod "kube-controller-manager-addons-589824" is "Ready"
	I1027 18:58:59.211797  357750 pod_ready.go:86] duration metric: took 379.998654ms for pod "kube-controller-manager-addons-589824" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 18:58:59.411325  357750 pod_ready.go:83] waiting for pod "kube-proxy-77bv8" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 18:58:59.812397  357750 pod_ready.go:94] pod "kube-proxy-77bv8" is "Ready"
	I1027 18:58:59.812432  357750 pod_ready.go:86] duration metric: took 401.078542ms for pod "kube-proxy-77bv8" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 18:59:00.012528  357750 pod_ready.go:83] waiting for pod "kube-scheduler-addons-589824" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 18:59:00.412152  357750 pod_ready.go:94] pod "kube-scheduler-addons-589824" is "Ready"
	I1027 18:59:00.412190  357750 pod_ready.go:86] duration metric: took 399.633217ms for pod "kube-scheduler-addons-589824" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 18:59:00.412209  357750 pod_ready.go:40] duration metric: took 1.604854944s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 18:59:00.459189  357750 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1027 18:59:00.461488  357750 out.go:179] * Done! kubectl is now configured to use "addons-589824" cluster and "default" namespace by default
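The pod_ready.go block above is minikube's built-in readiness gate for the control-plane pods. Outside the test harness, roughly the same check can be expressed with plain kubectl; the selectors and the 4m timeout mirror the ones in the log:

	kubectl wait --for=condition=Ready pod -l k8s-app=kube-dns -n kube-system --timeout=4m
	kubectl wait --for=condition=Ready pod -l component=etcd -n kube-system --timeout=4m
	kubectl wait --for=condition=Ready pod -l k8s-app=kube-proxy -n kube-system --timeout=4m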
	
	
	==> CRI-O <==
	Oct 27 18:59:06 addons-589824 crio[774]: time="2025-10-27T18:59:06.572513096Z" level=info msg="Removing container: 06b6d151df30173b83782de22b55709d7c0807ada6fa0b9e867fbd8f27f5b7e8" id=3d9b2fa2-542d-499c-b35c-b80d28ed915c name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 27 18:59:06 addons-589824 crio[774]: time="2025-10-27T18:59:06.579212459Z" level=info msg="Removed container 06b6d151df30173b83782de22b55709d7c0807ada6fa0b9e867fbd8f27f5b7e8: gcp-auth/gcp-auth-certs-create-ksfqn/create" id=3d9b2fa2-542d-499c-b35c-b80d28ed915c name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 27 18:59:06 addons-589824 crio[774]: time="2025-10-27T18:59:06.582031946Z" level=info msg="Stopping pod sandbox: fc821b7be208e6a506bcb739cb3d85464f05f95ea4cf6c0bd33da87d75853747" id=fbc43963-221b-47fd-8390-5564f8b8bc95 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 27 18:59:06 addons-589824 crio[774]: time="2025-10-27T18:59:06.582098076Z" level=info msg="Stopped pod sandbox (already stopped): fc821b7be208e6a506bcb739cb3d85464f05f95ea4cf6c0bd33da87d75853747" id=fbc43963-221b-47fd-8390-5564f8b8bc95 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 27 18:59:06 addons-589824 crio[774]: time="2025-10-27T18:59:06.582606667Z" level=info msg="Removing pod sandbox: fc821b7be208e6a506bcb739cb3d85464f05f95ea4cf6c0bd33da87d75853747" id=9b021030-1767-4d1f-b144-1e711b1caa17 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 27 18:59:06 addons-589824 crio[774]: time="2025-10-27T18:59:06.58572349Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 27 18:59:06 addons-589824 crio[774]: time="2025-10-27T18:59:06.585787292Z" level=info msg="Removed pod sandbox: fc821b7be208e6a506bcb739cb3d85464f05f95ea4cf6c0bd33da87d75853747" id=9b021030-1767-4d1f-b144-1e711b1caa17 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 27 18:59:06 addons-589824 crio[774]: time="2025-10-27T18:59:06.586294642Z" level=info msg="Stopping pod sandbox: 4005281ea1782bfb1b5ef2fa7ccdc6874fca028e314dcdcc3335e84038944d1d" id=faf0bce9-c16c-455f-912c-01b926360bc3 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 27 18:59:06 addons-589824 crio[774]: time="2025-10-27T18:59:06.586354999Z" level=info msg="Stopped pod sandbox (already stopped): 4005281ea1782bfb1b5ef2fa7ccdc6874fca028e314dcdcc3335e84038944d1d" id=faf0bce9-c16c-455f-912c-01b926360bc3 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 27 18:59:06 addons-589824 crio[774]: time="2025-10-27T18:59:06.586759472Z" level=info msg="Removing pod sandbox: 4005281ea1782bfb1b5ef2fa7ccdc6874fca028e314dcdcc3335e84038944d1d" id=20ae9223-9b60-4d70-9cd7-d4641f11d09a name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 27 18:59:06 addons-589824 crio[774]: time="2025-10-27T18:59:06.589805594Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 27 18:59:06 addons-589824 crio[774]: time="2025-10-27T18:59:06.5898634Z" level=info msg="Removed pod sandbox: 4005281ea1782bfb1b5ef2fa7ccdc6874fca028e314dcdcc3335e84038944d1d" id=20ae9223-9b60-4d70-9cd7-d4641f11d09a name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 27 18:59:10 addons-589824 crio[774]: time="2025-10-27T18:59:10.306700473Z" level=info msg="Running pod sandbox: default/nginx/POD" id=37ee8a35-de92-42de-9243-a88f2dee4ed3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 27 18:59:10 addons-589824 crio[774]: time="2025-10-27T18:59:10.306811633Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 18:59:10 addons-589824 crio[774]: time="2025-10-27T18:59:10.312975274Z" level=info msg="Got pod network &{Name:nginx Namespace:default ID:ef303f142b2df960a0ae7f8664fdc3d41342b704088dec1862f321151626ca32 UID:4e8f6ee2-441e-480b-93e3-44362001a683 NetNS:/var/run/netns/61ef174d-5de2-4cfd-991a-df14ec2e0253 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc002888230}] Aliases:map[]}"
	Oct 27 18:59:10 addons-589824 crio[774]: time="2025-10-27T18:59:10.31300625Z" level=info msg="Adding pod default_nginx to CNI network \"kindnet\" (type=ptp)"
	Oct 27 18:59:10 addons-589824 crio[774]: time="2025-10-27T18:59:10.324283894Z" level=info msg="Got pod network &{Name:nginx Namespace:default ID:ef303f142b2df960a0ae7f8664fdc3d41342b704088dec1862f321151626ca32 UID:4e8f6ee2-441e-480b-93e3-44362001a683 NetNS:/var/run/netns/61ef174d-5de2-4cfd-991a-df14ec2e0253 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc002888230}] Aliases:map[]}"
	Oct 27 18:59:10 addons-589824 crio[774]: time="2025-10-27T18:59:10.324807877Z" level=info msg="Checking pod default_nginx for CNI network kindnet (type=ptp)"
	Oct 27 18:59:10 addons-589824 crio[774]: time="2025-10-27T18:59:10.325788051Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 27 18:59:10 addons-589824 crio[774]: time="2025-10-27T18:59:10.32662131Z" level=info msg="Ran pod sandbox ef303f142b2df960a0ae7f8664fdc3d41342b704088dec1862f321151626ca32 with infra container: default/nginx/POD" id=37ee8a35-de92-42de-9243-a88f2dee4ed3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 27 18:59:10 addons-589824 crio[774]: time="2025-10-27T18:59:10.327914807Z" level=info msg="Checking image status: docker.io/nginx:alpine" id=c3562633-d972-4f25-8eae-e8610190d7fd name=/runtime.v1.ImageService/ImageStatus
	Oct 27 18:59:10 addons-589824 crio[774]: time="2025-10-27T18:59:10.328048061Z" level=info msg="Image docker.io/nginx:alpine not found" id=c3562633-d972-4f25-8eae-e8610190d7fd name=/runtime.v1.ImageService/ImageStatus
	Oct 27 18:59:10 addons-589824 crio[774]: time="2025-10-27T18:59:10.328081455Z" level=info msg="Neither image nor artifact docker.io/nginx:alpine found" id=c3562633-d972-4f25-8eae-e8610190d7fd name=/runtime.v1.ImageService/ImageStatus
	Oct 27 18:59:10 addons-589824 crio[774]: time="2025-10-27T18:59:10.3288033Z" level=info msg="Pulling image: docker.io/nginx:alpine" id=450a9eb9-ffa4-4097-a3c4-5faa7198a9d4 name=/runtime.v1.ImageService/PullImage
	Oct 27 18:59:10 addons-589824 crio[774]: time="2025-10-27T18:59:10.33348239Z" level=info msg="Trying to access \"docker.io/library/nginx:alpine\""
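The CRI-O excerpt shows the whole sandbox lifecycle for the default/nginx pod: stale sandboxes removed, a new sandbox attached to the kindnet CNI network, then a docker.io/nginx:alpine pull kicked off because the image is not yet present. The same state can be checked on the node with crictl (the ID prefix is taken from the "Ran pod sandbox" line above):

	sudo crictl pods --name nginx       # sandbox and its current state
	sudo crictl inspectp ef303f142b2d   # sandbox details, including the CNI netns
	sudo crictl images | grep nginx     # whether the pull has completed yet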
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	87d9cc6838ad3       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          9 seconds ago        Running             busybox                                  0                   896ff7f479b31       busybox                                     default
	0a17a4745cc1a       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          32 seconds ago       Running             csi-snapshotter                          0                   3ff1c4a47f48e       csi-hostpathplugin-jlszq                    kube-system
	a30f678907200       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          33 seconds ago       Running             csi-provisioner                          0                   3ff1c4a47f48e       csi-hostpathplugin-jlszq                    kube-system
	db7343377b388       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            34 seconds ago       Running             liveness-probe                           0                   3ff1c4a47f48e       csi-hostpathplugin-jlszq                    kube-system
	71e53e748e01f       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           34 seconds ago       Running             hostpath                                 0                   3ff1c4a47f48e       csi-hostpathplugin-jlszq                    kube-system
	35b17f5ee8fcc       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 35 seconds ago       Running             gcp-auth                                 0                   b5010a03a9f28       gcp-auth-78565c9fb4-kxlcv                   gcp-auth
	56024f3c5df31       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                37 seconds ago       Running             node-driver-registrar                    0                   3ff1c4a47f48e       csi-hostpathplugin-jlszq                    kube-system
	3f9265ee73822       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb                            38 seconds ago       Running             gadget                                   0                   d36736f931f9f       gadget-vwv62                                gadget
	3d43e3819d86b       registry.k8s.io/ingress-nginx/controller@sha256:7b4073fc95e078d863c0b0b08deb72e01d2cf629e2156822bcd394fc2bcd8e83                             40 seconds ago       Running             controller                               0                   bfc3ba45a4d88       ingress-nginx-controller-675c5ddd98-kvnzw   ingress-nginx
	ef768854ff282       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              44 seconds ago       Running             registry-proxy                           0                   584cfb3fe1579       registry-proxy-62t66                        kube-system
	76e187a284766       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   46 seconds ago       Running             csi-external-health-monitor-controller   0                   3ff1c4a47f48e       csi-hostpathplugin-jlszq                    kube-system
	0c23d9067a021       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      47 seconds ago       Running             volume-snapshot-controller               0                   f4770ec7a0bc0       snapshot-controller-7d9fbc56b8-jx9vc        kube-system
	6feb37f12d4a3       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     47 seconds ago       Running             amd-gpu-device-plugin                    0                   d4345a36e61ed       amd-gpu-device-plugin-6nrwh                 kube-system
	fb6a38bfcaa08       08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2                                                                             49 seconds ago       Exited              patch                                    2                   030a7783d8ba6       ingress-nginx-admission-patch-l7t7k         ingress-nginx
	2dc898f8fa5b3       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             49 seconds ago       Running             csi-attacher                             0                   a4ffd4da5cb72       csi-hostpath-attacher-0                     kube-system
	27f1c94c3f573       nvcr.io/nvidia/k8s-device-plugin@sha256:ad155f1089b64673c75b2f39258f0791cbad6d3011419726ec605196981e1c32                                     50 seconds ago       Running             nvidia-device-plugin-ctr                 0                   0725d21aa9560       nvidia-device-plugin-daemonset-5m5rl        kube-system
	b7494b1ab076b       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      58 seconds ago       Running             volume-snapshot-controller               0                   9ee30ec4a8aba       snapshot-controller-7d9fbc56b8-m2794        kube-system
	4462c756941cb       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              59 seconds ago       Running             yakd                                     0                   d53d106603bb2       yakd-dashboard-5ff678cb9-m5mql              yakd-dashboard
	cfcad9faa243a       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:603a4996fc2ece451c708708e2881a855991cda47ddca5a4458b69a04f48d7f2                   About a minute ago   Exited              create                                   0                   68107bfbb0309       ingress-nginx-admission-create-j8h7h        ingress-nginx
	2f642c7cbe909       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              About a minute ago   Running             csi-resizer                              0                   2616723d54001       csi-hostpath-resizer-0                      kube-system
	9fe3aa823f5d8       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             About a minute ago   Running             local-path-provisioner                   0                   430826c98a473       local-path-provisioner-648f6765c9-qkqkp     local-path-storage
	ca7a93241189c       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        About a minute ago   Running             metrics-server                           0                   5193d690f4c91       metrics-server-85b7d694d7-6mqmx             kube-system
	2095fff763068       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           About a minute ago   Running             registry                                 0                   e0005252edc64       registry-6b586f9694-bvh6h                   kube-system
	8b8b3dcbd1000       gcr.io/cloud-spanner-emulator/emulator@sha256:66030f526b1bc41f0d2027b496fd8fa53f620bf9d5a18baa07990e67f1a20237                               About a minute ago   Running             cloud-spanner-emulator                   0                   6e33a7c436845       cloud-spanner-emulator-86bd5cbb97-rt6dx     default
	eede6880efbc9       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               About a minute ago   Running             minikube-ingress-dns                     0                   b4a896ee94dee       kube-ingress-dns-minikube                   kube-system
	ba1ddd191addf       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             About a minute ago   Running             storage-provisioner                      0                   a2c466ed164d0       storage-provisioner                         kube-system
	abbe027d3dc3b       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             About a minute ago   Running             coredns                                  0                   fb92d690a7a5a       coredns-66bc5c9577-lz5j4                    kube-system
	12e10d7e88fff       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             About a minute ago   Running             kube-proxy                               0                   93c2925316a16       kube-proxy-77bv8                            kube-system
	6d05a2b6be1fb       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             About a minute ago   Running             kindnet-cni                              0                   233fe93a3f9c0       kindnet-4rz7d                               kube-system
	c02f8fc8e6a73       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             2 minutes ago        Running             kube-apiserver                           0                   a6e9125590762       kube-apiserver-addons-589824                kube-system
	95468d8526bae       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             2 minutes ago        Running             kube-scheduler                           0                   f29cecb516462       kube-scheduler-addons-589824                kube-system
	81cd0a11514ab       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             2 minutes ago        Running             kube-controller-manager                  0                   cac3717bc1745       kube-controller-manager-addons-589824       kube-system
	f25d173d59b5b       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             2 minutes ago        Running             etcd                                     0                   969beb9641f34       etcd-addons-589824                          kube-system
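This table is the node-level container inventory at capture time; the two Exited entries are the one-shot ingress-nginx admission jobs, which is expected. On a crio node the equivalent listing comes from:

	sudo crictl ps -a -o table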
	
	
	==> coredns [abbe027d3dc3b813b338a56e8cabab82e03eb9b112b7b850abb79fefe6d06ad7] <==
	[INFO] 10.244.0.18:49823 - 36806 "AAAA IN registry.kube-system.svc.cluster.local.local. udp 62 false 512" NXDOMAIN qr,rd,ra 62 0.0034552s
	[INFO] 10.244.0.18:53583 - 49393 "AAAA IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,aa,rd,ra 204 0.00008897s
	[INFO] 10.244.0.18:53583 - 48966 "A IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,aa,rd,ra 204 0.000114013s
	[INFO] 10.244.0.18:60100 - 24319 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000053243s
	[INFO] 10.244.0.18:60100 - 24623 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000099247s
	[INFO] 10.244.0.18:51544 - 338 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000085898s
	[INFO] 10.244.0.18:51544 - 598 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000151923s
	[INFO] 10.244.0.18:42788 - 57489 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000184959s
	[INFO] 10.244.0.18:42788 - 57228 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000256069s
	[INFO] 10.244.0.22:60957 - 15532 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000216401s
	[INFO] 10.244.0.22:56914 - 49848 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000276743s
	[INFO] 10.244.0.22:56844 - 4126 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000148772s
	[INFO] 10.244.0.22:49512 - 33701 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000130161s
	[INFO] 10.244.0.22:56829 - 25765 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000137306s
	[INFO] 10.244.0.22:39838 - 62206 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000205132s
	[INFO] 10.244.0.22:49451 - 21640 "A IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.006263356s
	[INFO] 10.244.0.22:46862 - 63874 "AAAA IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.006284813s
	[INFO] 10.244.0.22:53293 - 41549 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.005330548s
	[INFO] 10.244.0.22:60552 - 59972 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.005993489s
	[INFO] 10.244.0.22:54315 - 43099 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.005123142s
	[INFO] 10.244.0.22:35223 - 22024 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.006175629s
	[INFO] 10.244.0.22:38943 - 65517 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004099658s
	[INFO] 10.244.0.22:32982 - 64160 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.00563234s
	[INFO] 10.244.0.22:48886 - 21891 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.002490605s
	[INFO] 10.244.0.22:53042 - 55437 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002620466s
	
	
	==> describe nodes <==
	Name:               addons-589824
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-589824
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a09c31caecd3126cc8337ca95e52380a56f5a0f
	                    minikube.k8s.io/name=addons-589824
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_27T18_57_07_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-589824
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-589824"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Oct 2025 18:57:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-589824
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Oct 2025 18:59:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Oct 2025 18:59:08 +0000   Mon, 27 Oct 2025 18:57:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Oct 2025 18:59:08 +0000   Mon, 27 Oct 2025 18:57:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Oct 2025 18:59:08 +0000   Mon, 27 Oct 2025 18:57:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Oct 2025 18:59:08 +0000   Mon, 27 Oct 2025 18:57:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-589824
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                015f8eae-8878-4d4d-8c23-64412d4db92c
	  Boot ID:                    811bd29c-e64e-4acc-9427-bab1f7caed93
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     cloud-spanner-emulator-86bd5cbb97-rt6dx      0 (0%)        0 (0%)      0 (0%)           0 (0%)         118s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  gadget                      gadget-vwv62                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         118s
	  gcp-auth                    gcp-auth-78565c9fb4-kxlcv                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-kvnzw    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         118s
	  kube-system                 amd-gpu-device-plugin-6nrwh                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 coredns-66bc5c9577-lz5j4                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     119s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 csi-hostpathplugin-jlszq                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 etcd-addons-589824                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m5s
	  kube-system                 kindnet-4rz7d                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2m
	  kube-system                 kube-apiserver-addons-589824                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 kube-controller-manager-addons-589824        200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-proxy-77bv8                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-scheduler-addons-589824                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 metrics-server-85b7d694d7-6mqmx              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         118s
	  kube-system                 nvidia-device-plugin-daemonset-5m5rl         0 (0%)        0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 registry-6b586f9694-bvh6h                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 registry-creds-764b6fb674-bmdlm              0 (0%)        0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 registry-proxy-62t66                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 snapshot-controller-7d9fbc56b8-jx9vc         0 (0%)        0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 snapshot-controller-7d9fbc56b8-m2794         0 (0%)        0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         118s
	  local-path-storage          local-path-provisioner-648f6765c9-qkqkp      0 (0%)        0 (0%)      0 (0%)           0 (0%)         118s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-m5mql               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     118s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 118s                   kube-proxy       
	  Normal  Starting                 2m10s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m10s (x8 over 2m10s)  kubelet          Node addons-589824 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m10s (x8 over 2m10s)  kubelet          Node addons-589824 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m10s (x8 over 2m10s)  kubelet          Node addons-589824 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m5s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m5s                   kubelet          Node addons-589824 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m5s                   kubelet          Node addons-589824 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m5s                   kubelet          Node addons-589824 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           2m1s                   node-controller  Node addons-589824 event: Registered Node addons-589824 in Controller
	  Normal  NodeReady                78s                    kubelet          Node addons-589824 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.010329] IPv4: martian destination 127.0.0.11 from 10.244.1.3, dev veth5e6ea64f
	[Oct27 18:17] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 22 d6 1b bb 5b 98 08 06
	[  +0.000462] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ea 72 fd 22 91 c6 08 06
	[ +28.556463] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ea b6 b5 44 69 b8 08 06
	[  +0.049971] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 86 75 ca 86 f3 a5 08 06
	[Oct27 18:18] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 12 50 95 0e df 53 08 06
	[  +0.000508] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff a2 a0 25 17 89 8e 08 06
	[ +13.995369] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 66 86 ee d8 87 44 08 06
	[  +0.000522] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 7a 6a 84 ac 1a de 08 06
	[  +3.126123] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c2 63 62 51 cd c2 08 06
	[  +0.000463] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 86 75 ca 86 f3 a5 08 06
	[  +0.485166] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5a 23 52 43 9a ba 08 06
	[  +0.000398] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 12 50 95 0e df 53 08 06
	
	
	==> etcd [f25d173d59b5ba978f27e915fc30ff6e02ab5bba952c2af598b464a59edc1987] <==
	{"level":"warn","ts":"2025-10-27T18:57:03.377364Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T18:57:03.384670Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T18:57:03.391302Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T18:57:03.402791Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T18:57:03.409914Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T18:57:03.416593Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T18:57:03.463681Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T18:57:14.775968Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T18:57:40.870645Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T18:57:40.877417Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T18:57:40.898551Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T18:57:40.905511Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43962","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T18:58:06.992926Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"120.043517ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/secrets/ingress-nginx/ingress-nginx-admission\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-27T18:58:06.993020Z","caller":"traceutil/trace.go:172","msg":"trace[1554996425] range","detail":"{range_begin:/registry/secrets/ingress-nginx/ingress-nginx-admission; range_end:; response_count:0; response_revision:1028; }","duration":"120.155798ms","start":"2025-10-27T18:58:06.872848Z","end":"2025-10-27T18:58:06.993004Z","steps":["trace[1554996425] 'agreement among raft nodes before linearized reading'  (duration: 94.036969ms)","trace[1554996425] 'range keys from in-memory index tree'  (duration: 25.967966ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-27T18:58:06.993031Z","caller":"traceutil/trace.go:172","msg":"trace[1964619258] transaction","detail":"{read_only:false; response_revision:1029; number_of_response:1; }","duration":"126.830572ms","start":"2025-10-27T18:58:06.866182Z","end":"2025-10-27T18:58:06.993013Z","steps":["trace[1964619258] 'process raft request'  (duration: 100.764414ms)","trace[1964619258] 'compare'  (duration: 25.933927ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-27T18:58:07.015036Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"123.438632ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2025-10-27T18:58:07.015078Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"123.491309ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-27T18:58:07.015103Z","caller":"traceutil/trace.go:172","msg":"trace[1533825363] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1029; }","duration":"123.521397ms","start":"2025-10-27T18:58:06.891568Z","end":"2025-10-27T18:58:07.015089Z","steps":["trace[1533825363] 'agreement among raft nodes before linearized reading'  (duration: 123.386123ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T18:58:07.015146Z","caller":"traceutil/trace.go:172","msg":"trace[281230080] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1029; }","duration":"123.554779ms","start":"2025-10-27T18:58:06.891567Z","end":"2025-10-27T18:58:07.015122Z","steps":["trace[281230080] 'agreement among raft nodes before linearized reading'  (duration: 123.449214ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T18:58:07.015199Z","caller":"traceutil/trace.go:172","msg":"trace[935681801] transaction","detail":"{read_only:false; response_revision:1030; number_of_response:1; }","duration":"125.682407ms","start":"2025-10-27T18:58:06.889499Z","end":"2025-10-27T18:58:07.015182Z","steps":["trace[935681801] 'process raft request'  (duration: 125.498666ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T18:58:07.015239Z","caller":"traceutil/trace.go:172","msg":"trace[502819226] transaction","detail":"{read_only:false; response_revision:1031; number_of_response:1; }","duration":"122.092514ms","start":"2025-10-27T18:58:06.893092Z","end":"2025-10-27T18:58:07.015184Z","steps":["trace[502819226] 'process raft request'  (duration: 122.01453ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T18:58:07.142675Z","caller":"traceutil/trace.go:172","msg":"trace[2086540696] transaction","detail":"{read_only:false; response_revision:1033; number_of_response:1; }","duration":"123.049421ms","start":"2025-10-27T18:58:07.019602Z","end":"2025-10-27T18:58:07.142651Z","steps":["trace[2086540696] 'process raft request'  (duration: 100.272325ms)","trace[2086540696] 'compare'  (duration: 22.576854ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-27T18:58:07.142722Z","caller":"traceutil/trace.go:172","msg":"trace[26412430] transaction","detail":"{read_only:false; response_revision:1034; number_of_response:1; }","duration":"123.097313ms","start":"2025-10-27T18:58:07.019610Z","end":"2025-10-27T18:58:07.142707Z","steps":["trace[26412430] 'process raft request'  (duration: 122.985372ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T18:58:07.142703Z","caller":"traceutil/trace.go:172","msg":"trace[866901254] transaction","detail":"{read_only:false; response_revision:1035; number_of_response:1; }","duration":"121.909722ms","start":"2025-10-27T18:58:07.020779Z","end":"2025-10-27T18:58:07.142689Z","steps":["trace[866901254] 'process raft request'  (duration: 121.852603ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T18:58:07.347093Z","caller":"traceutil/trace.go:172","msg":"trace[443095031] transaction","detail":"{read_only:false; response_revision:1038; number_of_response:1; }","duration":"118.643352ms","start":"2025-10-27T18:58:07.228411Z","end":"2025-10-27T18:58:07.347054Z","steps":["trace[443095031] 'process raft request'  (duration: 72.528344ms)","trace[443095031] 'compare'  (duration: 45.946959ms)"],"step_count":2}
	
	
	==> gcp-auth [35b17f5ee8fcc21b55af114698fc6422309350b8004ce05ffbfa88cc4ddc1d83] <==
	2025/10/27 18:58:35 GCP Auth Webhook started!
	2025/10/27 18:59:00 Ready to marshal response ...
	2025/10/27 18:59:00 Ready to write response ...
	2025/10/27 18:59:00 Ready to marshal response ...
	2025/10/27 18:59:00 Ready to write response ...
	2025/10/27 18:59:01 Ready to marshal response ...
	2025/10/27 18:59:01 Ready to write response ...
	2025/10/27 18:59:09 Ready to marshal response ...
	2025/10/27 18:59:09 Ready to write response ...
	
	
	==> kernel <==
	 18:59:11 up  1:41,  0 user,  load average: 1.58, 1.01, 0.64
	Linux addons-589824 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [6d05a2b6be1fb2b8475a215eb50681a592a20257978b9da0091741666c9fa5c6] <==
	I1027 18:57:12.673600       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1027 18:57:12.673841       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1027 18:57:42.584413       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1027 18:57:42.584413       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1027 18:57:42.652077       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1027 18:57:42.673638       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1027 18:57:44.174508       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1027 18:57:44.174549       1 metrics.go:72] Registering metrics
	I1027 18:57:44.174618       1 controller.go:711] "Syncing nftables rules"
	I1027 18:57:52.590160       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 18:57:52.590267       1 main.go:301] handling current node
	I1027 18:58:02.583885       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 18:58:02.583945       1 main.go:301] handling current node
	I1027 18:58:12.583170       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 18:58:12.583210       1 main.go:301] handling current node
	I1027 18:58:22.584041       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 18:58:22.584083       1 main.go:301] handling current node
	I1027 18:58:32.583266       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 18:58:32.583315       1 main.go:301] handling current node
	I1027 18:58:42.583250       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 18:58:42.583295       1 main.go:301] handling current node
	I1027 18:58:52.583948       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 18:58:52.584001       1 main.go:301] handling current node
	I1027 18:59:02.584099       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 18:59:02.584174       1 main.go:301] handling current node
	
	
	==> kube-apiserver [c02f8fc8e6a7392b824780b7cf27bac4f0cee905aafadcc2295bf2775ce85316] <==
	W1027 18:57:14.775908       1 logging.go:55] [core] [Channel #263 SubChannel #264]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I1027 18:57:20.492575       1 alloc.go:328] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.105.137.58"}
	W1027 18:57:40.870571       1 logging.go:55] [core] [Channel #270 SubChannel #271]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1027 18:57:40.877372       1 logging.go:55] [core] [Channel #274 SubChannel #275]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1027 18:57:40.898441       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1027 18:57:40.905516       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1027 18:57:53.161858       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.105.137.58:443: connect: connection refused
	E1027 18:57:53.161906       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.105.137.58:443: connect: connection refused" logger="UnhandledError"
	W1027 18:57:53.161906       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.105.137.58:443: connect: connection refused
	E1027 18:57:53.161937       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.105.137.58:443: connect: connection refused" logger="UnhandledError"
	W1027 18:57:53.184361       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.105.137.58:443: connect: connection refused
	E1027 18:57:53.184402       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.105.137.58:443: connect: connection refused" logger="UnhandledError"
	W1027 18:57:53.190689       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.105.137.58:443: connect: connection refused
	E1027 18:57:53.190731       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.105.137.58:443: connect: connection refused" logger="UnhandledError"
	W1027 18:58:07.214814       1 handler_proxy.go:99] no RequestInfo found in the context
	E1027 18:58:07.214920       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1027 18:58:07.214986       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.110.96.153:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.110.96.153:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.110.96.153:443: connect: connection refused" logger="UnhandledError"
	I1027 18:58:07.225091       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1027 18:59:09.154764       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:43684: use of closed network connection
	E1027 18:59:09.307799       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:43704: use of closed network connection
	I1027 18:59:09.826826       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1027 18:59:10.052543       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.110.101.92"}
	
	
	==> kube-controller-manager [81cd0a11514aba345e443fd708bb0a4b65a29f336aec8643a57037ceeda8aefe] <==
	I1027 18:57:10.849920       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1027 18:57:10.849700       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1027 18:57:10.849989       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1027 18:57:10.853953       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1027 18:57:10.854073       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1027 18:57:10.854124       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1027 18:57:10.854189       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1027 18:57:10.854202       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1027 18:57:10.854209       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1027 18:57:10.855289       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1027 18:57:10.857581       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1027 18:57:10.859949       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 18:57:10.861328       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-589824" podCIDRs=["10.244.0.0/24"]
	I1027 18:57:10.864429       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1027 18:57:10.871834       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1027 18:57:10.880550       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1027 18:57:13.491484       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1027 18:57:40.864499       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1027 18:57:40.864660       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1027 18:57:40.864710       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1027 18:57:40.888507       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1027 18:57:40.892444       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1027 18:57:40.965959       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 18:57:40.993534       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 18:57:55.806649       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [12e10d7e88fff07d51f12a561be95b0933cdc57cc59e0f478fe8964c53f1806b] <==
	I1027 18:57:12.168054       1 server_linux.go:53] "Using iptables proxy"
	I1027 18:57:12.258929       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1027 18:57:12.362228       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1027 18:57:12.362789       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1027 18:57:12.362910       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1027 18:57:12.526041       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1027 18:57:12.526210       1 server_linux.go:132] "Using iptables Proxier"
	I1027 18:57:12.536106       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1027 18:57:12.536692       1 server.go:527] "Version info" version="v1.34.1"
	I1027 18:57:12.537167       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 18:57:12.539945       1 config.go:200] "Starting service config controller"
	I1027 18:57:12.542650       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1027 18:57:12.542172       1 config.go:106] "Starting endpoint slice config controller"
	I1027 18:57:12.542708       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1027 18:57:12.542205       1 config.go:403] "Starting serviceCIDR config controller"
	I1027 18:57:12.542721       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1027 18:57:12.542399       1 config.go:309] "Starting node config controller"
	I1027 18:57:12.542732       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1027 18:57:12.542738       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1027 18:57:12.642805       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1027 18:57:12.643041       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1027 18:57:12.643252       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [95468d8526baeb9ed07c582a77c3593017052fb17f3ce84741a67f91794b7400] <==
	E1027 18:57:03.876200       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1027 18:57:03.876479       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1027 18:57:03.876527       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1027 18:57:03.876576       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1027 18:57:03.876622       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1027 18:57:03.876671       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1027 18:57:03.876715       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1027 18:57:03.876766       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1027 18:57:03.876810       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1027 18:57:03.876944       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1027 18:57:03.877033       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1027 18:57:03.878529       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1027 18:57:03.878709       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1027 18:57:03.879478       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1027 18:57:04.703618       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1027 18:57:04.814766       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1027 18:57:04.831343       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1027 18:57:04.838049       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1027 18:57:04.839062       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1027 18:57:04.909702       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1027 18:57:04.914847       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1027 18:57:05.048524       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1027 18:57:05.088274       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1027 18:57:05.152497       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	I1027 18:57:08.270065       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 27 18:58:26 addons-589824 kubelet[1300]: I1027 18:58:26.895150    1300 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-proxy-62t66" podStartSLOduration=1.381529808 podStartE2EDuration="33.895108691s" podCreationTimestamp="2025-10-27 18:57:53 +0000 UTC" firstStartedPulling="2025-10-27 18:57:53.644549412 +0000 UTC m=+47.166870753" lastFinishedPulling="2025-10-27 18:58:26.158128306 +0000 UTC m=+79.680449636" observedRunningTime="2025-10-27 18:58:26.89353531 +0000 UTC m=+80.415856658" watchObservedRunningTime="2025-10-27 18:58:26.895108691 +0000 UTC m=+80.417430039"
	Oct 27 18:58:27 addons-589824 kubelet[1300]: I1027 18:58:27.565580    1300 scope.go:117] "RemoveContainer" containerID="4230f73e9fac20ed3b8263d4cf87d3a8712034118f9aebc28912a977fc3d9cf7"
	Oct 27 18:58:27 addons-589824 kubelet[1300]: I1027 18:58:27.883447    1300 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-62t66" secret="" err="secret \"gcp-auth\" not found"
	Oct 27 18:58:30 addons-589824 kubelet[1300]: I1027 18:58:30.896468    1300 scope.go:117] "RemoveContainer" containerID="4230f73e9fac20ed3b8263d4cf87d3a8712034118f9aebc28912a977fc3d9cf7"
	Oct 27 18:58:31 addons-589824 kubelet[1300]: I1027 18:58:31.920030    1300 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-675c5ddd98-kvnzw" podStartSLOduration=57.886142118 podStartE2EDuration="1m18.920006369s" podCreationTimestamp="2025-10-27 18:57:13 +0000 UTC" firstStartedPulling="2025-10-27 18:58:09.469449006 +0000 UTC m=+62.991770342" lastFinishedPulling="2025-10-27 18:58:30.503313253 +0000 UTC m=+84.025634593" observedRunningTime="2025-10-27 18:58:30.921019047 +0000 UTC m=+84.443340395" watchObservedRunningTime="2025-10-27 18:58:31.920006369 +0000 UTC m=+85.442327717"
	Oct 27 18:58:32 addons-589824 kubelet[1300]: I1027 18:58:32.111640    1300 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lbgqs\" (UniqueName: \"kubernetes.io/projected/fbdcf491-043d-4396-a4f2-711c69447ba5-kube-api-access-lbgqs\") pod \"fbdcf491-043d-4396-a4f2-711c69447ba5\" (UID: \"fbdcf491-043d-4396-a4f2-711c69447ba5\") "
	Oct 27 18:58:32 addons-589824 kubelet[1300]: I1027 18:58:32.114437    1300 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbdcf491-043d-4396-a4f2-711c69447ba5-kube-api-access-lbgqs" (OuterVolumeSpecName: "kube-api-access-lbgqs") pod "fbdcf491-043d-4396-a4f2-711c69447ba5" (UID: "fbdcf491-043d-4396-a4f2-711c69447ba5"). InnerVolumeSpecName "kube-api-access-lbgqs". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Oct 27 18:58:32 addons-589824 kubelet[1300]: I1027 18:58:32.212861    1300 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lbgqs\" (UniqueName: \"kubernetes.io/projected/fbdcf491-043d-4396-a4f2-711c69447ba5-kube-api-access-lbgqs\") on node \"addons-589824\" DevicePath \"\""
	Oct 27 18:58:32 addons-589824 kubelet[1300]: I1027 18:58:32.910706    1300 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4005281ea1782bfb1b5ef2fa7ccdc6874fca028e314dcdcc3335e84038944d1d"
	Oct 27 18:58:32 addons-589824 kubelet[1300]: I1027 18:58:32.929055    1300 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gadget/gadget-vwv62" podStartSLOduration=66.557416553 podStartE2EDuration="1m19.929031141s" podCreationTimestamp="2025-10-27 18:57:13 +0000 UTC" firstStartedPulling="2025-10-27 18:58:19.444053949 +0000 UTC m=+72.966375290" lastFinishedPulling="2025-10-27 18:58:32.815668538 +0000 UTC m=+86.337989878" observedRunningTime="2025-10-27 18:58:32.927812543 +0000 UTC m=+86.450133891" watchObservedRunningTime="2025-10-27 18:58:32.929031141 +0000 UTC m=+86.451352488"
	Oct 27 18:58:36 addons-589824 kubelet[1300]: I1027 18:58:36.620685    1300 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Oct 27 18:58:36 addons-589824 kubelet[1300]: I1027 18:58:36.620751    1300 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Oct 27 18:58:37 addons-589824 kubelet[1300]: I1027 18:58:37.973626    1300 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-78565c9fb4-kxlcv" podStartSLOduration=68.762720498 podStartE2EDuration="1m17.973601827s" podCreationTimestamp="2025-10-27 18:57:20 +0000 UTC" firstStartedPulling="2025-10-27 18:58:26.08441244 +0000 UTC m=+79.606733771" lastFinishedPulling="2025-10-27 18:58:35.295293773 +0000 UTC m=+88.817615100" observedRunningTime="2025-10-27 18:58:35.942810847 +0000 UTC m=+89.465132195" watchObservedRunningTime="2025-10-27 18:58:37.973601827 +0000 UTC m=+91.495923178"
	Oct 27 18:58:39 addons-589824 kubelet[1300]: I1027 18:58:39.053520    1300 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-jlszq" podStartSLOduration=1.1751493179999999 podStartE2EDuration="46.053483061s" podCreationTimestamp="2025-10-27 18:57:53 +0000 UTC" firstStartedPulling="2025-10-27 18:57:53.621199242 +0000 UTC m=+47.143520581" lastFinishedPulling="2025-10-27 18:58:38.499532995 +0000 UTC m=+92.021854324" observedRunningTime="2025-10-27 18:58:38.971026379 +0000 UTC m=+92.493347750" watchObservedRunningTime="2025-10-27 18:58:39.053483061 +0000 UTC m=+92.575804408"
	Oct 27 18:58:52 addons-589824 kubelet[1300]: I1027 18:58:52.568407    1300 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1819b07e-1565-434e-80bc-cee407cd774f" path="/var/lib/kubelet/pods/1819b07e-1565-434e-80bc-cee407cd774f/volumes"
	Oct 27 18:58:57 addons-589824 kubelet[1300]: E1027 18:58:57.098692    1300 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Oct 27 18:58:57 addons-589824 kubelet[1300]: E1027 18:58:57.098834    1300 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a18b1d31-61dd-4c8e-864d-c77043f43d5c-gcr-creds podName:a18b1d31-61dd-4c8e-864d-c77043f43d5c nodeName:}" failed. No retries permitted until 2025-10-27 19:00:01.098808254 +0000 UTC m=+174.621129604 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/a18b1d31-61dd-4c8e-864d-c77043f43d5c-gcr-creds") pod "registry-creds-764b6fb674-bmdlm" (UID: "a18b1d31-61dd-4c8e-864d-c77043f43d5c") : secret "registry-creds-gcr" not found
	Oct 27 18:59:01 addons-589824 kubelet[1300]: I1027 18:59:01.030105    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgwwn\" (UniqueName: \"kubernetes.io/projected/dc1de28b-fce1-4ef6-a84d-5048ef8d2018-kube-api-access-tgwwn\") pod \"busybox\" (UID: \"dc1de28b-fce1-4ef6-a84d-5048ef8d2018\") " pod="default/busybox"
	Oct 27 18:59:01 addons-589824 kubelet[1300]: I1027 18:59:01.030239    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/dc1de28b-fce1-4ef6-a84d-5048ef8d2018-gcp-creds\") pod \"busybox\" (UID: \"dc1de28b-fce1-4ef6-a84d-5048ef8d2018\") " pod="default/busybox"
	Oct 27 18:59:02 addons-589824 kubelet[1300]: I1027 18:59:02.073461    1300 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.447806413 podStartE2EDuration="2.073437797s" podCreationTimestamp="2025-10-27 18:59:00 +0000 UTC" firstStartedPulling="2025-10-27 18:59:01.330702808 +0000 UTC m=+114.853024135" lastFinishedPulling="2025-10-27 18:59:01.956334185 +0000 UTC m=+115.478655519" observedRunningTime="2025-10-27 18:59:02.072298063 +0000 UTC m=+115.594619411" watchObservedRunningTime="2025-10-27 18:59:02.073437797 +0000 UTC m=+115.595759146"
	Oct 27 18:59:02 addons-589824 kubelet[1300]: I1027 18:59:02.568076    1300 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fbdcf491-043d-4396-a4f2-711c69447ba5" path="/var/lib/kubelet/pods/fbdcf491-043d-4396-a4f2-711c69447ba5/volumes"
	Oct 27 18:59:06 addons-589824 kubelet[1300]: I1027 18:59:06.559855    1300 scope.go:117] "RemoveContainer" containerID="f688098927c32ff0aad0204e72b4c3ce38c74aea4b948c4ff24de44d2545452a"
	Oct 27 18:59:06 addons-589824 kubelet[1300]: I1027 18:59:06.571234    1300 scope.go:117] "RemoveContainer" containerID="06b6d151df30173b83782de22b55709d7c0807ada6fa0b9e867fbd8f27f5b7e8"
	Oct 27 18:59:10 addons-589824 kubelet[1300]: I1027 18:59:10.098527    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rc6tt\" (UniqueName: \"kubernetes.io/projected/4e8f6ee2-441e-480b-93e3-44362001a683-kube-api-access-rc6tt\") pod \"nginx\" (UID: \"4e8f6ee2-441e-480b-93e3-44362001a683\") " pod="default/nginx"
	Oct 27 18:59:10 addons-589824 kubelet[1300]: I1027 18:59:10.098593    1300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/4e8f6ee2-441e-480b-93e3-44362001a683-gcp-creds\") pod \"nginx\" (UID: \"4e8f6ee2-441e-480b-93e3-44362001a683\") " pod="default/nginx"
	
	
	==> storage-provisioner [ba1ddd191addfbafb743bfd31989a110bd5b0f58f7479075c129e528745e7798] <==
	W1027 18:58:45.934073       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 18:58:47.937657       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 18:58:47.942443       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 18:58:49.945280       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 18:58:49.951005       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 18:58:51.954767       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 18:58:51.959349       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 18:58:53.963214       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 18:58:53.967414       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 18:58:55.971415       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 18:58:55.980395       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 18:58:57.983069       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 18:58:57.988472       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 18:58:59.992192       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 18:58:59.996840       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 18:59:02.000076       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 18:59:02.004784       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 18:59:04.009277       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 18:59:04.014473       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 18:59:06.017635       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 18:59:06.021907       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 18:59:08.024726       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 18:59:08.028754       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 18:59:10.032283       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 18:59:10.043833       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
-- /stdout --
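The storage-provisioner log above repeats the same client-go deprecation warning every two seconds: something in that pod (presumably its Endpoints-based leader-election/watch loop) still reads the legacy v1 Endpoints API, which Kubernetes deprecates in favor of discovery.k8s.io/v1 EndpointSlice. A minimal client-go sketch of the suggested replacement, assuming in-cluster credentials (rest.InClusterConfig is my choice here, not anything the log shows):

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		// Assumes this runs inside a pod with a service account, as the
		// storage-provisioner itself does.
		cfg, err := rest.InClusterConfig()
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// discovery.k8s.io/v1 EndpointSlice is the replacement the warning names.
		slices, err := client.DiscoveryV1().EndpointSlices("kube-system").
			List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, s := range slices.Items {
			fmt.Printf("%s: %d endpoints\n", s.Name, len(s.Endpoints))
		}
	}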
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-589824 -n addons-589824
helpers_test.go:269: (dbg) Run:  kubectl --context addons-589824 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: nginx ingress-nginx-admission-create-j8h7h ingress-nginx-admission-patch-l7t7k registry-creds-764b6fb674-bmdlm
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-589824 describe pod nginx ingress-nginx-admission-create-j8h7h ingress-nginx-admission-patch-l7t7k registry-creds-764b6fb674-bmdlm
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-589824 describe pod nginx ingress-nginx-admission-create-j8h7h ingress-nginx-admission-patch-l7t7k registry-creds-764b6fb674-bmdlm: exit status 1 (89.028774ms)
-- stdout --
	Name:             nginx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-589824/192.168.49.2
	Start Time:       Mon, 27 Oct 2025 18:59:10 +0000
	Labels:           run=nginx
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  nginx:
	    Container ID:   
	    Image:          docker.io/nginx:alpine
	    Image ID:       
	    Port:           80/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rc6tt (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-rc6tt:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  3s    default-scheduler  Successfully assigned default/nginx to addons-589824
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/nginx:alpine"
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-j8h7h" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-l7t7k" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-bmdlm" not found
** /stderr **
helpers_test.go:287: kubectl --context addons-589824 describe pod nginx ingress-nginx-admission-create-j8h7h ingress-nginx-admission-patch-l7t7k registry-creds-764b6fb674-bmdlm: exit status 1
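Worth noting (my reading of kubectl's behavior, not something the harness asserts): `kubectl describe pod a b c` exits 1 as soon as any named pod is NotFound, even though it still prints the pods it did find, which is why the complete nginx description above coexists with exit status 1. Describing each pod separately keeps one missing name from failing the whole call; a rough sketch, assuming kubectl and the addons-589824 context are on PATH:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		pods := []string{
			"nginx",
			"ingress-nginx-admission-create-j8h7h",
			"ingress-nginx-admission-patch-l7t7k",
		}
		for _, p := range pods {
			out, err := exec.Command("kubectl", "--context", "addons-589824",
				"describe", "pod", p).CombinedOutput()
			if err != nil {
				// A NotFound pod only fails its own describe call here.
				fmt.Printf("%s: %v\n%s", p, err, out)
				continue
			}
			fmt.Printf("%s:\n%s\n", p, out)
		}
	}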
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-589824 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-589824 addons disable headlamp --alsologtostderr -v=1: exit status 11 (291.562107ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1027 18:59:12.314428  367528 out.go:360] Setting OutFile to fd 1 ...
	I1027 18:59:12.314757  367528 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 18:59:12.314770  367528 out.go:374] Setting ErrFile to fd 2...
	I1027 18:59:12.314776  367528 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 18:59:12.315088  367528 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-352833/.minikube/bin
	I1027 18:59:12.315522  367528 mustload.go:65] Loading cluster: addons-589824
	I1027 18:59:12.316026  367528 config.go:182] Loaded profile config "addons-589824": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 18:59:12.316049  367528 addons.go:606] checking whether the cluster is paused
	I1027 18:59:12.316193  367528 config.go:182] Loaded profile config "addons-589824": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 18:59:12.316219  367528 host.go:66] Checking if "addons-589824" exists ...
	I1027 18:59:12.316772  367528 cli_runner.go:164] Run: docker container inspect addons-589824 --format={{.State.Status}}
	I1027 18:59:12.338949  367528 ssh_runner.go:195] Run: systemctl --version
	I1027 18:59:12.339039  367528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-589824
	I1027 18:59:12.362738  367528 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/addons-589824/id_rsa Username:docker}
	I1027 18:59:12.469882  367528 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 18:59:12.469966  367528 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 18:59:12.501650  367528 cri.go:89] found id: "0a17a4745cc1a6104ea6432d9fd60dac6e6abe764b5d1330d69426fa0b74a6ab"
	I1027 18:59:12.501693  367528 cri.go:89] found id: "a30f678907200483df6ff7630d767bc8daa14ce81d7f9088b61ad45ee3d0afab"
	I1027 18:59:12.501698  367528 cri.go:89] found id: "db7343377b38897cf4a8cf603f6e486663fecd5587924e1ed818db6d54bdcce6"
	I1027 18:59:12.501702  367528 cri.go:89] found id: "71e53e748e01fc8c91ffa4fb8b7865bea26bcbe65dcba958949295c6f0037da7"
	I1027 18:59:12.501706  367528 cri.go:89] found id: "56024f3c5df317e559a2fc01d91706e2a21e755612591d33569756c8b235a739"
	I1027 18:59:12.501711  367528 cri.go:89] found id: "ef768854ff28223563c69a32d2834fab10262b7e6a6963c625600582d59b9e51"
	I1027 18:59:12.501715  367528 cri.go:89] found id: "76e187a2847661d9eb59daefd89617bc458e7238cd87c5b6b4e6c6f1884d4826"
	I1027 18:59:12.501719  367528 cri.go:89] found id: "0c23d9067a021958f6e78dae17e3e314bb8f01a59a277d6d231a1c91ac243402"
	I1027 18:59:12.501723  367528 cri.go:89] found id: "6feb37f12d4a362a4be9862cfb4d525092b27f5c8806b5fe7f3e6992e40865b1"
	I1027 18:59:12.501738  367528 cri.go:89] found id: "2dc898f8fa5b3f56f21afaa0584bf9b0ee67ad474e08c141d382bf6352ffb103"
	I1027 18:59:12.501742  367528 cri.go:89] found id: "27f1c94c3f5736bca109359ef14c6315dca30f3a92e432a313912785f638d339"
	I1027 18:59:12.501746  367528 cri.go:89] found id: "b7494b1ab076bec5211fe9aa45d869fd06dce709b51652f81a21756c0087c5dc"
	I1027 18:59:12.501750  367528 cri.go:89] found id: "2f642c7cbe9094287b843be457ec991af2d6a4e3a7c89d0cef2628b88a0df390"
	I1027 18:59:12.501754  367528 cri.go:89] found id: "ca7a93241189c56d1808a8b7fb428d8057429bed2f6554b65716f5aeecd49b88"
	I1027 18:59:12.501757  367528 cri.go:89] found id: "2095fff76306861533792ed7f54dec0997d67f3656557a857ff7af3b00429cda"
	I1027 18:59:12.501778  367528 cri.go:89] found id: "eede6880efbc9e505b955efd78f6cc85e44d1edb5f142fe3df44034a4341a14f"
	I1027 18:59:12.501790  367528 cri.go:89] found id: "ba1ddd191addfbafb743bfd31989a110bd5b0f58f7479075c129e528745e7798"
	I1027 18:59:12.501798  367528 cri.go:89] found id: "abbe027d3dc3b813b338a56e8cabab82e03eb9b112b7b850abb79fefe6d06ad7"
	I1027 18:59:12.501802  367528 cri.go:89] found id: "12e10d7e88fff07d51f12a561be95b0933cdc57cc59e0f478fe8964c53f1806b"
	I1027 18:59:12.501806  367528 cri.go:89] found id: "6d05a2b6be1fb2b8475a215eb50681a592a20257978b9da0091741666c9fa5c6"
	I1027 18:59:12.501810  367528 cri.go:89] found id: "c02f8fc8e6a7392b824780b7cf27bac4f0cee905aafadcc2295bf2775ce85316"
	I1027 18:59:12.501814  367528 cri.go:89] found id: "95468d8526baeb9ed07c582a77c3593017052fb17f3ce84741a67f91794b7400"
	I1027 18:59:12.501818  367528 cri.go:89] found id: "81cd0a11514aba345e443fd708bb0a4b65a29f336aec8643a57037ceeda8aefe"
	I1027 18:59:12.501822  367528 cri.go:89] found id: "f25d173d59b5ba978f27e915fc30ff6e02ab5bba952c2af598b464a59edc1987"
	I1027 18:59:12.501826  367528 cri.go:89] found id: ""
	I1027 18:59:12.501893  367528 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 18:59:12.518794  367528 out.go:203] 
	W1027 18:59:12.520246  367528 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T18:59:12Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T18:59:12Z" level=error msg="open /run/runc: no such file or directory"
	
	W1027 18:59:12.520272  367528 out.go:285] * 
	* 
	W1027 18:59:12.524284  367528 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1027 18:59:12.525753  367528 out.go:203] 
** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-amd64 -p addons-589824 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (2.96s)

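Every `addons disable` failure in this report bottoms out in the same probe: after enumerating kube-system containers with crictl, minikube checks for paused containers by shelling out to `sudo runc list -f json`, which exits 1 because `/run/runc` does not exist on this node. A plausible cause (an assumption on my part, not stated in the log) is that this crio configuration uses crun as its default OCI runtime, so runc has no state directory to read. A minimal sketch of the probe with a hypothetical crun fallback, not minikube's actual code:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// listContainers shells out to an OCI runtime's `list -f json`, mirroring
	// the failing `sudo runc list -f json` step above.
	func listContainers(runtime string) ([]byte, error) {
		out, err := exec.Command("sudo", runtime, "list", "-f", "json").Output()
		if err != nil {
			return nil, fmt.Errorf("%s list -f json: %w", runtime, err)
		}
		return out, nil
	}

	func main() {
		state, err := listContainers("runc")
		if err != nil {
			// Hypothetical fallback: on crio nodes whose default runtime is
			// crun, /run/runc never exists, but crun can report the same state.
			state, err = listContainers("crun")
		}
		if err != nil {
			fmt.Println("pause check failed:", err)
			return
		}
		fmt.Printf("%s\n", state)
	}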
TestAddons/parallel/CloudSpanner (5.29s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-86bd5cbb97-rt6dx" [c1bfb413-0358-489e-9500-98775d7fcb5e] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003745228s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-589824 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-589824 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (279.536715ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1027 18:59:27.690564  368992 out.go:360] Setting OutFile to fd 1 ...
	I1027 18:59:27.690887  368992 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 18:59:27.690899  368992 out.go:374] Setting ErrFile to fd 2...
	I1027 18:59:27.690903  368992 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 18:59:27.691122  368992 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-352833/.minikube/bin
	I1027 18:59:27.691429  368992 mustload.go:65] Loading cluster: addons-589824
	I1027 18:59:27.691854  368992 config.go:182] Loaded profile config "addons-589824": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 18:59:27.691880  368992 addons.go:606] checking whether the cluster is paused
	I1027 18:59:27.691994  368992 config.go:182] Loaded profile config "addons-589824": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 18:59:27.692015  368992 host.go:66] Checking if "addons-589824" exists ...
	I1027 18:59:27.692480  368992 cli_runner.go:164] Run: docker container inspect addons-589824 --format={{.State.Status}}
	I1027 18:59:27.713638  368992 ssh_runner.go:195] Run: systemctl --version
	I1027 18:59:27.713690  368992 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-589824
	I1027 18:59:27.733717  368992 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/addons-589824/id_rsa Username:docker}
	I1027 18:59:27.835297  368992 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 18:59:27.835397  368992 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 18:59:27.870372  368992 cri.go:89] found id: "0a17a4745cc1a6104ea6432d9fd60dac6e6abe764b5d1330d69426fa0b74a6ab"
	I1027 18:59:27.870414  368992 cri.go:89] found id: "a30f678907200483df6ff7630d767bc8daa14ce81d7f9088b61ad45ee3d0afab"
	I1027 18:59:27.870421  368992 cri.go:89] found id: "db7343377b38897cf4a8cf603f6e486663fecd5587924e1ed818db6d54bdcce6"
	I1027 18:59:27.870426  368992 cri.go:89] found id: "71e53e748e01fc8c91ffa4fb8b7865bea26bcbe65dcba958949295c6f0037da7"
	I1027 18:59:27.870430  368992 cri.go:89] found id: "56024f3c5df317e559a2fc01d91706e2a21e755612591d33569756c8b235a739"
	I1027 18:59:27.870435  368992 cri.go:89] found id: "ef768854ff28223563c69a32d2834fab10262b7e6a6963c625600582d59b9e51"
	I1027 18:59:27.870439  368992 cri.go:89] found id: "76e187a2847661d9eb59daefd89617bc458e7238cd87c5b6b4e6c6f1884d4826"
	I1027 18:59:27.870443  368992 cri.go:89] found id: "0c23d9067a021958f6e78dae17e3e314bb8f01a59a277d6d231a1c91ac243402"
	I1027 18:59:27.870448  368992 cri.go:89] found id: "6feb37f12d4a362a4be9862cfb4d525092b27f5c8806b5fe7f3e6992e40865b1"
	I1027 18:59:27.870522  368992 cri.go:89] found id: "2dc898f8fa5b3f56f21afaa0584bf9b0ee67ad474e08c141d382bf6352ffb103"
	I1027 18:59:27.870538  368992 cri.go:89] found id: "27f1c94c3f5736bca109359ef14c6315dca30f3a92e432a313912785f638d339"
	I1027 18:59:27.870542  368992 cri.go:89] found id: "b7494b1ab076bec5211fe9aa45d869fd06dce709b51652f81a21756c0087c5dc"
	I1027 18:59:27.870547  368992 cri.go:89] found id: "2f642c7cbe9094287b843be457ec991af2d6a4e3a7c89d0cef2628b88a0df390"
	I1027 18:59:27.870573  368992 cri.go:89] found id: "ca7a93241189c56d1808a8b7fb428d8057429bed2f6554b65716f5aeecd49b88"
	I1027 18:59:27.870582  368992 cri.go:89] found id: "2095fff76306861533792ed7f54dec0997d67f3656557a857ff7af3b00429cda"
	I1027 18:59:27.870590  368992 cri.go:89] found id: "eede6880efbc9e505b955efd78f6cc85e44d1edb5f142fe3df44034a4341a14f"
	I1027 18:59:27.870594  368992 cri.go:89] found id: "ba1ddd191addfbafb743bfd31989a110bd5b0f58f7479075c129e528745e7798"
	I1027 18:59:27.870599  368992 cri.go:89] found id: "abbe027d3dc3b813b338a56e8cabab82e03eb9b112b7b850abb79fefe6d06ad7"
	I1027 18:59:27.870602  368992 cri.go:89] found id: "12e10d7e88fff07d51f12a561be95b0933cdc57cc59e0f478fe8964c53f1806b"
	I1027 18:59:27.870607  368992 cri.go:89] found id: "6d05a2b6be1fb2b8475a215eb50681a592a20257978b9da0091741666c9fa5c6"
	I1027 18:59:27.870612  368992 cri.go:89] found id: "c02f8fc8e6a7392b824780b7cf27bac4f0cee905aafadcc2295bf2775ce85316"
	I1027 18:59:27.870618  368992 cri.go:89] found id: "95468d8526baeb9ed07c582a77c3593017052fb17f3ce84741a67f91794b7400"
	I1027 18:59:27.870625  368992 cri.go:89] found id: "81cd0a11514aba345e443fd708bb0a4b65a29f336aec8643a57037ceeda8aefe"
	I1027 18:59:27.870630  368992 cri.go:89] found id: "f25d173d59b5ba978f27e915fc30ff6e02ab5bba952c2af598b464a59edc1987"
	I1027 18:59:27.870637  368992 cri.go:89] found id: ""
	I1027 18:59:27.870694  368992 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 18:59:27.887598  368992 out.go:203] 
	W1027 18:59:27.889041  368992 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T18:59:27Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T18:59:27Z" level=error msg="open /run/runc: no such file or directory"
	
	W1027 18:59:27.889062  368992 out.go:285] * 
	* 
	W1027 18:59:27.894228  368992 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1027 18:59:27.895970  368992 out.go:203] 
** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-amd64 -p addons-589824 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.29s)

TestAddons/parallel/LocalPath (8.17s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-589824 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-589824 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-589824 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-589824 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-589824 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-589824 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-589824 get pvc test-pvc -o jsonpath={.status.phase} -n default
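The five identical `get pvc` calls above are the harness polling `.status.phase` until the claim leaves Pending; with local-path's WaitForFirstConsumer binding mode the PVC normally binds only once the test-local-path pod is scheduled. A rough equivalent of that poll loop (a sketch with an assumed 2-second interval, not the helpers_test.go code):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func main() {
		deadline := time.Now().Add(5 * time.Minute)
		for time.Now().Before(deadline) {
			out, err := exec.Command("kubectl", "--context", "addons-589824",
				"get", "pvc", "test-pvc", "-n", "default",
				"-o", "jsonpath={.status.phase}").Output()
			if err == nil && strings.TrimSpace(string(out)) == "Bound" {
				fmt.Println("pvc bound")
				return
			}
			time.Sleep(2 * time.Second) // poll interval is an assumption
		}
		fmt.Println("timed out waiting for test-pvc")
	}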
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [e67c5cca-602d-458b-b5e9-976f09fb9b49] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [e67c5cca-602d-458b-b5e9-976f09fb9b49] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [e67c5cca-602d-458b-b5e9-976f09fb9b49] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003439598s
addons_test.go:967: (dbg) Run:  kubectl --context addons-589824 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-589824 ssh "cat /opt/local-path-provisioner/pvc-d2b921e4-c965-436a-9594-13b4f6318e7a_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-589824 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-589824 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-589824 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-589824 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (275.752986ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1027 18:59:28.116736  369169 out.go:360] Setting OutFile to fd 1 ...
	I1027 18:59:28.117059  369169 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 18:59:28.117070  369169 out.go:374] Setting ErrFile to fd 2...
	I1027 18:59:28.117075  369169 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 18:59:28.117322  369169 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-352833/.minikube/bin
	I1027 18:59:28.117612  369169 mustload.go:65] Loading cluster: addons-589824
	I1027 18:59:28.117957  369169 config.go:182] Loaded profile config "addons-589824": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 18:59:28.117972  369169 addons.go:606] checking whether the cluster is paused
	I1027 18:59:28.118052  369169 config.go:182] Loaded profile config "addons-589824": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 18:59:28.118069  369169 host.go:66] Checking if "addons-589824" exists ...
	I1027 18:59:28.118454  369169 cli_runner.go:164] Run: docker container inspect addons-589824 --format={{.State.Status}}
	I1027 18:59:28.137611  369169 ssh_runner.go:195] Run: systemctl --version
	I1027 18:59:28.137668  369169 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-589824
	I1027 18:59:28.158230  369169 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/addons-589824/id_rsa Username:docker}
	I1027 18:59:28.259323  369169 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 18:59:28.259423  369169 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 18:59:28.292346  369169 cri.go:89] found id: "0a17a4745cc1a6104ea6432d9fd60dac6e6abe764b5d1330d69426fa0b74a6ab"
	I1027 18:59:28.292367  369169 cri.go:89] found id: "a30f678907200483df6ff7630d767bc8daa14ce81d7f9088b61ad45ee3d0afab"
	I1027 18:59:28.292371  369169 cri.go:89] found id: "db7343377b38897cf4a8cf603f6e486663fecd5587924e1ed818db6d54bdcce6"
	I1027 18:59:28.292374  369169 cri.go:89] found id: "71e53e748e01fc8c91ffa4fb8b7865bea26bcbe65dcba958949295c6f0037da7"
	I1027 18:59:28.292377  369169 cri.go:89] found id: "56024f3c5df317e559a2fc01d91706e2a21e755612591d33569756c8b235a739"
	I1027 18:59:28.292382  369169 cri.go:89] found id: "ef768854ff28223563c69a32d2834fab10262b7e6a6963c625600582d59b9e51"
	I1027 18:59:28.292385  369169 cri.go:89] found id: "76e187a2847661d9eb59daefd89617bc458e7238cd87c5b6b4e6c6f1884d4826"
	I1027 18:59:28.292388  369169 cri.go:89] found id: "0c23d9067a021958f6e78dae17e3e314bb8f01a59a277d6d231a1c91ac243402"
	I1027 18:59:28.292391  369169 cri.go:89] found id: "6feb37f12d4a362a4be9862cfb4d525092b27f5c8806b5fe7f3e6992e40865b1"
	I1027 18:59:28.292400  369169 cri.go:89] found id: "2dc898f8fa5b3f56f21afaa0584bf9b0ee67ad474e08c141d382bf6352ffb103"
	I1027 18:59:28.292403  369169 cri.go:89] found id: "27f1c94c3f5736bca109359ef14c6315dca30f3a92e432a313912785f638d339"
	I1027 18:59:28.292405  369169 cri.go:89] found id: "b7494b1ab076bec5211fe9aa45d869fd06dce709b51652f81a21756c0087c5dc"
	I1027 18:59:28.292407  369169 cri.go:89] found id: "2f642c7cbe9094287b843be457ec991af2d6a4e3a7c89d0cef2628b88a0df390"
	I1027 18:59:28.292410  369169 cri.go:89] found id: "ca7a93241189c56d1808a8b7fb428d8057429bed2f6554b65716f5aeecd49b88"
	I1027 18:59:28.292412  369169 cri.go:89] found id: "2095fff76306861533792ed7f54dec0997d67f3656557a857ff7af3b00429cda"
	I1027 18:59:28.292416  369169 cri.go:89] found id: "eede6880efbc9e505b955efd78f6cc85e44d1edb5f142fe3df44034a4341a14f"
	I1027 18:59:28.292419  369169 cri.go:89] found id: "ba1ddd191addfbafb743bfd31989a110bd5b0f58f7479075c129e528745e7798"
	I1027 18:59:28.292423  369169 cri.go:89] found id: "abbe027d3dc3b813b338a56e8cabab82e03eb9b112b7b850abb79fefe6d06ad7"
	I1027 18:59:28.292425  369169 cri.go:89] found id: "12e10d7e88fff07d51f12a561be95b0933cdc57cc59e0f478fe8964c53f1806b"
	I1027 18:59:28.292427  369169 cri.go:89] found id: "6d05a2b6be1fb2b8475a215eb50681a592a20257978b9da0091741666c9fa5c6"
	I1027 18:59:28.292435  369169 cri.go:89] found id: "c02f8fc8e6a7392b824780b7cf27bac4f0cee905aafadcc2295bf2775ce85316"
	I1027 18:59:28.292437  369169 cri.go:89] found id: "95468d8526baeb9ed07c582a77c3593017052fb17f3ce84741a67f91794b7400"
	I1027 18:59:28.292440  369169 cri.go:89] found id: "81cd0a11514aba345e443fd708bb0a4b65a29f336aec8643a57037ceeda8aefe"
	I1027 18:59:28.292442  369169 cri.go:89] found id: "f25d173d59b5ba978f27e915fc30ff6e02ab5bba952c2af598b464a59edc1987"
	I1027 18:59:28.292445  369169 cri.go:89] found id: ""
	I1027 18:59:28.292489  369169 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 18:59:28.309984  369169 out.go:203] 
	W1027 18:59:28.311326  369169 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T18:59:28Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T18:59:28Z" level=error msg="open /run/runc: no such file or directory"
	
	W1027 18:59:28.311355  369169 out.go:285] * 
	* 
	W1027 18:59:28.316099  369169 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1027 18:59:28.317883  369169 out.go:203] 
** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-amd64 -p addons-589824 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (8.17s)

TestAddons/parallel/NvidiaDevicePlugin (5.26s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-5m5rl" [911fc5e9-aa0b-494e-8eff-0c513d2b6625] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003385589s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-589824 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-589824 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (257.332349ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1027 18:59:22.851859  368638 out.go:360] Setting OutFile to fd 1 ...
	I1027 18:59:22.852126  368638 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 18:59:22.852150  368638 out.go:374] Setting ErrFile to fd 2...
	I1027 18:59:22.852155  368638 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 18:59:22.852338  368638 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-352833/.minikube/bin
	I1027 18:59:22.852598  368638 mustload.go:65] Loading cluster: addons-589824
	I1027 18:59:22.852938  368638 config.go:182] Loaded profile config "addons-589824": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 18:59:22.852955  368638 addons.go:606] checking whether the cluster is paused
	I1027 18:59:22.853031  368638 config.go:182] Loaded profile config "addons-589824": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 18:59:22.853048  368638 host.go:66] Checking if "addons-589824" exists ...
	I1027 18:59:22.853416  368638 cli_runner.go:164] Run: docker container inspect addons-589824 --format={{.State.Status}}
	I1027 18:59:22.871589  368638 ssh_runner.go:195] Run: systemctl --version
	I1027 18:59:22.871658  368638 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-589824
	I1027 18:59:22.893003  368638 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/addons-589824/id_rsa Username:docker}
	I1027 18:59:22.993965  368638 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 18:59:22.994058  368638 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 18:59:23.024430  368638 cri.go:89] found id: "0a17a4745cc1a6104ea6432d9fd60dac6e6abe764b5d1330d69426fa0b74a6ab"
	I1027 18:59:23.024456  368638 cri.go:89] found id: "a30f678907200483df6ff7630d767bc8daa14ce81d7f9088b61ad45ee3d0afab"
	I1027 18:59:23.024462  368638 cri.go:89] found id: "db7343377b38897cf4a8cf603f6e486663fecd5587924e1ed818db6d54bdcce6"
	I1027 18:59:23.024467  368638 cri.go:89] found id: "71e53e748e01fc8c91ffa4fb8b7865bea26bcbe65dcba958949295c6f0037da7"
	I1027 18:59:23.024470  368638 cri.go:89] found id: "56024f3c5df317e559a2fc01d91706e2a21e755612591d33569756c8b235a739"
	I1027 18:59:23.024473  368638 cri.go:89] found id: "ef768854ff28223563c69a32d2834fab10262b7e6a6963c625600582d59b9e51"
	I1027 18:59:23.024475  368638 cri.go:89] found id: "76e187a2847661d9eb59daefd89617bc458e7238cd87c5b6b4e6c6f1884d4826"
	I1027 18:59:23.024478  368638 cri.go:89] found id: "0c23d9067a021958f6e78dae17e3e314bb8f01a59a277d6d231a1c91ac243402"
	I1027 18:59:23.024480  368638 cri.go:89] found id: "6feb37f12d4a362a4be9862cfb4d525092b27f5c8806b5fe7f3e6992e40865b1"
	I1027 18:59:23.024486  368638 cri.go:89] found id: "2dc898f8fa5b3f56f21afaa0584bf9b0ee67ad474e08c141d382bf6352ffb103"
	I1027 18:59:23.024496  368638 cri.go:89] found id: "27f1c94c3f5736bca109359ef14c6315dca30f3a92e432a313912785f638d339"
	I1027 18:59:23.024515  368638 cri.go:89] found id: "b7494b1ab076bec5211fe9aa45d869fd06dce709b51652f81a21756c0087c5dc"
	I1027 18:59:23.024521  368638 cri.go:89] found id: "2f642c7cbe9094287b843be457ec991af2d6a4e3a7c89d0cef2628b88a0df390"
	I1027 18:59:23.024523  368638 cri.go:89] found id: "ca7a93241189c56d1808a8b7fb428d8057429bed2f6554b65716f5aeecd49b88"
	I1027 18:59:23.024525  368638 cri.go:89] found id: "2095fff76306861533792ed7f54dec0997d67f3656557a857ff7af3b00429cda"
	I1027 18:59:23.024529  368638 cri.go:89] found id: "eede6880efbc9e505b955efd78f6cc85e44d1edb5f142fe3df44034a4341a14f"
	I1027 18:59:23.024532  368638 cri.go:89] found id: "ba1ddd191addfbafb743bfd31989a110bd5b0f58f7479075c129e528745e7798"
	I1027 18:59:23.024535  368638 cri.go:89] found id: "abbe027d3dc3b813b338a56e8cabab82e03eb9b112b7b850abb79fefe6d06ad7"
	I1027 18:59:23.024538  368638 cri.go:89] found id: "12e10d7e88fff07d51f12a561be95b0933cdc57cc59e0f478fe8964c53f1806b"
	I1027 18:59:23.024540  368638 cri.go:89] found id: "6d05a2b6be1fb2b8475a215eb50681a592a20257978b9da0091741666c9fa5c6"
	I1027 18:59:23.024543  368638 cri.go:89] found id: "c02f8fc8e6a7392b824780b7cf27bac4f0cee905aafadcc2295bf2775ce85316"
	I1027 18:59:23.024545  368638 cri.go:89] found id: "95468d8526baeb9ed07c582a77c3593017052fb17f3ce84741a67f91794b7400"
	I1027 18:59:23.024547  368638 cri.go:89] found id: "81cd0a11514aba345e443fd708bb0a4b65a29f336aec8643a57037ceeda8aefe"
	I1027 18:59:23.024549  368638 cri.go:89] found id: "f25d173d59b5ba978f27e915fc30ff6e02ab5bba952c2af598b464a59edc1987"
	I1027 18:59:23.024552  368638 cri.go:89] found id: ""
	I1027 18:59:23.024598  368638 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 18:59:23.039260  368638 out.go:203] 
	W1027 18:59:23.040630  368638 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T18:59:23Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T18:59:23Z" level=error msg="open /run/runc: no such file or directory"
	
	W1027 18:59:23.040645  368638 out.go:285] * 
	* 
	W1027 18:59:23.044603  368638 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1027 18:59:23.046179  368638 out.go:203] 
** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-amd64 -p addons-589824 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (5.26s)

TestAddons/parallel/Yakd (5.26s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-m5mql" [46a72cdb-ebc4-4287-afb3-95c8b797d750] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003301913s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-589824 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-589824 addons disable yakd --alsologtostderr -v=1: exit status 11 (252.673451ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1027 18:59:19.952916  368255 out.go:360] Setting OutFile to fd 1 ...
	I1027 18:59:19.953213  368255 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 18:59:19.953223  368255 out.go:374] Setting ErrFile to fd 2...
	I1027 18:59:19.953228  368255 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 18:59:19.953442  368255 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-352833/.minikube/bin
	I1027 18:59:19.953710  368255 mustload.go:65] Loading cluster: addons-589824
	I1027 18:59:19.954045  368255 config.go:182] Loaded profile config "addons-589824": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 18:59:19.954061  368255 addons.go:606] checking whether the cluster is paused
	I1027 18:59:19.954155  368255 config.go:182] Loaded profile config "addons-589824": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 18:59:19.954174  368255 host.go:66] Checking if "addons-589824" exists ...
	I1027 18:59:19.954539  368255 cli_runner.go:164] Run: docker container inspect addons-589824 --format={{.State.Status}}
	I1027 18:59:19.972630  368255 ssh_runner.go:195] Run: systemctl --version
	I1027 18:59:19.972701  368255 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-589824
	I1027 18:59:19.991284  368255 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/addons-589824/id_rsa Username:docker}
	I1027 18:59:20.092027  368255 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 18:59:20.092098  368255 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 18:59:20.123178  368255 cri.go:89] found id: "0a17a4745cc1a6104ea6432d9fd60dac6e6abe764b5d1330d69426fa0b74a6ab"
	I1027 18:59:20.123209  368255 cri.go:89] found id: "a30f678907200483df6ff7630d767bc8daa14ce81d7f9088b61ad45ee3d0afab"
	I1027 18:59:20.123213  368255 cri.go:89] found id: "db7343377b38897cf4a8cf603f6e486663fecd5587924e1ed818db6d54bdcce6"
	I1027 18:59:20.123216  368255 cri.go:89] found id: "71e53e748e01fc8c91ffa4fb8b7865bea26bcbe65dcba958949295c6f0037da7"
	I1027 18:59:20.123219  368255 cri.go:89] found id: "56024f3c5df317e559a2fc01d91706e2a21e755612591d33569756c8b235a739"
	I1027 18:59:20.123224  368255 cri.go:89] found id: "ef768854ff28223563c69a32d2834fab10262b7e6a6963c625600582d59b9e51"
	I1027 18:59:20.123226  368255 cri.go:89] found id: "76e187a2847661d9eb59daefd89617bc458e7238cd87c5b6b4e6c6f1884d4826"
	I1027 18:59:20.123229  368255 cri.go:89] found id: "0c23d9067a021958f6e78dae17e3e314bb8f01a59a277d6d231a1c91ac243402"
	I1027 18:59:20.123232  368255 cri.go:89] found id: "6feb37f12d4a362a4be9862cfb4d525092b27f5c8806b5fe7f3e6992e40865b1"
	I1027 18:59:20.123250  368255 cri.go:89] found id: "2dc898f8fa5b3f56f21afaa0584bf9b0ee67ad474e08c141d382bf6352ffb103"
	I1027 18:59:20.123253  368255 cri.go:89] found id: "27f1c94c3f5736bca109359ef14c6315dca30f3a92e432a313912785f638d339"
	I1027 18:59:20.123256  368255 cri.go:89] found id: "b7494b1ab076bec5211fe9aa45d869fd06dce709b51652f81a21756c0087c5dc"
	I1027 18:59:20.123258  368255 cri.go:89] found id: "2f642c7cbe9094287b843be457ec991af2d6a4e3a7c89d0cef2628b88a0df390"
	I1027 18:59:20.123261  368255 cri.go:89] found id: "ca7a93241189c56d1808a8b7fb428d8057429bed2f6554b65716f5aeecd49b88"
	I1027 18:59:20.123264  368255 cri.go:89] found id: "2095fff76306861533792ed7f54dec0997d67f3656557a857ff7af3b00429cda"
	I1027 18:59:20.123276  368255 cri.go:89] found id: "eede6880efbc9e505b955efd78f6cc85e44d1edb5f142fe3df44034a4341a14f"
	I1027 18:59:20.123283  368255 cri.go:89] found id: "ba1ddd191addfbafb743bfd31989a110bd5b0f58f7479075c129e528745e7798"
	I1027 18:59:20.123288  368255 cri.go:89] found id: "abbe027d3dc3b813b338a56e8cabab82e03eb9b112b7b850abb79fefe6d06ad7"
	I1027 18:59:20.123290  368255 cri.go:89] found id: "12e10d7e88fff07d51f12a561be95b0933cdc57cc59e0f478fe8964c53f1806b"
	I1027 18:59:20.123293  368255 cri.go:89] found id: "6d05a2b6be1fb2b8475a215eb50681a592a20257978b9da0091741666c9fa5c6"
	I1027 18:59:20.123295  368255 cri.go:89] found id: "c02f8fc8e6a7392b824780b7cf27bac4f0cee905aafadcc2295bf2775ce85316"
	I1027 18:59:20.123297  368255 cri.go:89] found id: "95468d8526baeb9ed07c582a77c3593017052fb17f3ce84741a67f91794b7400"
	I1027 18:59:20.123299  368255 cri.go:89] found id: "81cd0a11514aba345e443fd708bb0a4b65a29f336aec8643a57037ceeda8aefe"
	I1027 18:59:20.123302  368255 cri.go:89] found id: "f25d173d59b5ba978f27e915fc30ff6e02ab5bba952c2af598b464a59edc1987"
	I1027 18:59:20.123304  368255 cri.go:89] found id: ""
	I1027 18:59:20.123352  368255 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 18:59:20.138433  368255 out.go:203] 
	W1027 18:59:20.139857  368255 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T18:59:20Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T18:59:20Z" level=error msg="open /run/runc: no such file or directory"
	
	W1027 18:59:20.139880  368255 out.go:285] * 
	* 
	W1027 18:59:20.144004  368255 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1027 18:59:20.145500  368255 out.go:203] 
** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-amd64 -p addons-589824 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (5.26s)

TestAddons/parallel/AmdGpuDevicePlugin (5.26s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:352: "amd-gpu-device-plugin-6nrwh" [5a9374bd-7f34-436b-aed2-97c869cd1032] Running
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 5.00388911s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-589824 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-589824 addons disable amd-gpu-device-plugin --alsologtostderr -v=1: exit status 11 (252.399843ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1027 18:59:17.593742  368020 out.go:360] Setting OutFile to fd 1 ...
	I1027 18:59:17.594042  368020 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 18:59:17.594053  368020 out.go:374] Setting ErrFile to fd 2...
	I1027 18:59:17.594058  368020 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 18:59:17.594304  368020 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-352833/.minikube/bin
	I1027 18:59:17.594636  368020 mustload.go:65] Loading cluster: addons-589824
	I1027 18:59:17.595056  368020 config.go:182] Loaded profile config "addons-589824": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 18:59:17.595077  368020 addons.go:606] checking whether the cluster is paused
	I1027 18:59:17.595187  368020 config.go:182] Loaded profile config "addons-589824": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 18:59:17.595207  368020 host.go:66] Checking if "addons-589824" exists ...
	I1027 18:59:17.595596  368020 cli_runner.go:164] Run: docker container inspect addons-589824 --format={{.State.Status}}
	I1027 18:59:17.614708  368020 ssh_runner.go:195] Run: systemctl --version
	I1027 18:59:17.614760  368020 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-589824
	I1027 18:59:17.632336  368020 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33140 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/addons-589824/id_rsa Username:docker}
	I1027 18:59:17.732101  368020 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 18:59:17.732199  368020 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 18:59:17.761907  368020 cri.go:89] found id: "0a17a4745cc1a6104ea6432d9fd60dac6e6abe764b5d1330d69426fa0b74a6ab"
	I1027 18:59:17.761938  368020 cri.go:89] found id: "a30f678907200483df6ff7630d767bc8daa14ce81d7f9088b61ad45ee3d0afab"
	I1027 18:59:17.761942  368020 cri.go:89] found id: "db7343377b38897cf4a8cf603f6e486663fecd5587924e1ed818db6d54bdcce6"
	I1027 18:59:17.761946  368020 cri.go:89] found id: "71e53e748e01fc8c91ffa4fb8b7865bea26bcbe65dcba958949295c6f0037da7"
	I1027 18:59:17.761949  368020 cri.go:89] found id: "56024f3c5df317e559a2fc01d91706e2a21e755612591d33569756c8b235a739"
	I1027 18:59:17.761953  368020 cri.go:89] found id: "ef768854ff28223563c69a32d2834fab10262b7e6a6963c625600582d59b9e51"
	I1027 18:59:17.761956  368020 cri.go:89] found id: "76e187a2847661d9eb59daefd89617bc458e7238cd87c5b6b4e6c6f1884d4826"
	I1027 18:59:17.761958  368020 cri.go:89] found id: "0c23d9067a021958f6e78dae17e3e314bb8f01a59a277d6d231a1c91ac243402"
	I1027 18:59:17.761961  368020 cri.go:89] found id: "6feb37f12d4a362a4be9862cfb4d525092b27f5c8806b5fe7f3e6992e40865b1"
	I1027 18:59:17.761970  368020 cri.go:89] found id: "2dc898f8fa5b3f56f21afaa0584bf9b0ee67ad474e08c141d382bf6352ffb103"
	I1027 18:59:17.761973  368020 cri.go:89] found id: "27f1c94c3f5736bca109359ef14c6315dca30f3a92e432a313912785f638d339"
	I1027 18:59:17.761975  368020 cri.go:89] found id: "b7494b1ab076bec5211fe9aa45d869fd06dce709b51652f81a21756c0087c5dc"
	I1027 18:59:17.761978  368020 cri.go:89] found id: "2f642c7cbe9094287b843be457ec991af2d6a4e3a7c89d0cef2628b88a0df390"
	I1027 18:59:17.761980  368020 cri.go:89] found id: "ca7a93241189c56d1808a8b7fb428d8057429bed2f6554b65716f5aeecd49b88"
	I1027 18:59:17.761982  368020 cri.go:89] found id: "2095fff76306861533792ed7f54dec0997d67f3656557a857ff7af3b00429cda"
	I1027 18:59:17.761989  368020 cri.go:89] found id: "eede6880efbc9e505b955efd78f6cc85e44d1edb5f142fe3df44034a4341a14f"
	I1027 18:59:17.761992  368020 cri.go:89] found id: "ba1ddd191addfbafb743bfd31989a110bd5b0f58f7479075c129e528745e7798"
	I1027 18:59:17.761996  368020 cri.go:89] found id: "abbe027d3dc3b813b338a56e8cabab82e03eb9b112b7b850abb79fefe6d06ad7"
	I1027 18:59:17.761998  368020 cri.go:89] found id: "12e10d7e88fff07d51f12a561be95b0933cdc57cc59e0f478fe8964c53f1806b"
	I1027 18:59:17.762000  368020 cri.go:89] found id: "6d05a2b6be1fb2b8475a215eb50681a592a20257978b9da0091741666c9fa5c6"
	I1027 18:59:17.762003  368020 cri.go:89] found id: "c02f8fc8e6a7392b824780b7cf27bac4f0cee905aafadcc2295bf2775ce85316"
	I1027 18:59:17.762005  368020 cri.go:89] found id: "95468d8526baeb9ed07c582a77c3593017052fb17f3ce84741a67f91794b7400"
	I1027 18:59:17.762007  368020 cri.go:89] found id: "81cd0a11514aba345e443fd708bb0a4b65a29f336aec8643a57037ceeda8aefe"
	I1027 18:59:17.762009  368020 cri.go:89] found id: "f25d173d59b5ba978f27e915fc30ff6e02ab5bba952c2af598b464a59edc1987"
	I1027 18:59:17.762012  368020 cri.go:89] found id: ""
	I1027 18:59:17.762068  368020 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 18:59:17.776630  368020 out.go:203] 
	W1027 18:59:17.777811  368020 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T18:59:17Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T18:59:17Z" level=error msg="open /run/runc: no such file or directory"
	
	W1027 18:59:17.777832  368020 out.go:285] * 
	* 
	W1027 18:59:17.781779  368020 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1027 18:59:17.783467  368020 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable amd-gpu-device-plugin addon: args "out/minikube-linux-amd64 -p addons-589824 addons disable amd-gpu-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/AmdGpuDevicePlugin (5.26s)
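All of the MK_ADDON_DISABLE_PAUSED failures in this run share the root cause shown above: before disabling an addon, minikube checks whether the cluster is paused by listing runc containers on the node, and on this CRI-O node /run/runc does not exist, so `sudo runc list -f json` exits with status 1 and the disable is aborted. A minimal sketch for reproducing the failing check by hand (assuming the addons-589824 profile is still running; the runc command appears verbatim in the error output above, while the crictl cross-check is an added suggestion):

	# Re-run the exact command minikube uses for its paused check:
	minikube -p addons-589824 ssh -- sudo runc list -f json
	# On this node this fails with: open /run/runc: no such file or directory

	# Cross-check with the CRI runtime, which is what actually manages the pods:
	minikube -p addons-589824 ssh -- sudo crictl ps --state running --quiet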

TestFunctional/parallel/ServiceCmdConnect (603.02s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-051715 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-051715 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-kgd6p" [93eec8db-ab42-473a-a900-c603220cfd41] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-051715 -n functional-051715
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-10-27 19:14:51.657708562 +0000 UTC m=+1115.133125780
functional_test.go:1645: (dbg) Run:  kubectl --context functional-051715 describe po hello-node-connect-7d85dfc575-kgd6p -n default
functional_test.go:1645: (dbg) kubectl --context functional-051715 describe po hello-node-connect-7d85dfc575-kgd6p -n default:
Name:             hello-node-connect-7d85dfc575-kgd6p
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-051715/192.168.49.2
Start Time:       Mon, 27 Oct 2025 19:04:51 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.12
IPs:
IP:           10.244.0.12
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kh66p (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-kh66p:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-kgd6p to functional-051715
Normal   Pulling    7m10s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m10s (x5 over 10m)     kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m10s (x5 over 10m)     kubelet            Error: ErrImagePull
Normal   BackOff    4m51s (x22 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m51s (x22 over 9m59s)  kubelet            Error: ImagePullBackOff
functional_test.go:1645: (dbg) Run:  kubectl --context functional-051715 logs hello-node-connect-7d85dfc575-kgd6p -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-051715 logs hello-node-connect-7d85dfc575-kgd6p -n default: exit status 1 (61.748728ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-kgd6p" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1645: kubectl --context functional-051715 logs hello-node-connect-7d85dfc575-kgd6p -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
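The pull failure in the events above ("short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list") also explains why the pod never started: the deployment was created with the unqualified name kicbase/echo-server, and CRI-O's short-name resolution is set to enforcing, so an ambiguous short name is rejected rather than pulled. A sketch of two manual workarounds, assuming the image lives on docker.io (the fully qualified name and the registries.conf edit are illustrative, not taken from this log):

	# Option 1: point the deployment at a fully qualified reference:
	kubectl --context functional-051715 set image deployment/hello-node-connect \
	  echo-server=docker.io/kicbase/echo-server:latest

	# Option 2: relax short-name resolution on the node by setting
	#   short-name-mode = "permissive"
	# in /etc/containers/registries.conf, then restart CRI-O:
	minikube -p functional-051715 ssh -- sudo systemctl restart crio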
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-051715 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-kgd6p
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-051715/192.168.49.2
Start Time:       Mon, 27 Oct 2025 19:04:51 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.12
IPs:
IP:           10.244.0.12
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kh66p (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-kh66p:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-kgd6p to functional-051715
Normal   Pulling    7m10s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m10s (x5 over 10m)     kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m10s (x5 over 10m)     kubelet            Error: ErrImagePull
Normal   BackOff    4m51s (x22 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m51s (x22 over 9m59s)  kubelet            Error: ImagePullBackOff

functional_test.go:1618: (dbg) Run:  kubectl --context functional-051715 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-051715 logs -l app=hello-node-connect: exit status 1 (64.060537ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-kgd6p" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1620: "kubectl --context functional-051715 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-051715 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.109.46.29
IPs:                      10.109.46.29
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31223/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
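Note the empty Endpoints: line in the service describe above: because the hello-node-connect pod never became Ready, the NodePort service has no backends, which is consistent with the connection-oriented ServiceCmd subtests failing as well. A quick way to confirm the linkage (same kubectl context as above; the expected-output comments are illustrative):

	# An unready pod is excluded from the service's endpoints:
	kubectl --context functional-051715 get endpoints hello-node-connect
	# NAME                 ENDPOINTS   AGE
	# hello-node-connect   <none>      ...

	kubectl --context functional-051715 get pods -l app=hello-node-connect -o wide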
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-051715
helpers_test.go:243: (dbg) docker inspect functional-051715:

-- stdout --
	[
	    {
	        "Id": "245b389758990e10d3084ea5be0b9652996d6a81b68e6fb5a1af578673dc1819",
	        "Created": "2025-10-27T19:02:49.56305505Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 380403,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-27T19:02:49.602898612Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/245b389758990e10d3084ea5be0b9652996d6a81b68e6fb5a1af578673dc1819/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/245b389758990e10d3084ea5be0b9652996d6a81b68e6fb5a1af578673dc1819/hostname",
	        "HostsPath": "/var/lib/docker/containers/245b389758990e10d3084ea5be0b9652996d6a81b68e6fb5a1af578673dc1819/hosts",
	        "LogPath": "/var/lib/docker/containers/245b389758990e10d3084ea5be0b9652996d6a81b68e6fb5a1af578673dc1819/245b389758990e10d3084ea5be0b9652996d6a81b68e6fb5a1af578673dc1819-json.log",
	        "Name": "/functional-051715",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-051715:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-051715",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "245b389758990e10d3084ea5be0b9652996d6a81b68e6fb5a1af578673dc1819",
	                "LowerDir": "/var/lib/docker/overlay2/a63e016658972e9b6c4fd2d135e266f0cadc5e9f86523b5fefa37c0ce0d1c975-init/diff:/var/lib/docker/overlay2/71b61ec94610a35f2d924dec358052d4c154c36b3fe219802f60246ca2dc7f45/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a63e016658972e9b6c4fd2d135e266f0cadc5e9f86523b5fefa37c0ce0d1c975/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a63e016658972e9b6c4fd2d135e266f0cadc5e9f86523b5fefa37c0ce0d1c975/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a63e016658972e9b6c4fd2d135e266f0cadc5e9f86523b5fefa37c0ce0d1c975/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-051715",
	                "Source": "/var/lib/docker/volumes/functional-051715/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-051715",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-051715",
	                "name.minikube.sigs.k8s.io": "functional-051715",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "71e350050150151d8e192a0ff7e0189d02906626c55b6c24bdc8c5242f5fe8cb",
	            "SandboxKey": "/var/run/docker/netns/71e350050150",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33150"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33151"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33154"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33152"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33153"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-051715": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "46:e6:e7:d2:d5:65",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f9e396e89094888efe86e1e68bd6c6caba30ffddbd9184972763383c039e74fb",
	                    "EndpointID": "e031d0efddaccad944a796ac5793b89a192d210bc4ce511eacc163994488780d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-051715",
	                        "245b38975899"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-051715 -n functional-051715
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-051715 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-051715 logs -n 25: (1.353613751s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                              ARGS                                                                               │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-051715 image ls                                                                                                                                      │ functional-051715 │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │ 27 Oct 25 19:04 UTC │
	│ image          │ functional-051715 image load --daemon kicbase/echo-server:functional-051715 --alsologtostderr                                                                   │ functional-051715 │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │ 27 Oct 25 19:04 UTC │
	│ image          │ functional-051715 image ls                                                                                                                                      │ functional-051715 │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │ 27 Oct 25 19:04 UTC │
	│ image          │ functional-051715 image load --daemon kicbase/echo-server:functional-051715 --alsologtostderr                                                                   │ functional-051715 │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │ 27 Oct 25 19:04 UTC │
	│ image          │ functional-051715 image ls                                                                                                                                      │ functional-051715 │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │ 27 Oct 25 19:04 UTC │
	│ image          │ functional-051715 image save kicbase/echo-server:functional-051715 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr │ functional-051715 │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │ 27 Oct 25 19:04 UTC │
	│ image          │ functional-051715 image rm kicbase/echo-server:functional-051715 --alsologtostderr                                                                              │ functional-051715 │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │ 27 Oct 25 19:04 UTC │
	│ image          │ functional-051715 image ls                                                                                                                                      │ functional-051715 │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │ 27 Oct 25 19:04 UTC │
	│ image          │ functional-051715 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr                                       │ functional-051715 │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │ 27 Oct 25 19:04 UTC │
	│ image          │ functional-051715 image save --daemon kicbase/echo-server:functional-051715 --alsologtostderr                                                                   │ functional-051715 │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │ 27 Oct 25 19:04 UTC │
	│ update-context │ functional-051715 update-context --alsologtostderr -v=2                                                                                                         │ functional-051715 │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │ 27 Oct 25 19:04 UTC │
	│ update-context │ functional-051715 update-context --alsologtostderr -v=2                                                                                                         │ functional-051715 │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │ 27 Oct 25 19:04 UTC │
	│ update-context │ functional-051715 update-context --alsologtostderr -v=2                                                                                                         │ functional-051715 │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │ 27 Oct 25 19:04 UTC │
	│ image          │ functional-051715 image ls --format short --alsologtostderr                                                                                                     │ functional-051715 │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │ 27 Oct 25 19:04 UTC │
	│ ssh            │ functional-051715 ssh pgrep buildkitd                                                                                                                           │ functional-051715 │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │                     │
	│ image          │ functional-051715 image ls --format yaml --alsologtostderr                                                                                                      │ functional-051715 │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │ 27 Oct 25 19:04 UTC │
	│ image          │ functional-051715 image build -t localhost/my-image:functional-051715 testdata/build --alsologtostderr                                                          │ functional-051715 │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │ 27 Oct 25 19:05 UTC │
	│ image          │ functional-051715 image ls --format json --alsologtostderr                                                                                                      │ functional-051715 │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │ 27 Oct 25 19:04 UTC │
	│ image          │ functional-051715 image ls --format table --alsologtostderr                                                                                                     │ functional-051715 │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │ 27 Oct 25 19:04 UTC │
	│ image          │ functional-051715 image ls                                                                                                                                      │ functional-051715 │ jenkins │ v1.37.0 │ 27 Oct 25 19:05 UTC │ 27 Oct 25 19:05 UTC │
	│ service        │ functional-051715 service list                                                                                                                                  │ functional-051715 │ jenkins │ v1.37.0 │ 27 Oct 25 19:14 UTC │ 27 Oct 25 19:14 UTC │
	│ service        │ functional-051715 service list -o json                                                                                                                          │ functional-051715 │ jenkins │ v1.37.0 │ 27 Oct 25 19:14 UTC │ 27 Oct 25 19:14 UTC │
	│ service        │ functional-051715 service --namespace=default --https --url hello-node                                                                                          │ functional-051715 │ jenkins │ v1.37.0 │ 27 Oct 25 19:14 UTC │                     │
	│ service        │ functional-051715 service hello-node --url --format={{.IP}}                                                                                                     │ functional-051715 │ jenkins │ v1.37.0 │ 27 Oct 25 19:14 UTC │                     │
	│ service        │ functional-051715 service hello-node --url                                                                                                                      │ functional-051715 │ jenkins │ v1.37.0 │ 27 Oct 25 19:14 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 19:04:50
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 19:04:50.243467  394873 out.go:360] Setting OutFile to fd 1 ...
	I1027 19:04:50.243771  394873 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:04:50.243782  394873 out.go:374] Setting ErrFile to fd 2...
	I1027 19:04:50.243789  394873 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:04:50.244037  394873 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-352833/.minikube/bin
	I1027 19:04:50.244568  394873 out.go:368] Setting JSON to false
	I1027 19:04:50.245609  394873 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6439,"bootTime":1761585451,"procs":242,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1027 19:04:50.245712  394873 start.go:141] virtualization: kvm guest
	I1027 19:04:50.247854  394873 out.go:179] * [functional-051715] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1027 19:04:50.249405  394873 notify.go:220] Checking for updates...
	I1027 19:04:50.249439  394873 out.go:179]   - MINIKUBE_LOCATION=21801
	I1027 19:04:50.251087  394873 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 19:04:50.252645  394873 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21801-352833/kubeconfig
	I1027 19:04:50.253937  394873 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21801-352833/.minikube
	I1027 19:04:50.255321  394873 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1027 19:04:50.256887  394873 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 19:04:50.258769  394873 config.go:182] Loaded profile config "functional-051715": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:04:50.259257  394873 driver.go:421] Setting default libvirt URI to qemu:///system
	I1027 19:04:50.285914  394873 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1027 19:04:50.286023  394873 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 19:04:50.346662  394873 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-10-27 19:04:50.33539859 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 19:04:50.346783  394873 docker.go:318] overlay module found
	I1027 19:04:50.348873  394873 out.go:179] * Using the docker driver based on existing profile
	I1027 19:04:50.350314  394873 start.go:305] selected driver: docker
	I1027 19:04:50.350347  394873 start.go:925] validating driver "docker" against &{Name:functional-051715 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-051715 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mount
Options:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 19:04:50.350462  394873 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 19:04:50.350572  394873 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 19:04:50.408994  394873 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-10-27 19:04:50.39907042 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 19:04:50.409675  394873 cni.go:84] Creating CNI manager for ""
	I1027 19:04:50.409738  394873 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 19:04:50.409781  394873 start.go:349] cluster config:
	{Name:functional-051715 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-051715 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimiz
ations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 19:04:50.411638  394873 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Oct 27 19:04:55 functional-051715 crio[3574]: time="2025-10-27T19:04:55.471572068Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-051715 found" id=179c72c8-096e-48fa-853d-cf851a64faba name=/runtime.v1.ImageService/ImageStatus
	Oct 27 19:04:57 functional-051715 crio[3574]: time="2025-10-27T19:04:57.882151987Z" level=info msg="Stopping pod sandbox: d1b95bb0fc52961847f31b022529112be4b51d3103d37835d74c13049c6b5446" id=3dfdd5b2-783f-4ed5-8d7a-4377cbdf5062 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 27 19:04:57 functional-051715 crio[3574]: time="2025-10-27T19:04:57.882258155Z" level=info msg="Stopped pod sandbox (already stopped): d1b95bb0fc52961847f31b022529112be4b51d3103d37835d74c13049c6b5446" id=3dfdd5b2-783f-4ed5-8d7a-4377cbdf5062 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 27 19:04:57 functional-051715 crio[3574]: time="2025-10-27T19:04:57.882758747Z" level=info msg="Removing pod sandbox: d1b95bb0fc52961847f31b022529112be4b51d3103d37835d74c13049c6b5446" id=dc65d5c6-eeb7-47ce-8534-5924909697a0 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 27 19:04:57 functional-051715 crio[3574]: time="2025-10-27T19:04:57.886320533Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 27 19:04:57 functional-051715 crio[3574]: time="2025-10-27T19:04:57.886404268Z" level=info msg="Removed pod sandbox: d1b95bb0fc52961847f31b022529112be4b51d3103d37835d74c13049c6b5446" id=dc65d5c6-eeb7-47ce-8534-5924909697a0 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 27 19:04:57 functional-051715 crio[3574]: time="2025-10-27T19:04:57.88698873Z" level=info msg="Stopping pod sandbox: 5207b853b41a543baa6288af3c26e0bf07fd4bb6e7c82728dedde36ccde2fa34" id=06e96864-1e9b-4600-9e2c-8ff142281a87 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 27 19:04:57 functional-051715 crio[3574]: time="2025-10-27T19:04:57.887042184Z" level=info msg="Stopped pod sandbox (already stopped): 5207b853b41a543baa6288af3c26e0bf07fd4bb6e7c82728dedde36ccde2fa34" id=06e96864-1e9b-4600-9e2c-8ff142281a87 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 27 19:04:57 functional-051715 crio[3574]: time="2025-10-27T19:04:57.887474608Z" level=info msg="Removing pod sandbox: 5207b853b41a543baa6288af3c26e0bf07fd4bb6e7c82728dedde36ccde2fa34" id=6bff576b-97f1-43d7-bed4-28cc9abc87e2 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 27 19:04:57 functional-051715 crio[3574]: time="2025-10-27T19:04:57.890971963Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 27 19:04:57 functional-051715 crio[3574]: time="2025-10-27T19:04:57.891084564Z" level=info msg="Removed pod sandbox: 5207b853b41a543baa6288af3c26e0bf07fd4bb6e7c82728dedde36ccde2fa34" id=6bff576b-97f1-43d7-bed4-28cc9abc87e2 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 27 19:04:57 functional-051715 crio[3574]: time="2025-10-27T19:04:57.891596719Z" level=info msg="Stopping pod sandbox: bfc98913a3483bcb83d63a696d91d67c1155ee01f279c8b9c4d5c88c69240a82" id=d7b36e98-fe76-4b8e-a456-76a465228e30 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 27 19:04:57 functional-051715 crio[3574]: time="2025-10-27T19:04:57.891652091Z" level=info msg="Stopped pod sandbox (already stopped): bfc98913a3483bcb83d63a696d91d67c1155ee01f279c8b9c4d5c88c69240a82" id=d7b36e98-fe76-4b8e-a456-76a465228e30 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 27 19:04:57 functional-051715 crio[3574]: time="2025-10-27T19:04:57.892007307Z" level=info msg="Removing pod sandbox: bfc98913a3483bcb83d63a696d91d67c1155ee01f279c8b9c4d5c88c69240a82" id=234cbaae-715d-42ea-be90-c9f7c5053571 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 27 19:04:57 functional-051715 crio[3574]: time="2025-10-27T19:04:57.894800092Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 27 19:04:57 functional-051715 crio[3574]: time="2025-10-27T19:04:57.894875115Z" level=info msg="Removed pod sandbox: bfc98913a3483bcb83d63a696d91d67c1155ee01f279c8b9c4d5c88c69240a82" id=234cbaae-715d-42ea-be90-c9f7c5053571 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 27 19:05:04 functional-051715 crio[3574]: time="2025-10-27T19:05:04.897401306Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=6a8a1f8f-e1ec-409b-b049-3bb68def4779 name=/runtime.v1.ImageService/PullImage
	Oct 27 19:05:05 functional-051715 crio[3574]: time="2025-10-27T19:05:05.898606758Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=c17f8af6-5165-481e-9254-0110ceb5e612 name=/runtime.v1.ImageService/PullImage
	Oct 27 19:05:27 functional-051715 crio[3574]: time="2025-10-27T19:05:27.897728665Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=1cf5f0e5-3d57-4dd3-9cc5-efc6d0be0ff8 name=/runtime.v1.ImageService/PullImage
	Oct 27 19:06:00 functional-051715 crio[3574]: time="2025-10-27T19:06:00.897098259Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=53f91e9e-acee-4896-b048-52a1ecca5bce name=/runtime.v1.ImageService/PullImage
	Oct 27 19:06:15 functional-051715 crio[3574]: time="2025-10-27T19:06:15.897094629Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=770e7039-280e-44a0-bada-c6915d70c11b name=/runtime.v1.ImageService/PullImage
	Oct 27 19:07:29 functional-051715 crio[3574]: time="2025-10-27T19:07:29.896435268Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=8799aa38-b6ad-4351-89d3-5753c6bd7235 name=/runtime.v1.ImageService/PullImage
	Oct 27 19:07:41 functional-051715 crio[3574]: time="2025-10-27T19:07:41.897383306Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=a250bde7-2192-4a59-bac0-59dae2348512 name=/runtime.v1.ImageService/PullImage
	Oct 27 19:10:19 functional-051715 crio[3574]: time="2025-10-27T19:10:19.897960583Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=7cb65436-41bc-45c3-b43e-efdf2a0ecd08 name=/runtime.v1.ImageService/PullImage
	Oct 27 19:10:27 functional-051715 crio[3574]: time="2025-10-27T19:10:27.897158364Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=7ee9f3ee-28ac-4012-a45c-3bbfb090d2d5 name=/runtime.v1.ImageService/PullImage
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	0fc9bd7b7105d       docker.io/library/nginx@sha256:7e034cabf67d95246a996a3b92ad1c49c20d81526c9d7ba982aead057a0606e8                  10 minutes ago      Running             myfrontend                  0                   fcc976d00e8a4       sp-pod                                       default
	478c824828a56       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029         10 minutes ago      Running             kubernetes-dashboard        0                   21cc83e510af0       kubernetes-dashboard-855c9754f9-jqrgm        kubernetes-dashboard
	c03df84b23ca5       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   10 minutes ago      Running             dashboard-metrics-scraper   0                   6f1e851b4e696       dashboard-metrics-scraper-77bf4d6c4c-ggtwl   kubernetes-dashboard
	6c277da7b15bf       docker.io/library/nginx@sha256:b03ccb7431a2e3172f5cbae96d82bd792935f33ecb88fbf2940559e475745c4e                  10 minutes ago      Running             nginx                       0                   23f5a1cfb2089       nginx-svc                                    default
	d376911fa2466       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998              10 minutes ago      Exited              mount-munger                0                   071992cfedf35       busybox-mount                                default
	ca007ace6ab5b       docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da                  10 minutes ago      Running             mysql                       0                   fd297735edf58       mysql-5bb876957f-bdflz                       default
	2f92f53b58529       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                 10 minutes ago      Running             kube-apiserver              0                   f082dca8c42da       kube-apiserver-functional-051715             kube-system
	a9eedc1ccb210       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                 10 minutes ago      Running             kube-controller-manager     1                   24caf4b823e86       kube-controller-manager-functional-051715    kube-system
	fab941cd6e517       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                 10 minutes ago      Running             etcd                        1                   4fef00b12df63       etcd-functional-051715                       kube-system
	365d419dc66a8       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                 11 minutes ago      Running             kube-proxy                  1                   20930f1668e19       kube-proxy-wvgdt                             kube-system
	ae05964b7150f       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                 11 minutes ago      Running             kube-scheduler              1                   31a91ac724d34       kube-scheduler-functional-051715             kube-system
	e41295fadac04       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 11 minutes ago      Running             storage-provisioner         1                   fb359caee78de       storage-provisioner                          kube-system
	f4863001cd86c       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 11 minutes ago      Running             kindnet-cni                 1                   9be107053b479       kindnet-crk7f                                kube-system
	1a7b88475261e       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 11 minutes ago      Running             coredns                     1                   8c085390b7099       coredns-66bc5c9577-vh4lq                     kube-system
	2f899ea37ab03       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 11 minutes ago      Exited              coredns                     0                   8c085390b7099       coredns-66bc5c9577-vh4lq                     kube-system
	c4e0eb5c03697       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 11 minutes ago      Exited              storage-provisioner         0                   fb359caee78de       storage-provisioner                          kube-system
	d77c646529991       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 11 minutes ago      Exited              kindnet-cni                 0                   9be107053b479       kindnet-crk7f                                kube-system
	721621ca8825c       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                 11 minutes ago      Exited              kube-proxy                  0                   20930f1668e19       kube-proxy-wvgdt                             kube-system
	d58572f13c4a4       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                 11 minutes ago      Exited              kube-scheduler              0                   31a91ac724d34       kube-scheduler-functional-051715             kube-system
	0fb7b63064bb1       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                 11 minutes ago      Exited              etcd                        0                   4fef00b12df63       etcd-functional-051715                       kube-system
	05076c61af4ab       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                 11 minutes ago      Exited              kube-controller-manager     0                   24caf4b823e86       kube-controller-manager-functional-051715    kube-system
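	
	The table above follows CRI-O's crictl listing format. Assuming SSH access to the minikube node for this profile, a comparable view (including exited containers) can be reproduced with:
	
	    minikube -p functional-051715 ssh -- sudo crictl ps -a
	
	Note the Running/Exited pairs sharing a POD ID (e.g. coredns attempts 1 and 0 in pod 8c085390b7099): containers were restarted in place inside the same sandbox, consistent with the control-plane restart visible in the etcd and kube-proxy sections below.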
	
	
	==> coredns [1a7b88475261e22c3751a981b1676496c7c55efecfcb421ee8efb024610d5051] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:37762 - 30793 "HINFO IN 2587204031997267901.3084167716414586420. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.081389097s
	
	
	==> coredns [2f899ea37ab03cbe66639e2df5d9167fe003443da92d97079871f200865ba309] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53757 - 15765 "HINFO IN 4749524191109287218.3120153438719989164. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.513478856s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
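	
	Both CoreDNS dumps report the identical configuration SHA512, so the restart did not change DNS config; the second (exited) instance ends with a clean SIGTERM/lameduck shutdown. Assuming kubectl access to this cluster, the two attempts can be pulled directly (--previous returns the exited attempt's log):
	
	    kubectl -n kube-system logs coredns-66bc5c9577-vh4lq             # running attempt 1
	    kubectl -n kube-system logs coredns-66bc5c9577-vh4lq --previous  # exited attempt 0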
	
	
	==> describe nodes <==
	Name:               functional-051715
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-051715
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a09c31caecd3126cc8337ca95e52380a56f5a0f
	                    minikube.k8s.io/name=functional-051715
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_27T19_03_03_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Oct 2025 19:03:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-051715
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Oct 2025 19:14:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Oct 2025 19:12:20 +0000   Mon, 27 Oct 2025 19:02:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Oct 2025 19:12:20 +0000   Mon, 27 Oct 2025 19:02:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Oct 2025 19:12:20 +0000   Mon, 27 Oct 2025 19:02:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Oct 2025 19:12:20 +0000   Mon, 27 Oct 2025 19:03:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-051715
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                a9e6f55f-351e-4a6f-8958-3a77ac73dc7f
	  Boot ID:                    811bd29c-e64e-4acc-9427-bab1f7caed93
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-bv2lb                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-connect-7d85dfc575-kgd6p           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-5bb876957f-bdflz                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     10m
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-66bc5c9577-vh4lq                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	  kube-system                 etcd-functional-051715                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         11m
	  kube-system                 kindnet-crk7f                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-functional-051715              250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-051715     200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-wvgdt                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-functional-051715              100m (1%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-ggtwl    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-jqrgm         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 11m                kube-proxy       
	  Normal  Starting                 10m                kube-proxy       
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  11m                kubelet          Node functional-051715 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m                kubelet          Node functional-051715 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m                kubelet          Node functional-051715 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           11m                node-controller  Node functional-051715 event: Registered Node functional-051715 in Controller
	  Normal  NodeReady                11m                kubelet          Node functional-051715 status is now: NodeReady
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-051715 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-051715 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node functional-051715 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           10m                node-controller  Node functional-051715 event: Registered Node functional-051715 in Controller
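	
	This section is standard kubectl output and, assuming the kubeconfig for this profile, can be regenerated with:
	
	    kubectl describe node functional-051715
	
	The duplicated Starting/NodeHasSufficient* events (once at 11m, then x8 at 10m) match the kubelet restart implied by the attempt-1 containers in the table above.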
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5a 23 52 43 9a ba 08 06
	[  +0.000398] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 12 50 95 0e df 53 08 06
	[Oct27 18:59] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +1.017295] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +1.023893] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +1.023882] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +1.023891] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +1.023851] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +2.047849] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +4.031592] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +8.319143] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[ +16.382183] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[Oct27 19:00] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
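	
	The repeated "martian source" messages flag packets whose source address is implausible for the receiving interface (here 127.0.0.1 arriving on eth0); the kernel logs them when log_martians is set, and in this containerized CI setup they are most likely hairpin-traffic noise rather than a test failure. The logging setting can be checked with:
	
	    sysctl net.ipv4.conf.all.log_martians net.ipv4.conf.eth0.log_martians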
	
	
	==> etcd [0fb7b63064bb13b233a212a8f4bbce4621e12356f21cf465730d7b1239d3bc7e] <==
	{"level":"warn","ts":"2025-10-27T19:03:00.130370Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:03:00.137096Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:03:00.157585Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52338","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:03:00.161266Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:03:00.167850Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:03:00.174737Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:03:00.219083Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52388","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-27T19:03:55.783428Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-27T19:03:55.783517Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-051715","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-10-27T19:03:55.783600Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-27T19:03:55.783670Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-27T19:03:55.785247Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-27T19:03:55.785338Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"warn","ts":"2025-10-27T19:03:55.785318Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-27T19:03:55.785379Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"info","ts":"2025-10-27T19:03:55.785393Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"error","ts":"2025-10-27T19:03:55.785391Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-27T19:03:55.785404Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-10-27T19:03:55.785318Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-27T19:03:55.785412Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-27T19:03:55.785418Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-27T19:03:55.787807Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-10-27T19:03:55.787893Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-27T19:03:55.787932Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-10-27T19:03:55.787946Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-051715","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [fab941cd6e517f5fef0dfef83d7cd699344f41c331fbb97eea3d2dd563a55cdc] <==
	{"level":"warn","ts":"2025-10-27T19:03:59.436241Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:03:59.443604Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:03:59.450703Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:03:59.457748Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:03:59.467337Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:03:59.474124Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:03:59.480869Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:03:59.487699Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:03:59.494272Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:03:59.501170Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:03:59.508664Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:03:59.515127Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:03:59.521480Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:03:59.527802Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:03:59.534813Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:03:59.541105Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:03:59.559296Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48622","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:03:59.565905Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:03:59.573646Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:03:59.623195Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:04:33.177694Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"141.982184ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/busybox-mount\" limit:1 ","response":"range_response_count:1 size:3254"}
	{"level":"info","ts":"2025-10-27T19:04:33.177818Z","caller":"traceutil/trace.go:172","msg":"trace[1051402177] range","detail":"{range_begin:/registry/pods/default/busybox-mount; range_end:; response_count:1; response_revision:660; }","duration":"142.126794ms","start":"2025-10-27T19:04:33.035676Z","end":"2025-10-27T19:04:33.177803Z","steps":["trace[1051402177] 'range keys from in-memory index tree'  (duration: 141.82586ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T19:13:59.117193Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1143}
	{"level":"info","ts":"2025-10-27T19:13:59.137929Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1143,"took":"20.357103ms","hash":940986,"current-db-size-bytes":3452928,"current-db-size":"3.5 MB","current-db-size-in-use-bytes":1511424,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2025-10-27T19:13:59.137990Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":940986,"revision":1143,"compact-revision":-1}
	
	
	==> kernel <==
	 19:14:53 up  1:57,  0 user,  load average: 0.19, 0.25, 0.45
	Linux functional-051715 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
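	
	This section is three host probes concatenated; on the node it can be reproduced with something like:
	
	    uptime && uname -a && grep PRETTY_NAME /etc/os-release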
	
	
	==> kindnet [d77c64652999117a73e661a7c90f118015f729efb3fc27ae6b2d2d611482b4ba] <==
	I1027 19:03:09.075170       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1027 19:03:09.075687       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1027 19:03:09.075878       1 main.go:148] setting mtu 1500 for CNI 
	I1027 19:03:09.075900       1 main.go:178] kindnetd IP family: "ipv4"
	I1027 19:03:09.075925       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-27T19:03:09Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1027 19:03:09.373506       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1027 19:03:09.375359       1 controller.go:381] "Waiting for informer caches to sync"
	I1027 19:03:09.375408       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1027 19:03:09.375544       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1027 19:03:09.675500       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1027 19:03:09.675525       1 metrics.go:72] Registering metrics
	I1027 19:03:09.675579       1 controller.go:711] "Syncing nftables rules"
	I1027 19:03:19.375184       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 19:03:19.375298       1 main.go:301] handling current node
	I1027 19:03:29.381234       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 19:03:29.381273       1 main.go:301] handling current node
	I1027 19:03:39.374263       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 19:03:39.374307       1 main.go:301] handling current node
	
	
	==> kindnet [f4863001cd86ced6efe00d675ce36606638b99b1701ca96e8279cb1b84247e72] <==
	I1027 19:12:45.607283       1 main.go:301] handling current node
	I1027 19:12:55.606289       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 19:12:55.606338       1 main.go:301] handling current node
	I1027 19:13:05.603416       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 19:13:05.603463       1 main.go:301] handling current node
	I1027 19:13:15.603580       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 19:13:15.603629       1 main.go:301] handling current node
	I1027 19:13:25.602726       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 19:13:25.602772       1 main.go:301] handling current node
	I1027 19:13:35.602969       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 19:13:35.603014       1 main.go:301] handling current node
	I1027 19:13:45.602597       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 19:13:45.602633       1 main.go:301] handling current node
	I1027 19:13:55.603110       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 19:13:55.603171       1 main.go:301] handling current node
	I1027 19:14:05.603551       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 19:14:05.603585       1 main.go:301] handling current node
	I1027 19:14:15.603238       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 19:14:15.603302       1 main.go:301] handling current node
	I1027 19:14:25.603060       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 19:14:25.603095       1 main.go:301] handling current node
	I1027 19:14:35.602683       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 19:14:35.602721       1 main.go:301] handling current node
	I1027 19:14:45.602644       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1027 19:14:45.602691       1 main.go:301] handling current node
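	
	The second kindnet instance shows only the steady-state reconcile loop: one "Handling node" pass every 10 seconds with no rule changes. Assuming the usual app=kindnet label on the DaemonSet pods (an assumption of this sketch), the same stream is available via:
	
	    kubectl -n kube-system logs -l app=kindnet --tail=20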
	
	
	==> kube-apiserver [2f92f53b58529c91f29f73f89b6f80fbd4dfb153df5b166977478faf23082043] <==
	I1027 19:04:00.909081       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1027 19:04:00.909081       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1027 19:04:00.983350       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1027 19:04:01.290582       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1027 19:04:01.291905       1 controller.go:667] quota admission added evaluator for: endpoints
	I1027 19:04:01.298329       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1027 19:04:01.754685       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1027 19:04:01.854940       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1027 19:04:01.912385       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1027 19:04:01.918911       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1027 19:04:03.910989       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1027 19:04:20.345323       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.105.99.171"}
	I1027 19:04:24.202356       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.101.197.161"}
	I1027 19:04:24.971072       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.96.96.201"}
	E1027 19:04:39.113251       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:35624: use of closed network connection
	E1027 19:04:40.349601       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:35642: use of closed network connection
	I1027 19:04:40.786461       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.98.98.185"}
	E1027 19:04:42.597293       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:35706: use of closed network connection
	I1027 19:04:43.535538       1 controller.go:667] quota admission added evaluator for: namespaces
	I1027 19:04:43.643751       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.105.67.4"}
	I1027 19:04:43.656600       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.245.211"}
	E1027 19:04:44.258220       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:35764: use of closed network connection
	I1027 19:04:51.316427       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.109.46.29"}
	E1027 19:04:57.325881       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:53594: use of closed network connection
	I1027 19:14:00.010932       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [05076c61af4ab9b53321bba67ea10018e108366bd63a0c3adde1f75f825416ae] <==
	I1027 19:03:07.643439       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1027 19:03:07.643478       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1027 19:03:07.643526       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1027 19:03:07.643566       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1027 19:03:07.643685       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-051715"
	I1027 19:03:07.643742       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1027 19:03:07.643801       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1027 19:03:07.643816       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1027 19:03:07.643904       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1027 19:03:07.643958       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1027 19:03:07.644320       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1027 19:03:07.644355       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1027 19:03:07.644382       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1027 19:03:07.644438       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1027 19:03:07.645342       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1027 19:03:07.645344       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1027 19:03:07.645434       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1027 19:03:07.649199       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 19:03:07.649266       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1027 19:03:07.650901       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1027 19:03:07.656211       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1027 19:03:07.671614       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1027 19:03:07.678954       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1027 19:03:07.685369       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 19:03:22.645395       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-controller-manager [a9eedc1ccb2109a87a2aea6540fc373696611eb9f1889e1c83f55d117e881710] <==
	I1027 19:04:03.406061       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1027 19:04:03.406115       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1027 19:04:03.407183       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1027 19:04:03.408390       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1027 19:04:03.410821       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1027 19:04:03.410825       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 19:04:03.412036       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1027 19:04:03.412588       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1027 19:04:03.412689       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1027 19:04:03.412825       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1027 19:04:03.412834       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1027 19:04:03.412843       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1027 19:04:03.416635       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1027 19:04:03.416694       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1027 19:04:03.416788       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1027 19:04:03.418357       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1027 19:04:03.421121       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1027 19:04:03.421196       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1027 19:04:03.424548       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1027 19:04:43.585916       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1027 19:04:43.591639       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1027 19:04:43.591664       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1027 19:04:43.595389       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1027 19:04:43.597753       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1027 19:04:43.600963       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
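	
	The repeated 'serviceaccount "kubernetes-dashboard" not found' errors at 19:04:43 are a creation-order race: the dashboard ReplicaSets were applied before their ServiceAccount, and the controller retried until it existed. Both dashboard pods are Running in the container table above, so the race resolved; assuming kubectl access, this can be confirmed with:
	
	    kubectl -n kubernetes-dashboard get serviceaccounts,deployments,pods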
	
	
	==> kube-proxy [365d419dc66a8221ff22ac0d10b81df19be19785e1b72340ea20a52e846f75e5] <==
	I1027 19:03:46.231507       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E1027 19:03:46.232561       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-051715&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1027 19:03:47.545253       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-051715&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1027 19:03:49.549377       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-051715&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1027 19:03:55.722277       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-051715&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1027 19:04:04.431993       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1027 19:04:04.432034       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1027 19:04:04.432126       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1027 19:04:04.454690       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1027 19:04:04.454758       1 server_linux.go:132] "Using iptables Proxier"
	I1027 19:04:04.461985       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1027 19:04:04.462366       1 server.go:527] "Version info" version="v1.34.1"
	I1027 19:04:04.462393       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 19:04:04.463969       1 config.go:200] "Starting service config controller"
	I1027 19:04:04.463998       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1027 19:04:04.464001       1 config.go:403] "Starting serviceCIDR config controller"
	I1027 19:04:04.464023       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1027 19:04:04.464096       1 config.go:309] "Starting node config controller"
	I1027 19:04:04.464102       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1027 19:04:04.464081       1 config.go:106] "Starting endpoint slice config controller"
	I1027 19:04:04.464161       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1027 19:04:04.564237       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1027 19:04:04.564282       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1027 19:04:04.564259       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1027 19:04:04.565472       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
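	
	The restarted kube-proxy's "connection refused" reflector errors span 19:03:46-19:03:55, while the control plane was restarting (the old etcd closed at 19:03:55 and the new apiserver began admitting at 19:04:00), and they stop once caches sync at 19:04:04. A quick way to confirm the apiserver is serving again is its readiness endpoint:
	
	    kubectl get --raw '/readyz?verbose' | tail -n 3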
	
	
	==> kube-proxy [721621ca8825cb1e7ccd38c083123c015932656e67e91f27c4bfe06b0eb372f4] <==
	I1027 19:03:08.864523       1 server_linux.go:53] "Using iptables proxy"
	I1027 19:03:08.927028       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1027 19:03:09.027534       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1027 19:03:09.027585       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1027 19:03:09.027668       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1027 19:03:09.048154       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1027 19:03:09.048232       1 server_linux.go:132] "Using iptables Proxier"
	I1027 19:03:09.054474       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1027 19:03:09.054929       1 server.go:527] "Version info" version="v1.34.1"
	I1027 19:03:09.054967       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 19:03:09.056407       1 config.go:403] "Starting serviceCIDR config controller"
	I1027 19:03:09.056432       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1027 19:03:09.056437       1 config.go:200] "Starting service config controller"
	I1027 19:03:09.056465       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1027 19:03:09.056497       1 config.go:106] "Starting endpoint slice config controller"
	I1027 19:03:09.056505       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1027 19:03:09.056522       1 config.go:309] "Starting node config controller"
	I1027 19:03:09.056538       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1027 19:03:09.056545       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1027 19:03:09.157523       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1027 19:03:09.157530       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1027 19:03:09.157655       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [ae05964b7150f9b1912680cf283b31b63deb90e6d192234608e740fc3417a483] <==
	E1027 19:03:51.398554       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.49.2:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1027 19:03:51.478636       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1027 19:03:51.485249       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1027 19:03:51.658666       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1027 19:03:51.990623       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1027 19:03:54.573669       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8441/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1027 19:03:54.623209       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1027 19:03:54.645784       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1027 19:03:54.926447       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1027 19:03:55.169318       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1027 19:03:55.361698       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.49.2:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1027 19:03:55.609520       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1027 19:03:55.841990       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1027 19:03:55.996564       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1027 19:03:56.293957       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1027 19:03:56.312700       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1027 19:03:56.344359       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.49.2:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1027 19:03:56.351036       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1027 19:03:56.490880       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1027 19:03:56.513460       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1027 19:03:56.617708       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8441/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1027 19:03:56.658822       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1027 19:03:56.884274       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1027 19:03:56.964502       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1027 19:04:05.452591       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [d58572f13c4a482f62b0cd52cf03092e21343183c9ed749c6beeaf4b3e1d0dbe] <==
	E1027 19:03:00.710284       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1027 19:03:00.710331       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1027 19:03:00.710413       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1027 19:03:00.710663       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1027 19:03:00.710857       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1027 19:03:00.710857       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1027 19:03:00.711003       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1027 19:03:00.711177       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1027 19:03:01.610839       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1027 19:03:01.641988       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1027 19:03:01.654257       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1027 19:03:01.680353       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1027 19:03:01.704114       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1027 19:03:01.770692       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1027 19:03:01.784935       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1027 19:03:01.804259       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1027 19:03:01.859790       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1027 19:03:01.938270       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	I1027 19:03:04.705999       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 19:03:45.161781       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1027 19:03:45.162510       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1027 19:03:45.162020       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 19:03:45.161969       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1027 19:03:45.166169       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1027 19:03:45.166213       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Oct 27 19:12:12 functional-051715 kubelet[4114]: E1027 19:12:12.896576    4114 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-bv2lb" podUID="042f92b9-330e-43cb-ba7d-faddade6b34b"
	Oct 27 19:12:19 functional-051715 kubelet[4114]: E1027 19:12:19.896799    4114 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-kgd6p" podUID="93eec8db-ab42-473a-a900-c603220cfd41"
	Oct 27 19:12:23 functional-051715 kubelet[4114]: E1027 19:12:23.898272    4114 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-bv2lb" podUID="042f92b9-330e-43cb-ba7d-faddade6b34b"
	Oct 27 19:12:33 functional-051715 kubelet[4114]: E1027 19:12:33.896726    4114 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-kgd6p" podUID="93eec8db-ab42-473a-a900-c603220cfd41"
	Oct 27 19:12:34 functional-051715 kubelet[4114]: E1027 19:12:34.896924    4114 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-bv2lb" podUID="042f92b9-330e-43cb-ba7d-faddade6b34b"
	Oct 27 19:12:45 functional-051715 kubelet[4114]: E1027 19:12:45.896918    4114 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-kgd6p" podUID="93eec8db-ab42-473a-a900-c603220cfd41"
	Oct 27 19:12:47 functional-051715 kubelet[4114]: E1027 19:12:47.897591    4114 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-bv2lb" podUID="042f92b9-330e-43cb-ba7d-faddade6b34b"
	Oct 27 19:12:56 functional-051715 kubelet[4114]: E1027 19:12:56.896451    4114 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-kgd6p" podUID="93eec8db-ab42-473a-a900-c603220cfd41"
	Oct 27 19:13:02 functional-051715 kubelet[4114]: E1027 19:13:02.896546    4114 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-bv2lb" podUID="042f92b9-330e-43cb-ba7d-faddade6b34b"
	Oct 27 19:13:11 functional-051715 kubelet[4114]: E1027 19:13:11.896868    4114 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-kgd6p" podUID="93eec8db-ab42-473a-a900-c603220cfd41"
	Oct 27 19:13:13 functional-051715 kubelet[4114]: E1027 19:13:13.896682    4114 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-bv2lb" podUID="042f92b9-330e-43cb-ba7d-faddade6b34b"
	Oct 27 19:13:23 functional-051715 kubelet[4114]: E1027 19:13:23.896084    4114 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-kgd6p" podUID="93eec8db-ab42-473a-a900-c603220cfd41"
	Oct 27 19:13:27 functional-051715 kubelet[4114]: E1027 19:13:27.897262    4114 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-bv2lb" podUID="042f92b9-330e-43cb-ba7d-faddade6b34b"
	Oct 27 19:13:37 functional-051715 kubelet[4114]: E1027 19:13:37.897075    4114 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-kgd6p" podUID="93eec8db-ab42-473a-a900-c603220cfd41"
	Oct 27 19:13:39 functional-051715 kubelet[4114]: E1027 19:13:39.896568    4114 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-bv2lb" podUID="042f92b9-330e-43cb-ba7d-faddade6b34b"
	Oct 27 19:13:48 functional-051715 kubelet[4114]: E1027 19:13:48.896176    4114 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-kgd6p" podUID="93eec8db-ab42-473a-a900-c603220cfd41"
	Oct 27 19:13:51 functional-051715 kubelet[4114]: E1027 19:13:51.896955    4114 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-bv2lb" podUID="042f92b9-330e-43cb-ba7d-faddade6b34b"
	Oct 27 19:14:02 functional-051715 kubelet[4114]: E1027 19:14:02.896813    4114 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-kgd6p" podUID="93eec8db-ab42-473a-a900-c603220cfd41"
	Oct 27 19:14:04 functional-051715 kubelet[4114]: E1027 19:14:04.896784    4114 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-bv2lb" podUID="042f92b9-330e-43cb-ba7d-faddade6b34b"
	Oct 27 19:14:16 functional-051715 kubelet[4114]: E1027 19:14:16.896718    4114 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-kgd6p" podUID="93eec8db-ab42-473a-a900-c603220cfd41"
	Oct 27 19:14:19 functional-051715 kubelet[4114]: E1027 19:14:19.896560    4114 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-bv2lb" podUID="042f92b9-330e-43cb-ba7d-faddade6b34b"
	Oct 27 19:14:27 functional-051715 kubelet[4114]: E1027 19:14:27.897430    4114 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-kgd6p" podUID="93eec8db-ab42-473a-a900-c603220cfd41"
	Oct 27 19:14:34 functional-051715 kubelet[4114]: E1027 19:14:34.896548    4114 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-bv2lb" podUID="042f92b9-330e-43cb-ba7d-faddade6b34b"
	Oct 27 19:14:40 functional-051715 kubelet[4114]: E1027 19:14:40.896834    4114 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-kgd6p" podUID="93eec8db-ab42-473a-a900-c603220cfd41"
	Oct 27 19:14:47 functional-051715 kubelet[4114]: E1027 19:14:47.898614    4114 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-bv2lb" podUID="042f92b9-330e-43cb-ba7d-faddade6b34b"
	
	
	==> kubernetes-dashboard [478c824828a56ff9eabce542bcd39b0fb98a288c3fb2c5aad76edd8ff08fd050] <==
	2025/10/27 19:04:50 Starting overwatch
	2025/10/27 19:04:50 Using namespace: kubernetes-dashboard
	2025/10/27 19:04:50 Using in-cluster config to connect to apiserver
	2025/10/27 19:04:50 Using secret token for csrf signing
	2025/10/27 19:04:50 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/27 19:04:50 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/27 19:04:50 Successful initial request to the apiserver, version: v1.34.1
	2025/10/27 19:04:50 Generating JWE encryption key
	2025/10/27 19:04:50 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/27 19:04:50 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/27 19:04:50 Initializing JWE encryption key from synchronized object
	2025/10/27 19:04:50 Creating in-cluster Sidecar client
	2025/10/27 19:04:50 Successful request to sidecar
	2025/10/27 19:04:50 Serving insecurely on HTTP port: 9090
	
	
	==> storage-provisioner [c4e0eb5c036971070a7a7519de6ec4acefeb2eaeaed46596180f2e7e1d6854fa] <==
	I1027 19:03:19.948753       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-051715_8dac99ea-91c2-4d1d-90a0-dace1cccfc6e!
	W1027 19:03:21.863080       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:03:21.868859       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:03:23.872191       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:03:23.876575       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:03:25.879665       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:03:25.885065       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:03:27.889005       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:03:27.893568       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:03:29.897482       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:03:29.902376       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:03:31.906334       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:03:31.910877       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:03:33.914267       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:03:33.918907       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:03:35.922703       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:03:35.927017       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:03:37.930384       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:03:37.936664       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:03:39.940413       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:03:39.944851       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:03:41.947910       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:03:41.952465       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:03:43.956006       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:03:43.962031       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [e41295fadac04cf2c56412da6e8a91d636b1fc51e0fea7da3cd80db4edfe76f3] <==
	W1027 19:14:27.715089       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:14:29.717899       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:14:29.721855       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:14:31.725291       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:14:31.729845       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:14:33.733366       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:14:33.737602       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:14:35.741052       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:14:35.744971       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:14:37.748959       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:14:37.753644       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:14:39.756935       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:14:39.762594       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:14:41.765877       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:14:41.770168       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:14:43.773381       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:14:43.777233       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:14:45.780771       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:14:45.784626       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:14:47.788313       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:14:47.793727       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:14:49.797342       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:14:49.801674       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:14:51.804857       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:14:51.808963       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
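The storage-provisioner blocks above are dominated by a single repeated warning: the provisioner still watches the deprecated core/v1 Endpoints API, which since v1.33 the server flags in favor of discovery.k8s.io/v1 EndpointSlice. The warnings are harmless noise for this run, but a minimal sketch of reading the same data through the non-deprecated API, using this run's context name (no change to the provisioner itself is implied):

    # EndpointSlice is the non-deprecated view of service endpoints
    kubectl --context functional-051715 get endpointslices -A
    # the deprecated view the provisioner keeps polling every ~2s
    kubectl --context functional-051715 get endpoints -A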
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-051715 -n functional-051715
helpers_test.go:269: (dbg) Run:  kubectl --context functional-051715 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-bv2lb hello-node-connect-7d85dfc575-kgd6p
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-051715 describe pod busybox-mount hello-node-75c85bcc94-bv2lb hello-node-connect-7d85dfc575-kgd6p
helpers_test.go:290: (dbg) kubectl --context functional-051715 describe pod busybox-mount hello-node-75c85bcc94-bv2lb hello-node-connect-7d85dfc575-kgd6p:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-051715/192.168.49.2
	Start Time:       Mon, 27 Oct 2025 19:04:28 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.6
	IPs:
	  IP:  10.244.0.6
	Containers:
	  mount-munger:
	    Container ID:  cri-o://d376911fa2466f896d8e56da249259e773d3b0430b2bf96350167eaf145ee04b
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test; date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 27 Oct 2025 19:04:32 +0000
	      Finished:     Mon, 27 Oct 2025 19:04:32 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cm6jp (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-cm6jp:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  10m   default-scheduler  Successfully assigned default/busybox-mount to functional-051715
	  Normal  Pulling    10m   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     10m   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 710ms (830ms including waiting). Image size: 4631262 bytes.
	  Normal  Created    10m   kubelet            Created container: mount-munger
	  Normal  Started    10m   kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-bv2lb
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-051715/192.168.49.2
	Start Time:       Mon, 27 Oct 2025 19:04:24 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:           10.244.0.4
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7hn6w (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-7hn6w:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/hello-node-75c85bcc94-bv2lb to functional-051715
	  Normal   Pulling    7m25s (x5 over 10m)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m25s (x5 over 10m)  kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m25s (x5 over 10m)  kubelet            Error: ErrImagePull
	  Normal   BackOff    20s (x43 over 10m)   kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     20s (x43 over 10m)   kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-kgd6p
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-051715/192.168.49.2
	Start Time:       Mon, 27 Oct 2025 19:04:51 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.12
	IPs:
	  IP:           10.244.0.12
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kh66p (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-kh66p:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-kgd6p to functional-051715
	  Normal   Pulling    7m13s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m13s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m13s (x5 over 10m)   kubelet            Error: ErrImagePull
	  Normal   BackOff    4m54s (x22 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4m54s (x22 over 10m)  kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (603.02s)
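Every kubelet failure in this section reduces to one CRI-O behavior: "kicbase/echo-server" is an unqualified (short) image name, and with short-name-mode = "enforcing" in the node's registries configuration, a short name that matches multiple registries is rejected instead of resolved. A minimal sketch of the two usual remedies, assuming docker.io is the intended registry and that the configuration lives at the default /etc/containers/registries.conf path (neither assumption is confirmed by this log):

    # remedy 1: fully qualify the image so no short-name resolution happens
    kubectl --context functional-051715 set image deployment/hello-node-connect \
        echo-server=docker.io/kicbase/echo-server:latest

    # remedy 2: relax short-name handling on the node (default config path assumed)
    minikube -p functional-051715 ssh -- \
        "sudo sed -i 's/^short-name-mode.*/short-name-mode = \"permissive\"/' /etc/containers/registries.conf"

Remedy 1 is the safer fix for a test image, since it leaves the node's pull policy untouched.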

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (600.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-051715 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-051715 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-bv2lb" [042f92b9-330e-43cb-ba7d-faddade6b34b] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-051715 -n functional-051715
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-10-27 19:14:24.55372295 +0000 UTC m=+1088.029140143
functional_test.go:1460: (dbg) Run:  kubectl --context functional-051715 describe po hello-node-75c85bcc94-bv2lb -n default
functional_test.go:1460: (dbg) kubectl --context functional-051715 describe po hello-node-75c85bcc94-bv2lb -n default:
Name:             hello-node-75c85bcc94-bv2lb
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-051715/192.168.49.2
Start Time:       Mon, 27 Oct 2025 19:04:24 +0000
Labels:           app=hello-node
pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.4
IPs:
IP:           10.244.0.4
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7hn6w (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-7hn6w:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-75c85bcc94-bv2lb to functional-051715
Normal   Pulling    6m55s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     6m55s (x5 over 10m)     kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     6m55s (x5 over 10m)     kubelet            Error: ErrImagePull
Warning  Failed     4m58s (x20 over 9m59s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m47s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1460: (dbg) Run:  kubectl --context functional-051715 logs hello-node-75c85bcc94-bv2lb -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-051715 logs hello-node-75c85bcc94-bv2lb -n default: exit status 1 (68.920774ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-bv2lb" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-051715 logs hello-node-75c85bcc94-bv2lb -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.66s)
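The DeployApp timeout is the same short-name failure described above, surfaced through the ten-minute pod wait. Rerunning the deployment with a fully qualified image name would sidestep the enforcing resolver entirely; a sketch, with docker.io assumed as the intended registry:

    kubectl --context functional-051715 create deployment hello-node \
        --image=docker.io/kicbase/echo-server
    kubectl --context functional-051715 expose deployment hello-node \
        --type=NodePort --port=8080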

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-051715 image load --daemon kicbase/echo-server:functional-051715 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-051715 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-051715" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.92s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-051715 image load --daemon kicbase/echo-server:functional-051715 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-051715 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-051715" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.92s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-051715
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-051715 image load --daemon kicbase/echo-server:functional-051715 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-051715 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-051715" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.32s)
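For the three image-load failures above, the assertion only checks "minikube image ls" output, so it cannot distinguish a load that silently failed from an image that landed under a differently qualified name (CRI-O stores tags fully qualified, e.g. under a localhost/ or docker.io/ prefix). A quick sketch for inspecting what the runtime actually holds; the grep target is illustrative:

    minikube -p functional-051715 image ls
    # ask the CRI runtime directly; names here are always fully qualified
    minikube -p functional-051715 ssh -- sudo crictl images | grep echo-server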

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-051715 image save kicbase/echo-server:functional-051715 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.32s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-051715 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I1027 19:04:55.782677  395851 out.go:360] Setting OutFile to fd 1 ...
	I1027 19:04:55.782956  395851 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:04:55.782966  395851 out.go:374] Setting ErrFile to fd 2...
	I1027 19:04:55.782970  395851 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:04:55.783355  395851 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-352833/.minikube/bin
	I1027 19:04:55.784020  395851 config.go:182] Loaded profile config "functional-051715": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:04:55.784125  395851 config.go:182] Loaded profile config "functional-051715": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:04:55.784518  395851 cli_runner.go:164] Run: docker container inspect functional-051715 --format={{.State.Status}}
	I1027 19:04:55.803765  395851 ssh_runner.go:195] Run: systemctl --version
	I1027 19:04:55.803818  395851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-051715
	I1027 19:04:55.822702  395851 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33150 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/functional-051715/id_rsa Username:docker}
	I1027 19:04:55.922341  395851 cache_images.go:290] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar
	W1027 19:04:55.922414  395851 cache_images.go:254] Failed to load cached images for "functional-051715": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar: no such file or directory
	I1027 19:04:55.922446  395851 cache_images.go:266] failed pushing to: functional-051715

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.21s)
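This failure is purely a cascade: the "image save" in ImageSaveToFile never produced the tarball, so the load fails on stat before touching the cluster. A minimal guard that makes the dependency explicit, using the binary and paths from this run:

    TAR=/home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar
    out/minikube-linux-amd64 -p functional-051715 image save \
        kicbase/echo-server:functional-051715 "$TAR"
    # only attempt the load once the save actually wrote a non-empty file
    [ -s "$TAR" ] && out/minikube-linux-amd64 -p functional-051715 image load "$TAR"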

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-051715
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-051715 image save --daemon kicbase/echo-server:functional-051715 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-051715
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-051715: exit status 1 (18.526182ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-051715

                                                
                                                
** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-051715

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.37s)
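Here the test expects "image save --daemon" to materialize localhost/kicbase/echo-server:functional-051715 in the host Docker daemon; since the image was never present in CRI-O (see the load failures above), there was nothing to export. When reproducing locally it can help to check both name forms, because CRI-O and Docker qualify short names differently; a sketch:

    # crio-style localhost/ prefix first, then the bare tag on the host daemon
    docker image inspect localhost/kicbase/echo-server:functional-051715 \
        || docker image inspect kicbase/echo-server:functional-051715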

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-051715 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-051715 service --namespace=default --https --url hello-node: exit status 115 (553.295872ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:32068
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-051715 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.55s)
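"service --https --url" fails with SVC_UNREACHABLE even though it prints a URL: the NodePort exists, but no ready pod backs it, because the echo-server pods are stuck in ImagePullBackOff. A sketch for confirming that split directly, using the standard EndpointSlice label selector:

    kubectl --context functional-051715 get svc hello-node
    # an empty or NotReady slice here matches the "no running pod" error
    kubectl --context functional-051715 get endpointslices \
        -l kubernetes.io/service-name=hello-node

The Format and URL subtests below fail for the identical reason.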

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-051715 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-051715 service hello-node --url --format={{.IP}}: exit status 115 (558.096326ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-051715 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.56s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-051715 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-051715 service hello-node --url: exit status 115 (555.557779ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:32068
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-051715 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:32068
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.56s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (2.34s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-895727 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p json-output-895727 --output=json --user=testUser: exit status 80 (2.340185171s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"cfe5d551-49f6-4c84-9be1-e2cbf47afb52","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-895727 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"6e54a2ad-e5ba-4453-a1a8-bf7530441728","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-10-27T19:23:37Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"460750d2-5744-4392-9234-fa94d0fdb7c2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 pause -p json-output-895727 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (2.34s)
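Both this pause failure and the unpause failure below share one root cause: minikube enumerates containers with `sudo runc list -f json`, and runc exits 1 because its state directory, /run/runc, is missing inside the node. A hypothetical standalone reproduction of that probe (runc's --root flag and its /run/runc default are documented runc behavior; the wrapper itself is only a sketch):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Same invocation the minikube pause path issues over SSH; --root names
	// runc's container-state directory, which defaults to /run/runc for root.
	cmd := exec.Command("sudo", "runc", "--root", "/run/runc", "list", "-f", "json")
	out, err := cmd.CombinedOutput()
	if err != nil {
		// With /run/runc absent this reproduces the GUEST_PAUSE error above.
		fmt.Fprintf(os.Stderr, "runc list failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("runc containers: %s\n", out)
}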

                                                
                                    
TestJSONOutput/unpause/Command (1.77s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-895727 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 unpause -p json-output-895727 --output=json --user=testUser: exit status 80 (1.764935843s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"ff0338dd-5f0b-49a8-9792-849deb466ca9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-895727 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"1061b811-47ea-4994-8128-9cecc2eac70c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-10-27T19:23:39Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"e846b24d-5852-405e-a9ee-faf815cc0db6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 unpause -p json-output-895727 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.77s)

                                                
                                    
TestPause/serial/Pause (6.62s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-249140 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p pause-249140 --alsologtostderr -v=5: exit status 80 (1.975250823s)

                                                
                                                
-- stdout --
	* Pausing node pause-249140 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1027 19:37:04.447580  542765 out.go:360] Setting OutFile to fd 1 ...
	I1027 19:37:04.447891  542765 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:37:04.447911  542765 out.go:374] Setting ErrFile to fd 2...
	I1027 19:37:04.447918  542765 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:37:04.448303  542765 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-352833/.minikube/bin
	I1027 19:37:04.448674  542765 out.go:368] Setting JSON to false
	I1027 19:37:04.448748  542765 mustload.go:65] Loading cluster: pause-249140
	I1027 19:37:04.449235  542765 config.go:182] Loaded profile config "pause-249140": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:37:04.449789  542765 cli_runner.go:164] Run: docker container inspect pause-249140 --format={{.State.Status}}
	I1027 19:37:04.474407  542765 host.go:66] Checking if "pause-249140" exists ...
	I1027 19:37:04.474828  542765 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 19:37:04.564286  542765 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-27 19:37:04.548087732 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 19:37:04.565284  542765 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-249140 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1027 19:37:04.567644  542765 out.go:179] * Pausing node pause-249140 ... 
	I1027 19:37:04.569674  542765 host.go:66] Checking if "pause-249140" exists ...
	I1027 19:37:04.570013  542765 ssh_runner.go:195] Run: systemctl --version
	I1027 19:37:04.570080  542765 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-249140
	I1027 19:37:04.594866  542765 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33360 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/pause-249140/id_rsa Username:docker}
	I1027 19:37:04.706616  542765 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 19:37:04.726403  542765 pause.go:52] kubelet running: true
	I1027 19:37:04.726485  542765 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1027 19:37:04.898507  542765 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1027 19:37:04.898664  542765 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1027 19:37:04.988886  542765 cri.go:89] found id: "a5d43c06cdefd0fd790cb0418ec7193d78de34b9aa196d7434e89fa6e058a9e2"
	I1027 19:37:04.988917  542765 cri.go:89] found id: "23504db13cbd1fd12a985de0d72ca202ac317afa3c2b2e13010bc502e000e818"
	I1027 19:37:04.988921  542765 cri.go:89] found id: "01bf760b3e7b21a98d5df158a80b1c0b879013421d7c5e47ff7903915caf96a9"
	I1027 19:37:04.988925  542765 cri.go:89] found id: "d69010095e3eba77e809b777fa9e622cf5c9528a2eab5611100fa5eed6283461"
	I1027 19:37:04.988927  542765 cri.go:89] found id: "c5b2eb2a54f889f17b3db8afb09c190f60784cb1f08c460017039d3d947aeaaf"
	I1027 19:37:04.988930  542765 cri.go:89] found id: "01712f1073762c52020031153783123eaffdca1ca62a7f9798f8eee04cb57fd9"
	I1027 19:37:04.988933  542765 cri.go:89] found id: "e4828303cd2a90f2436dec99343b7ffa44a1eb586b82513fc0a7a01f1a37cd0d"
	I1027 19:37:04.988935  542765 cri.go:89] found id: ""
	I1027 19:37:04.988977  542765 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 19:37:05.005283  542765 retry.go:31] will retry after 310.088007ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T19:37:05Z" level=error msg="open /run/runc: no such file or directory"
	I1027 19:37:05.315777  542765 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 19:37:05.331580  542765 pause.go:52] kubelet running: false
	I1027 19:37:05.331646  542765 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1027 19:37:05.458453  542765 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1027 19:37:05.458545  542765 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1027 19:37:05.544390  542765 cri.go:89] found id: "a5d43c06cdefd0fd790cb0418ec7193d78de34b9aa196d7434e89fa6e058a9e2"
	I1027 19:37:05.544413  542765 cri.go:89] found id: "23504db13cbd1fd12a985de0d72ca202ac317afa3c2b2e13010bc502e000e818"
	I1027 19:37:05.544417  542765 cri.go:89] found id: "01bf760b3e7b21a98d5df158a80b1c0b879013421d7c5e47ff7903915caf96a9"
	I1027 19:37:05.544420  542765 cri.go:89] found id: "d69010095e3eba77e809b777fa9e622cf5c9528a2eab5611100fa5eed6283461"
	I1027 19:37:05.544423  542765 cri.go:89] found id: "c5b2eb2a54f889f17b3db8afb09c190f60784cb1f08c460017039d3d947aeaaf"
	I1027 19:37:05.544425  542765 cri.go:89] found id: "01712f1073762c52020031153783123eaffdca1ca62a7f9798f8eee04cb57fd9"
	I1027 19:37:05.544428  542765 cri.go:89] found id: "e4828303cd2a90f2436dec99343b7ffa44a1eb586b82513fc0a7a01f1a37cd0d"
	I1027 19:37:05.544430  542765 cri.go:89] found id: ""
	I1027 19:37:05.544480  542765 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 19:37:05.557750  542765 retry.go:31] will retry after 514.36924ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T19:37:05Z" level=error msg="open /run/runc: no such file or directory"
	I1027 19:37:06.072286  542765 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 19:37:06.093521  542765 pause.go:52] kubelet running: false
	I1027 19:37:06.093649  542765 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1027 19:37:06.219177  542765 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1027 19:37:06.219285  542765 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1027 19:37:06.299696  542765 cri.go:89] found id: "a5d43c06cdefd0fd790cb0418ec7193d78de34b9aa196d7434e89fa6e058a9e2"
	I1027 19:37:06.299723  542765 cri.go:89] found id: "23504db13cbd1fd12a985de0d72ca202ac317afa3c2b2e13010bc502e000e818"
	I1027 19:37:06.299729  542765 cri.go:89] found id: "01bf760b3e7b21a98d5df158a80b1c0b879013421d7c5e47ff7903915caf96a9"
	I1027 19:37:06.299754  542765 cri.go:89] found id: "d69010095e3eba77e809b777fa9e622cf5c9528a2eab5611100fa5eed6283461"
	I1027 19:37:06.299759  542765 cri.go:89] found id: "c5b2eb2a54f889f17b3db8afb09c190f60784cb1f08c460017039d3d947aeaaf"
	I1027 19:37:06.299763  542765 cri.go:89] found id: "01712f1073762c52020031153783123eaffdca1ca62a7f9798f8eee04cb57fd9"
	I1027 19:37:06.299768  542765 cri.go:89] found id: "e4828303cd2a90f2436dec99343b7ffa44a1eb586b82513fc0a7a01f1a37cd0d"
	I1027 19:37:06.299773  542765 cri.go:89] found id: ""
	I1027 19:37:06.299819  542765 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 19:37:06.314976  542765 out.go:203] 
	W1027 19:37:06.316367  542765 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T19:37:06Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T19:37:06Z" level=error msg="open /run/runc: no such file or directory"
	
	W1027 19:37:06.316392  542765 out.go:285] * 
	* 
	W1027 19:37:06.321062  542765 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1027 19:37:06.322258  542765 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-amd64 pause -p pause-249140 --alsologtostderr -v=5" : exit status 80
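The retry.go entries in the trace show the shape of minikube's failure handling here: re-run the runc listing after growing delays (310ms, then 514ms) and only then surface GUEST_PAUSE. A minimal sketch of that retry pattern, assuming the operation is a plain func() error (the 1.5x growth factor is illustrative, not minikube's exact backoff):

package main

import (
	"fmt"
	"time"
)

// retryAfter re-runs op up to attempts times, waiting a growing delay
// between failures, and returns the last error if every attempt fails.
func retryAfter(attempts int, initial time.Duration, op func() error) error {
	delay := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
		delay = delay * 3 / 2
	}
	return err
}

func main() {
	calls := 0
	err := retryAfter(3, 310*time.Millisecond, func() error {
		calls++
		return fmt.Errorf("list running: runc: exit status 1 (attempt %d)", calls)
	})
	fmt.Println("final error:", err)
}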
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-249140
helpers_test.go:243: (dbg) docker inspect pause-249140:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6efc7283d4ecb5ee4fad19e014e8a76a8c44fbbe811100a649401833144cfab8",
	        "Created": "2025-10-27T19:36:14.956368426Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 527537,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-27T19:36:15.01895205Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/6efc7283d4ecb5ee4fad19e014e8a76a8c44fbbe811100a649401833144cfab8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6efc7283d4ecb5ee4fad19e014e8a76a8c44fbbe811100a649401833144cfab8/hostname",
	        "HostsPath": "/var/lib/docker/containers/6efc7283d4ecb5ee4fad19e014e8a76a8c44fbbe811100a649401833144cfab8/hosts",
	        "LogPath": "/var/lib/docker/containers/6efc7283d4ecb5ee4fad19e014e8a76a8c44fbbe811100a649401833144cfab8/6efc7283d4ecb5ee4fad19e014e8a76a8c44fbbe811100a649401833144cfab8-json.log",
	        "Name": "/pause-249140",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "pause-249140:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-249140",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6efc7283d4ecb5ee4fad19e014e8a76a8c44fbbe811100a649401833144cfab8",
	                "LowerDir": "/var/lib/docker/overlay2/49bf847aabcc6b5107a816bebafcb4ca855291acf255a2bb30c0ce7ed8e23ddb-init/diff:/var/lib/docker/overlay2/71b61ec94610a35f2d924dec358052d4c154c36b3fe219802f60246ca2dc7f45/diff",
	                "MergedDir": "/var/lib/docker/overlay2/49bf847aabcc6b5107a816bebafcb4ca855291acf255a2bb30c0ce7ed8e23ddb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/49bf847aabcc6b5107a816bebafcb4ca855291acf255a2bb30c0ce7ed8e23ddb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/49bf847aabcc6b5107a816bebafcb4ca855291acf255a2bb30c0ce7ed8e23ddb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "pause-249140",
	                "Source": "/var/lib/docker/volumes/pause-249140/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-249140",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-249140",
	                "name.minikube.sigs.k8s.io": "pause-249140",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "136d01220ee58f0bd343a3d7364eafd259f5bdcfcd6c77ef3dcfa9d2029195f7",
	            "SandboxKey": "/var/run/docker/netns/136d01220ee5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33360"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33361"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33364"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33362"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33363"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-249140": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "46:d8:28:05:bc:3e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "eded7341b4a41d9b7051989e017c507ac38ddb8f71aeab44145a72dd52b221a7",
	                    "EndpointID": "c1cee8e61bde4e11e9f4a417f4ff324b3bf5a909d547eebb458047921b8ae57d",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-249140",
	                        "6efc7283d4ec"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
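For reference, the repeated cli_runner calls in the trace resolve the node's SSH endpoint with a Go template over docker inspect; against the JSON above, the 22/tcp mapping is 127.0.0.1:33360. A hypothetical helper doing the same lookup from Go:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Template copied from the cli_runner lines in this report; it selects the
	// first host port bound to the container's 22/tcp.
	format := `'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'`
	out, err := exec.Command("docker", "container", "inspect", "-f", format, "pause-249140").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Printf("ssh host port: %s", out)
}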
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-249140 -n pause-249140
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-249140 -n pause-249140: exit status 2 (423.712544ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-249140 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-249140 logs -n 25: (1.376792144s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                           ARGS                                                                                                            │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p scheduled-stop-706609 --schedule 5m                                                                                                                                                                                    │ scheduled-stop-706609       │ jenkins │ v1.37.0 │ 27 Oct 25 19:34 UTC │                     │
	│ stop    │ -p scheduled-stop-706609 --schedule 5m                                                                                                                                                                                    │ scheduled-stop-706609       │ jenkins │ v1.37.0 │ 27 Oct 25 19:34 UTC │                     │
	│ stop    │ -p scheduled-stop-706609 --schedule 15s                                                                                                                                                                                   │ scheduled-stop-706609       │ jenkins │ v1.37.0 │ 27 Oct 25 19:34 UTC │                     │
	│ stop    │ -p scheduled-stop-706609 --schedule 15s                                                                                                                                                                                   │ scheduled-stop-706609       │ jenkins │ v1.37.0 │ 27 Oct 25 19:34 UTC │                     │
	│ stop    │ -p scheduled-stop-706609 --schedule 15s                                                                                                                                                                                   │ scheduled-stop-706609       │ jenkins │ v1.37.0 │ 27 Oct 25 19:34 UTC │                     │
	│ stop    │ -p scheduled-stop-706609 --cancel-scheduled                                                                                                                                                                               │ scheduled-stop-706609       │ jenkins │ v1.37.0 │ 27 Oct 25 19:34 UTC │ 27 Oct 25 19:34 UTC │
	│ stop    │ -p scheduled-stop-706609 --schedule 15s                                                                                                                                                                                   │ scheduled-stop-706609       │ jenkins │ v1.37.0 │ 27 Oct 25 19:35 UTC │                     │
	│ stop    │ -p scheduled-stop-706609 --schedule 15s                                                                                                                                                                                   │ scheduled-stop-706609       │ jenkins │ v1.37.0 │ 27 Oct 25 19:35 UTC │                     │
	│ stop    │ -p scheduled-stop-706609 --schedule 15s                                                                                                                                                                                   │ scheduled-stop-706609       │ jenkins │ v1.37.0 │ 27 Oct 25 19:35 UTC │ 27 Oct 25 19:35 UTC │
	│ delete  │ -p scheduled-stop-706609                                                                                                                                                                                                  │ scheduled-stop-706609       │ jenkins │ v1.37.0 │ 27 Oct 25 19:35 UTC │ 27 Oct 25 19:35 UTC │
	│ start   │ -p insufficient-storage-321540 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio                                                                                                          │ insufficient-storage-321540 │ jenkins │ v1.37.0 │ 27 Oct 25 19:35 UTC │                     │
	│ delete  │ -p insufficient-storage-321540                                                                                                                                                                                            │ insufficient-storage-321540 │ jenkins │ v1.37.0 │ 27 Oct 25 19:36 UTC │ 27 Oct 25 19:36 UTC │
	│ start   │ -p pause-249140 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                                                                                                 │ pause-249140                │ jenkins │ v1.37.0 │ 27 Oct 25 19:36 UTC │ 27 Oct 25 19:36 UTC │
	│ start   │ -p force-systemd-env-282715 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                │ force-systemd-env-282715    │ jenkins │ v1.37.0 │ 27 Oct 25 19:36 UTC │ 27 Oct 25 19:36 UTC │
	│ start   │ -p offline-crio-221701 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio                                                                                                         │ offline-crio-221701         │ jenkins │ v1.37.0 │ 27 Oct 25 19:36 UTC │ 27 Oct 25 19:36 UTC │
	│ start   │ -p force-systemd-flag-422872 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                               │ force-systemd-flag-422872   │ jenkins │ v1.37.0 │ 27 Oct 25 19:36 UTC │ 27 Oct 25 19:36 UTC │
	│ delete  │ -p force-systemd-env-282715                                                                                                                                                                                               │ force-systemd-env-282715    │ jenkins │ v1.37.0 │ 27 Oct 25 19:36 UTC │ 27 Oct 25 19:36 UTC │
	│ start   │ -p cert-expiration-368442 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                    │ cert-expiration-368442      │ jenkins │ v1.37.0 │ 27 Oct 25 19:36 UTC │ 27 Oct 25 19:37 UTC │
	│ ssh     │ force-systemd-flag-422872 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                      │ force-systemd-flag-422872   │ jenkins │ v1.37.0 │ 27 Oct 25 19:36 UTC │ 27 Oct 25 19:36 UTC │
	│ delete  │ -p force-systemd-flag-422872                                                                                                                                                                                              │ force-systemd-flag-422872   │ jenkins │ v1.37.0 │ 27 Oct 25 19:36 UTC │ 27 Oct 25 19:36 UTC │
	│ start   │ -p cert-options-638768 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio │ cert-options-638768         │ jenkins │ v1.37.0 │ 27 Oct 25 19:36 UTC │                     │
	│ start   │ -p pause-249140 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                          │ pause-249140                │ jenkins │ v1.37.0 │ 27 Oct 25 19:36 UTC │ 27 Oct 25 19:37 UTC │
	│ delete  │ -p offline-crio-221701                                                                                                                                                                                                    │ offline-crio-221701         │ jenkins │ v1.37.0 │ 27 Oct 25 19:36 UTC │ 27 Oct 25 19:36 UTC │
	│ start   │ -p missing-upgrade-345161 --memory=3072 --driver=docker  --container-runtime=crio                                                                                                                                         │ missing-upgrade-345161      │ jenkins │ v1.32.0 │ 27 Oct 25 19:37 UTC │                     │
	│ pause   │ -p pause-249140 --alsologtostderr -v=5                                                                                                                                                                                    │ pause-249140                │ jenkins │ v1.37.0 │ 27 Oct 25 19:37 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 19:37:00
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 19:37:00.474478  541957 out.go:296] Setting OutFile to fd 1 ...
	I1027 19:37:00.474657  541957 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1027 19:37:00.474663  541957 out.go:309] Setting ErrFile to fd 2...
	I1027 19:37:00.474669  541957 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1027 19:37:00.474931  541957 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-352833/.minikube/bin
	I1027 19:37:00.475516  541957 out.go:303] Setting JSON to false
	I1027 19:37:00.476977  541957 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":8369,"bootTime":1761585451,"procs":309,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1027 19:37:00.477044  541957 start.go:138] virtualization: kvm guest
	I1027 19:37:00.479463  541957 out.go:177] * [missing-upgrade-345161] minikube v1.32.0 on Ubuntu 22.04 (kvm/amd64)
	I1027 19:37:00.481799  541957 out.go:177]   - MINIKUBE_LOCATION=21801
	I1027 19:37:00.481839  541957 notify.go:220] Checking for updates...
	I1027 19:37:00.486707  541957 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 19:37:00.487885  541957 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/21801-352833/kubeconfig
	I1027 19:37:00.491854  541957 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21801-352833/.minikube
	I1027 19:37:00.493380  541957 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1027 19:37:00.494861  541957 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 19:37:00.496974  541957 config.go:182] Loaded profile config "cert-expiration-368442": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:37:00.497129  541957 config.go:182] Loaded profile config "cert-options-638768": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:37:00.497335  541957 config.go:182] Loaded profile config "pause-249140": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:37:00.497452  541957 driver.go:378] Setting default libvirt URI to qemu:///system
	I1027 19:37:00.528832  541957 docker.go:122] docker version: linux-28.5.1:Docker Engine - Community
	I1027 19:37:00.528961  541957 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 19:37:00.569061  541957 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-352833/.minikube/last_update_check: {Name:mke3f866af514ce2abb811772b393ca67a8a2fe8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:37:00.573446  541957 out.go:177] * minikube 1.37.0 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.37.0
	I1027 19:37:00.575723  541957 out.go:177] * To disable this notice, run: 'minikube config set WantUpdateNotification false'
	
	I1027 19:37:00.601823  541957 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:78 SystemTime:2025-10-27 19:37:00.588989899 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 19:37:00.601971  541957 docker.go:295] overlay module found
	I1027 19:37:00.603850  541957 out.go:177] * Using the docker driver based on user configuration
	I1027 19:36:57.147771  540831 out.go:252] * Updating the running docker "pause-249140" container ...
	I1027 19:36:57.147836  540831 machine.go:93] provisionDockerMachine start ...
	I1027 19:36:57.147929  540831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-249140
	I1027 19:36:57.173511  540831 main.go:141] libmachine: Using SSH client type: native
	I1027 19:36:57.173931  540831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33360 <nil> <nil>}
	I1027 19:36:57.173951  540831 main.go:141] libmachine: About to run SSH command:
	hostname
	I1027 19:36:57.331636  540831 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-249140
	
	I1027 19:36:57.332885  540831 ubuntu.go:182] provisioning hostname "pause-249140"
	I1027 19:36:57.332990  540831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-249140
	I1027 19:36:57.358051  540831 main.go:141] libmachine: Using SSH client type: native
	I1027 19:36:57.358427  540831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33360 <nil> <nil>}
	I1027 19:36:57.358453  540831 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-249140 && echo "pause-249140" | sudo tee /etc/hostname
	I1027 19:36:57.543331  540831 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-249140
	
	I1027 19:36:57.543460  540831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-249140
	I1027 19:36:57.566417  540831 main.go:141] libmachine: Using SSH client type: native
	I1027 19:36:57.566776  540831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33360 <nil> <nil>}
	I1027 19:36:57.566801  540831 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-249140' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-249140/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-249140' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1027 19:36:57.718185  540831 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1027 19:36:57.718226  540831 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21801-352833/.minikube CaCertPath:/home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21801-352833/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21801-352833/.minikube}
	I1027 19:36:57.718254  540831 ubuntu.go:190] setting up certificates
	I1027 19:36:57.718271  540831 provision.go:84] configureAuth start
	I1027 19:36:57.718346  540831 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-249140
	I1027 19:36:57.745346  540831 provision.go:143] copyHostCerts
	I1027 19:36:57.745432  540831 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-352833/.minikube/ca.pem, removing ...
	I1027 19:36:57.745453  540831 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-352833/.minikube/ca.pem
	I1027 19:36:57.745539  540831 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21801-352833/.minikube/ca.pem (1078 bytes)
	I1027 19:36:57.745804  540831 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-352833/.minikube/cert.pem, removing ...
	I1027 19:36:57.745821  540831 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-352833/.minikube/cert.pem
	I1027 19:36:57.745862  540831 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21801-352833/.minikube/cert.pem (1123 bytes)
	I1027 19:36:57.745950  540831 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-352833/.minikube/key.pem, removing ...
	I1027 19:36:57.745972  540831 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-352833/.minikube/key.pem
	I1027 19:36:57.746009  540831 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21801-352833/.minikube/key.pem (1679 bytes)
	I1027 19:36:57.746079  540831 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21801-352833/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca-key.pem org=jenkins.pause-249140 san=[127.0.0.1 192.168.85.2 localhost minikube pause-249140]
	I1027 19:36:57.840076  540831 provision.go:177] copyRemoteCerts
	I1027 19:36:57.840160  540831 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1027 19:36:57.840206  540831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-249140
	I1027 19:36:57.866752  540831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33360 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/pause-249140/id_rsa Username:docker}
	I1027 19:36:57.976345  540831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1027 19:36:58.005503  540831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1027 19:36:58.028851  540831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1027 19:36:58.052167  540831 provision.go:87] duration metric: took 333.878373ms to configureAuth
	I1027 19:36:58.052201  540831 ubuntu.go:206] setting minikube options for container-runtime
	I1027 19:36:58.052469  540831 config.go:182] Loaded profile config "pause-249140": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:36:58.052598  540831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-249140
	I1027 19:36:58.078386  540831 main.go:141] libmachine: Using SSH client type: native
	I1027 19:36:58.078755  540831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33360 <nil> <nil>}
	I1027 19:36:58.078780  540831 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1027 19:36:58.468658  540831 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1027 19:36:58.468688  540831 machine.go:96] duration metric: took 1.320841118s to provisionDockerMachine
	I1027 19:36:58.468703  540831 start.go:293] postStartSetup for "pause-249140" (driver="docker")
	I1027 19:36:58.468717  540831 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1027 19:36:58.468783  540831 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1027 19:36:58.468843  540831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-249140
	I1027 19:36:58.498376  540831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33360 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/pause-249140/id_rsa Username:docker}
	I1027 19:36:58.622114  540831 ssh_runner.go:195] Run: cat /etc/os-release
	I1027 19:36:58.628150  540831 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1027 19:36:58.628186  540831 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1027 19:36:58.628201  540831 filesync.go:126] Scanning /home/jenkins/minikube-integration/21801-352833/.minikube/addons for local assets ...
	I1027 19:36:58.628266  540831 filesync.go:126] Scanning /home/jenkins/minikube-integration/21801-352833/.minikube/files for local assets ...
	I1027 19:36:58.628417  540831 filesync.go:149] local asset: /home/jenkins/minikube-integration/21801-352833/.minikube/files/etc/ssl/certs/3564152.pem -> 3564152.pem in /etc/ssl/certs
	I1027 19:36:58.628541  540831 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1027 19:36:58.639507  540831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/files/etc/ssl/certs/3564152.pem --> /etc/ssl/certs/3564152.pem (1708 bytes)
	I1027 19:36:58.664721  540831 start.go:296] duration metric: took 195.998657ms for postStartSetup
	I1027 19:36:58.664811  540831 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 19:36:58.664866  540831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-249140
	I1027 19:36:58.689912  540831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33360 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/pause-249140/id_rsa Username:docker}
	I1027 19:36:58.804043  540831 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1027 19:36:58.809622  540831 fix.go:56] duration metric: took 1.688757512s for fixHost
	I1027 19:36:58.809659  540831 start.go:83] releasing machines lock for "pause-249140", held for 1.688822938s
	I1027 19:36:58.809767  540831 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-249140
	I1027 19:36:58.838499  540831 ssh_runner.go:195] Run: cat /version.json
	I1027 19:36:58.838579  540831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-249140
	I1027 19:36:58.838577  540831 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1027 19:36:58.838669  540831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-249140
	I1027 19:36:58.876710  540831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33360 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/pause-249140/id_rsa Username:docker}
	I1027 19:36:58.881500  540831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33360 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/pause-249140/id_rsa Username:docker}
	I1027 19:36:59.023681  540831 ssh_runner.go:195] Run: systemctl --version
	I1027 19:36:59.112886  540831 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1027 19:36:59.171573  540831 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1027 19:36:59.179120  540831 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1027 19:36:59.179523  540831 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1027 19:36:59.191305  540831 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
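	(Note: the find above would rename any stray bridge/podman CNI config out of the way so that only the CNI minikube selects later (kindnet here) stays active; illustratively, had one existed:)
	  # hypothetical filename, for illustration only:
	  sudo mv /etc/cni/net.d/100-crio-bridge.conflist /etc/cni/net.d/100-crio-bridge.conflist.mk_disabled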
	I1027 19:36:59.191346  540831 start.go:495] detecting cgroup driver to use...
	I1027 19:36:59.191385  540831 detect.go:190] detected "systemd" cgroup driver on host os
	I1027 19:36:59.191444  540831 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1027 19:36:59.212766  540831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1027 19:36:59.234549  540831 docker.go:218] disabling cri-docker service (if available) ...
	I1027 19:36:59.234620  540831 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1027 19:36:59.258566  540831 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1027 19:36:59.276038  540831 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1027 19:36:59.441754  540831 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1027 19:36:59.569403  540831 docker.go:234] disabling docker service ...
	I1027 19:36:59.569488  540831 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1027 19:36:59.586591  540831 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1027 19:36:59.602218  540831 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1027 19:36:59.732223  540831 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1027 19:36:59.850894  540831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1027 19:36:59.867171  540831 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1027 19:36:59.884292  540831 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1027 19:36:59.884365  540831 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:36:59.896091  540831 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1027 19:36:59.896182  540831 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:36:59.907800  540831 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:36:59.919130  540831 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:36:59.929990  540831 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1027 19:36:59.941180  540831 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:36:59.953872  540831 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:36:59.963765  540831 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:36:59.974693  540831 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1027 19:36:59.984502  540831 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1027 19:36:59.993961  540831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 19:37:00.159904  540831 ssh_runner.go:195] Run: sudo systemctl restart crio
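	(Note: a hedged reconstruction of what the sed/grep edits above leave behind in /etc/crio/crio.conf.d/02-crio.conf; verify with:)
	  grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	  # expected, per the edits: pause_image = "registry.k8s.io/pause:3.10.1",
	  # cgroup_manager = "systemd", conmon_cgroup = "pod", and a default_sysctls
	  # entry "net.ipv4.ip_unprivileged_port_start=0"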
	I1027 19:37:00.550789  540831 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1027 19:37:00.550895  540831 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1027 19:37:00.557111  540831 start.go:563] Will wait 60s for crictl version
	I1027 19:37:00.557368  540831 ssh_runner.go:195] Run: which crictl
	I1027 19:37:00.562638  540831 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1027 19:37:00.598455  540831 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
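	(Note: RuntimeName/RuntimeVersion above come from CRI's Version call; the bare crictl invocations here resolve the socket via the /etc/crictl.yaml written at 19:36:59.867, and an explicit-endpoint equivalent would be:)
	  sudo /usr/local/bin/crictl --runtime-endpoint unix:///var/run/crio/crio.sock version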
	I1027 19:37:00.598558  540831 ssh_runner.go:195] Run: crio --version
	I1027 19:37:00.639740  540831 ssh_runner.go:195] Run: crio --version
	I1027 19:37:00.684118  540831 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1027 19:37:00.605643  541957 start.go:298] selected driver: docker
	I1027 19:37:00.605657  541957 start.go:902] validating driver "docker" against <nil>
	I1027 19:37:00.605673  541957 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 19:37:00.606602  541957 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 19:37:00.683677  541957 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:78 SystemTime:2025-10-27 19:37:00.671550262 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 19:37:00.683894  541957 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1027 19:37:00.684113  541957 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1027 19:37:00.686214  541957 out.go:177] * Using Docker driver with root privileges
	I1027 19:37:00.687500  541957 cni.go:84] Creating CNI manager for ""
	I1027 19:37:00.687518  541957 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 19:37:00.687532  541957 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1027 19:37:00.687548  541957 start_flags.go:323] config:
	{Name:missing-upgrade-345161 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:missing-upgrade-345161 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1027 19:37:00.689153  541957 out.go:177] * Starting control plane node missing-upgrade-345161 in cluster missing-upgrade-345161
	I1027 19:37:00.690619  541957 cache.go:121] Beginning downloading kic base image for docker with crio
	I1027 19:37:00.692111  541957 out.go:177] * Pulling base image ...
	I1027 19:37:00.693410  541957 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1027 19:37:00.693506  541957 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local docker daemon
	I1027 19:37:00.714531  541957 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 to local cache
	I1027 19:37:00.714750  541957 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local cache directory
	I1027 19:37:00.714783  541957 image.go:118] Writing gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 to local cache
	I1027 19:37:00.724647  541957 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.3/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1027 19:37:00.724675  541957 cache.go:56] Caching tarball of preloaded images
	I1027 19:37:00.724845  541957 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1027 19:37:00.726766  541957 out.go:177] * Downloading Kubernetes v1.28.3 preload ...
	I1027 19:37:00.685552  540831 cli_runner.go:164] Run: docker network inspect pause-249140 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 19:37:00.707733  540831 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1027 19:37:00.713564  540831 kubeadm.go:883] updating cluster {Name:pause-249140 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-249140 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1027 19:37:00.713756  540831 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 19:37:00.713827  540831 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 19:37:00.756808  540831 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 19:37:00.756830  540831 crio.go:433] Images already preloaded, skipping extraction
	I1027 19:37:00.756887  540831 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 19:37:00.788381  540831 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 19:37:00.788408  540831 cache_images.go:85] Images are preloaded, skipping loading
	I1027 19:37:00.788419  540831 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1027 19:37:00.788559  540831 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-249140 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-249140 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
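	(Note: the [Unit]/[Service] text above is the systemd drop-in that the 362-byte scp below places at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf; to inspect the merged unit afterwards, a sketch:)
	  systemctl cat kubelet   # prints kubelet.service plus the 10-kubeadm.conf override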
	I1027 19:37:00.788645  540831 ssh_runner.go:195] Run: crio config
	I1027 19:37:00.848546  540831 cni.go:84] Creating CNI manager for ""
	I1027 19:37:00.848571  540831 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 19:37:00.848590  540831 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1027 19:37:00.848614  540831 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-249140 NodeName:pause-249140 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1027 19:37:00.848774  540831 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-249140"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1027 19:37:00.848856  540831 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1027 19:37:00.860084  540831 binaries.go:44] Found k8s binaries, skipping transfer
	I1027 19:37:00.860178  540831 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1027 19:37:00.872191  540831 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1027 19:37:00.891234  540831 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1027 19:37:00.911565  540831 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
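	(Note: the 2208-byte kubeadm.yaml.new staged above is the three-document config printed at kubeadm.go:196; a hedged pre-check before the restart path decides whether to reuse it:)
	  sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new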
	I1027 19:37:00.933714  540831 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1027 19:37:00.940653  540831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 19:37:01.130373  540831 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 19:37:01.147750  540831 certs.go:69] Setting up /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/pause-249140 for IP: 192.168.85.2
	I1027 19:37:01.147780  540831 certs.go:195] generating shared ca certs ...
	I1027 19:37:01.147848  540831 certs.go:227] acquiring lock for ca certs: {Name:mk4bdbca32068f6f817fc35fdc496e961dc3e0d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:37:01.148021  540831 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21801-352833/.minikube/ca.key
	I1027 19:37:01.148098  540831 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21801-352833/.minikube/proxy-client-ca.key
	I1027 19:37:01.148120  540831 certs.go:257] generating profile certs ...
	I1027 19:37:01.148287  540831 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/pause-249140/client.key
	I1027 19:37:01.148437  540831 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/pause-249140/apiserver.key.379a31ff
	I1027 19:37:01.148505  540831 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/pause-249140/proxy-client.key
	I1027 19:37:01.148668  540831 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/356415.pem (1338 bytes)
	W1027 19:37:01.148716  540831 certs.go:480] ignoring /home/jenkins/minikube-integration/21801-352833/.minikube/certs/356415_empty.pem, impossibly tiny 0 bytes
	I1027 19:37:01.148731  540831 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca-key.pem (1675 bytes)
	I1027 19:37:01.148768  540831 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca.pem (1078 bytes)
	I1027 19:37:01.148799  540831 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/cert.pem (1123 bytes)
	I1027 19:37:01.148837  540831 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/key.pem (1679 bytes)
	I1027 19:37:01.148899  540831 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-352833/.minikube/files/etc/ssl/certs/3564152.pem (1708 bytes)
	I1027 19:37:01.149772  540831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1027 19:37:01.174155  540831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1027 19:37:01.198387  540831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1027 19:37:01.221001  540831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1027 19:37:01.249425  540831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/pause-249140/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1027 19:37:01.280257  540831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/pause-249140/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1027 19:37:01.315718  540831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/pause-249140/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1027 19:37:01.346274  540831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/pause-249140/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1027 19:37:01.374052  540831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/certs/356415.pem --> /usr/share/ca-certificates/356415.pem (1338 bytes)
	I1027 19:37:01.407365  540831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/files/etc/ssl/certs/3564152.pem --> /usr/share/ca-certificates/3564152.pem (1708 bytes)
	I1027 19:37:01.439147  540831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1027 19:37:01.473618  540831 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1027 19:37:01.498806  540831 ssh_runner.go:195] Run: openssl version
	I1027 19:37:01.508549  540831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1027 19:37:01.522213  540831 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1027 19:37:01.529097  540831 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 27 18:56 /usr/share/ca-certificates/minikubeCA.pem
	I1027 19:37:01.529236  540831 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1027 19:37:01.588229  540831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1027 19:37:01.600078  540831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/356415.pem && ln -fs /usr/share/ca-certificates/356415.pem /etc/ssl/certs/356415.pem"
	I1027 19:37:01.620497  540831 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/356415.pem
	I1027 19:37:01.626951  540831 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 27 19:02 /usr/share/ca-certificates/356415.pem
	I1027 19:37:01.627018  540831 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/356415.pem
	I1027 19:37:01.687327  540831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/356415.pem /etc/ssl/certs/51391683.0"
	I1027 19:37:01.699539  540831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3564152.pem && ln -fs /usr/share/ca-certificates/3564152.pem /etc/ssl/certs/3564152.pem"
	I1027 19:37:01.712941  540831 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3564152.pem
	I1027 19:37:01.722987  540831 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 27 19:02 /usr/share/ca-certificates/3564152.pem
	I1027 19:37:01.723056  540831 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3564152.pem
	I1027 19:37:01.784522  540831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3564152.pem /etc/ssl/certs/3ec20f2e.0"
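	(Note: each symlink name above (b5213941.0, 51391683.0, 3ec20f2e.0) is the OpenSSL subject hash of the certificate plus a ".0" suffix, which is how TLS clients look up CAs in /etc/ssl/certs; the hash is reproducible:)
	  openssl x509 -hash -noout -in /usr/share/ca-certificates/3564152.pem   # prints 3ec20f2e, hence 3ec20f2e.0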
	I1027 19:37:01.799300  540831 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1027 19:37:01.809749  540831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1027 19:37:01.876415  540831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1027 19:37:01.942219  540831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1027 19:37:02.011634  540831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1027 19:37:02.084540  540831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1027 19:37:02.149946  540831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
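	(Note: "-checkend 86400" makes openssl exit non-zero when the certificate expires within the next 86400 seconds (24 hours); the exit status is the whole signal here, e.g.:)
	  openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	    && echo "valid for >24h" || echo "expiring soon"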
	I1027 19:37:02.213860  540831 kubeadm.go:400] StartCluster: {Name:pause-249140 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-249140 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 19:37:02.214030  540831 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 19:37:02.214096  540831 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 19:37:02.265023  540831 cri.go:89] found id: "a5d43c06cdefd0fd790cb0418ec7193d78de34b9aa196d7434e89fa6e058a9e2"
	I1027 19:37:02.265053  540831 cri.go:89] found id: "23504db13cbd1fd12a985de0d72ca202ac317afa3c2b2e13010bc502e000e818"
	I1027 19:37:02.265059  540831 cri.go:89] found id: "01bf760b3e7b21a98d5df158a80b1c0b879013421d7c5e47ff7903915caf96a9"
	I1027 19:37:02.265063  540831 cri.go:89] found id: "d69010095e3eba77e809b777fa9e622cf5c9528a2eab5611100fa5eed6283461"
	I1027 19:37:02.265066  540831 cri.go:89] found id: "c5b2eb2a54f889f17b3db8afb09c190f60784cb1f08c460017039d3d947aeaaf"
	I1027 19:37:02.265070  540831 cri.go:89] found id: "01712f1073762c52020031153783123eaffdca1ca62a7f9798f8eee04cb57fd9"
	I1027 19:37:02.265074  540831 cri.go:89] found id: "e4828303cd2a90f2436dec99343b7ffa44a1eb586b82513fc0a7a01f1a37cd0d"
	I1027 19:37:02.265078  540831 cri.go:89] found id: ""
	I1027 19:37:02.265156  540831 ssh_runner.go:195] Run: sudo runc list -f json
	W1027 19:37:02.283542  540831 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T19:37:02Z" level=error msg="open /run/runc: no such file or directory"
	I1027 19:37:02.283633  540831 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1027 19:37:02.296916  540831 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1027 19:37:02.296939  540831 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1027 19:37:02.296989  540831 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1027 19:37:02.307819  540831 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1027 19:37:02.308602  540831 kubeconfig.go:125] found "pause-249140" server: "https://192.168.85.2:8443"
	I1027 19:37:02.309600  540831 kapi.go:59] client config for pause-249140: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21801-352833/.minikube/profiles/pause-249140/client.crt", KeyFile:"/home/jenkins/minikube-integration/21801-352833/.minikube/profiles/pause-249140/client.key", CAFile:"/home/jenkins/minikube-integration/21801-352833/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c4e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1027 19:37:02.310322  540831 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1027 19:37:02.310346  540831 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1027 19:37:02.310354  540831 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1027 19:37:02.310360  540831 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1027 19:37:02.310380  540831 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1027 19:37:02.310968  540831 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1027 19:37:02.324729  540831 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1027 19:37:02.324774  540831 kubeadm.go:601] duration metric: took 27.828375ms to restartPrimaryControlPlane
	I1027 19:37:02.324789  540831 kubeadm.go:402] duration metric: took 110.94009ms to StartCluster
	I1027 19:37:02.324811  540831 settings.go:142] acquiring lock: {Name:mk8304c2106bf310642e0949fc0266ccb50f2f95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:37:02.324891  540831 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21801-352833/kubeconfig
	I1027 19:37:02.334349  540831 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-352833/kubeconfig: {Name:mk24cbe512a6907c874f3fb7a85390a8f9fd2b31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:37:02.334719  540831 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 19:37:02.334962  540831 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1027 19:37:02.335306  540831 config.go:182] Loaded profile config "pause-249140": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:37:02.336699  540831 out.go:179] * Verifying Kubernetes components...
	I1027 19:37:02.336770  540831 out.go:179] * Enabled addons: 
	I1027 19:37:02.791769  534866 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1027 19:37:02.791831  534866 kubeadm.go:318] [preflight] Running pre-flight checks
	I1027 19:37:02.792005  534866 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1027 19:37:02.792100  534866 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1042-gcp
	I1027 19:37:02.792240  534866 kubeadm.go:318] OS: Linux
	I1027 19:37:02.792324  534866 kubeadm.go:318] CGROUPS_CPU: enabled
	I1027 19:37:02.792418  534866 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1027 19:37:02.792496  534866 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1027 19:37:02.792561  534866 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1027 19:37:02.792633  534866 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1027 19:37:02.792719  534866 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1027 19:37:02.792791  534866 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1027 19:37:02.792853  534866 kubeadm.go:318] CGROUPS_IO: enabled
	I1027 19:37:02.793055  534866 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1027 19:37:02.793215  534866 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1027 19:37:02.793335  534866 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1027 19:37:02.793412  534866 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1027 19:37:02.795372  534866 out.go:252]   - Generating certificates and keys ...
	I1027 19:37:02.795506  534866 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1027 19:37:02.795589  534866 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1027 19:37:02.795703  534866 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1027 19:37:02.795782  534866 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1027 19:37:02.795877  534866 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1027 19:37:02.795948  534866 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1027 19:37:02.796033  534866 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1027 19:37:02.796220  534866 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [cert-expiration-368442 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1027 19:37:02.796287  534866 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1027 19:37:02.796421  534866 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [cert-expiration-368442 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1027 19:37:02.796503  534866 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1027 19:37:02.796564  534866 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1027 19:37:02.796621  534866 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1027 19:37:02.796686  534866 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1027 19:37:02.796741  534866 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1027 19:37:02.796811  534866 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1027 19:37:02.796880  534866 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1027 19:37:02.796959  534866 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1027 19:37:02.797000  534866 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1027 19:37:02.797061  534866 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1027 19:37:02.797126  534866 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1027 19:37:02.798837  534866 out.go:252]   - Booting up control plane ...
	I1027 19:37:02.798959  534866 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1027 19:37:02.799063  534866 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1027 19:37:02.799177  534866 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1027 19:37:02.799310  534866 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1027 19:37:02.799431  534866 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1027 19:37:02.799671  534866 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1027 19:37:02.799767  534866 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1027 19:37:02.799833  534866 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1027 19:37:02.800003  534866 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1027 19:37:02.800192  534866 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1027 19:37:02.800282  534866 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.002722151s
	I1027 19:37:02.800454  534866 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1027 19:37:02.800614  534866 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1027 19:37:02.800746  534866 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1027 19:37:02.800848  534866 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1027 19:37:02.800933  534866 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.771751748s
	I1027 19:37:02.801038  534866 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 3.622287173s
	I1027 19:37:02.801123  534866 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 5.004245349s
	I1027 19:37:02.801331  534866 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1027 19:37:02.801585  534866 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1027 19:37:02.801639  534866 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1027 19:37:02.801866  534866 kubeadm.go:318] [mark-control-plane] Marking the node cert-expiration-368442 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1027 19:37:02.801912  534866 kubeadm.go:318] [bootstrap-token] Using token: csf2z3.qsf9sz9ro4wba57t
	I1027 19:37:02.803734  534866 out.go:252]   - Configuring RBAC rules ...
	I1027 19:37:02.803964  534866 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1027 19:37:02.804086  534866 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1027 19:37:02.804312  534866 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1027 19:37:02.804495  534866 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1027 19:37:02.804693  534866 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1027 19:37:02.804764  534866 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1027 19:37:02.804914  534866 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1027 19:37:02.804958  534866 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1027 19:37:02.805011  534866 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1027 19:37:02.805015  534866 kubeadm.go:318] 
	I1027 19:37:02.805107  534866 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1027 19:37:02.805161  534866 kubeadm.go:318] 
	I1027 19:37:02.805308  534866 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1027 19:37:02.805313  534866 kubeadm.go:318] 
	I1027 19:37:02.805365  534866 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1027 19:37:02.805466  534866 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1027 19:37:02.805539  534866 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1027 19:37:02.805548  534866 kubeadm.go:318] 
	I1027 19:37:02.805615  534866 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1027 19:37:02.805625  534866 kubeadm.go:318] 
	I1027 19:37:02.805691  534866 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1027 19:37:02.805695  534866 kubeadm.go:318] 
	I1027 19:37:02.805766  534866 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1027 19:37:02.805885  534866 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1027 19:37:02.805981  534866 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1027 19:37:02.805991  534866 kubeadm.go:318] 
	I1027 19:37:02.806095  534866 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1027 19:37:02.806233  534866 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1027 19:37:02.806243  534866 kubeadm.go:318] 
	I1027 19:37:02.806456  534866 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token csf2z3.qsf9sz9ro4wba57t \
	I1027 19:37:02.806585  534866 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:ab29e81999671591f366788f5ae9ffb132789ebc71f7c0efdaecd38575a5ab6a \
	I1027 19:37:02.806611  534866 kubeadm.go:318] 	--control-plane 
	I1027 19:37:02.806616  534866 kubeadm.go:318] 
	I1027 19:37:02.806763  534866 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1027 19:37:02.806769  534866 kubeadm.go:318] 
	I1027 19:37:02.806882  534866 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token csf2z3.qsf9sz9ro4wba57t \
	I1027 19:37:02.807056  534866 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:ab29e81999671591f366788f5ae9ffb132789ebc71f7c0efdaecd38575a5ab6a 
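	(Note: the --discovery-token-ca-cert-hash value in the join commands above is a SHA-256 over the cluster CA's public key; the standard kubeadm recipe to recompute it, assuming the default RSA CA and minikube's cert dir, is:)
	  openssl x509 -pubkey -noout -in /var/lib/minikube/certs/ca.crt \
	    | openssl rsa -pubin -outform der 2>/dev/null \
	    | openssl dgst -sha256 -hex | sed 's/^.* //'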
	I1027 19:37:02.807065  534866 cni.go:84] Creating CNI manager for ""
	I1027 19:37:02.807091  534866 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 19:37:02.808966  534866 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1027 19:36:59.240397  536851 out.go:252]   - Booting up control plane ...
	I1027 19:36:59.240517  536851 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1027 19:36:59.240647  536851 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1027 19:36:59.241876  536851 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1027 19:36:59.262768  536851 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1027 19:36:59.262896  536851 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1027 19:36:59.273399  536851 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1027 19:36:59.273752  536851 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1027 19:36:59.273823  536851 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1027 19:36:59.412468  536851 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1027 19:36:59.412612  536851 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1027 19:37:00.416106  536851 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.00214982s
	I1027 19:37:00.421832  536851 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1027 19:37:00.421940  536851 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8555/livez
	I1027 19:37:00.422047  536851 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1027 19:37:00.422152  536851 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1027 19:37:02.156899  536851 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.735219419s
	I1027 19:37:02.810463  534866 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1027 19:37:02.818662  534866 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1027 19:37:02.818676  534866 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1027 19:37:02.841486  534866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1027 19:37:03.178316  534866 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1027 19:37:03.178424  534866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:37:03.178490  534866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes cert-expiration-368442 minikube.k8s.io/updated_at=2025_10_27T19_37_03_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6a09c31caecd3126cc8337ca95e52380a56f5a0f minikube.k8s.io/name=cert-expiration-368442 minikube.k8s.io/primary=true
	I1027 19:37:03.197474  534866 ops.go:34] apiserver oom_adj: -16
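
The -16 reported here is read from the legacy /proc/<pid>/oom_adj file (the cat command a few lines up); modern kernels expose the same knob as oom_score_adj on a -1000..1000 scale, and a strongly negative value keeps the OOM killer away from the apiserver. A quick check of both files, assuming a single kube-apiserver process on the node:

	# compare the legacy and current OOM-score interfaces for the apiserver
	pid=$(pgrep -xn kube-apiserver)
	cat /proc/$pid/oom_adj /proc/$pid/oom_score_adj
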
	I1027 19:37:03.347999  534866 kubeadm.go:1113] duration metric: took 169.651748ms to wait for elevateKubeSystemPrivileges
	I1027 19:37:03.348026  534866 kubeadm.go:402] duration metric: took 11.774226305s to StartCluster
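
elevateKubeSystemPrivileges refers to the clusterrolebinding created by the kubectl command above, which binds cluster-admin to kube-system's default service account. A sketch to confirm it from the host, assuming kubectl already points at this cluster:

	# the binding minikube just created should list cluster-admin -> kube-system/default
	kubectl get clusterrolebinding minikube-rbac -o wide
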
	I1027 19:37:03.348047  534866 settings.go:142] acquiring lock: {Name:mk8304c2106bf310642e0949fc0266ccb50f2f95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:37:03.348157  534866 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21801-352833/kubeconfig
	I1027 19:37:03.349661  534866 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-352833/kubeconfig: {Name:mk24cbe512a6907c874f3fb7a85390a8f9fd2b31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:37:03.349941  534866 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 19:37:03.350115  534866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1027 19:37:03.350150  534866 config.go:182] Loaded profile config "cert-expiration-368442": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:37:03.350204  534866 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1027 19:37:03.350294  534866 addons.go:69] Setting storage-provisioner=true in profile "cert-expiration-368442"
	I1027 19:37:03.350328  534866 addons.go:238] Setting addon storage-provisioner=true in "cert-expiration-368442"
	I1027 19:37:03.350366  534866 host.go:66] Checking if "cert-expiration-368442" exists ...
	I1027 19:37:03.350452  534866 addons.go:69] Setting default-storageclass=true in profile "cert-expiration-368442"
	I1027 19:37:03.350476  534866 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "cert-expiration-368442"
	I1027 19:37:03.350887  534866 cli_runner.go:164] Run: docker container inspect cert-expiration-368442 --format={{.State.Status}}
	I1027 19:37:03.351662  534866 cli_runner.go:164] Run: docker container inspect cert-expiration-368442 --format={{.State.Status}}
	I1027 19:37:03.351969  534866 out.go:179] * Verifying Kubernetes components...
	I1027 19:37:03.353648  534866 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 19:37:03.396322  534866 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 19:37:03.397086  534866 addons.go:238] Setting addon default-storageclass=true in "cert-expiration-368442"
	I1027 19:37:03.397119  534866 host.go:66] Checking if "cert-expiration-368442" exists ...
	I1027 19:37:03.397641  534866 cli_runner.go:164] Run: docker container inspect cert-expiration-368442 --format={{.State.Status}}
	I1027 19:37:03.398290  534866 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 19:37:03.398312  534866 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1027 19:37:03.398371  534866 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-368442
	I1027 19:37:03.437812  534866 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1027 19:37:03.437829  534866 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1027 19:37:03.437901  534866 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-368442
	I1027 19:37:03.447441  534866 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33365 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/cert-expiration-368442/id_rsa Username:docker}
	I1027 19:37:03.475523  534866 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33365 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/cert-expiration-368442/id_rsa Username:docker}
	I1027 19:37:03.523646  534866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1027 19:37:03.576292  534866 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 19:37:03.624996  534866 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 19:37:03.643500  534866 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1027 19:37:03.780051  534866 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
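
The injected host record comes from the sed pipeline a few lines up, which splices a hosts stanza into the coredns ConfigMap before replacing it. A sketch to inspect the result (the jsonpath read-back is an assumption; the stanza contents are taken from the sed script in the log):

	# dump the Corefile after minikube's edit
	kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
	# expect a block along these lines:
	#   hosts {
	#      192.168.76.1 host.minikube.internal
	#      fallthrough
	#   }
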
	I1027 19:37:03.782822  534866 api_server.go:52] waiting for apiserver process to appear ...
	I1027 19:37:03.782880  534866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 19:37:04.054971  534866 api_server.go:72] duration metric: took 704.995718ms to wait for apiserver process to appear ...
	I1027 19:37:04.054993  534866 api_server.go:88] waiting for apiserver healthz status ...
	I1027 19:37:04.055026  534866 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1027 19:37:04.061346  534866 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1027 19:37:04.062504  534866 api_server.go:141] control plane version: v1.34.1
	I1027 19:37:04.062524  534866 api_server.go:131] duration metric: took 7.524705ms to wait for apiserver health ...
	I1027 19:37:04.062541  534866 system_pods.go:43] waiting for kube-system pods to appear ...
	I1027 19:37:04.065657  534866 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
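
To double-check which addons actually came up, the profile can be queried with the same binary the tests use; a sketch:

	out/minikube-linux-amd64 -p cert-expiration-368442 addons list
	# storage-provisioner and default-storageclass should report as enabled
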
	I1027 19:37:02.338184  540831 addons.go:514] duration metric: took 3.269587ms for enable addons: enabled=[]
	I1027 19:37:02.338283  540831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 19:37:02.531543  540831 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 19:37:02.551079  540831 node_ready.go:35] waiting up to 6m0s for node "pause-249140" to be "Ready" ...
	I1027 19:37:02.562122  540831 node_ready.go:49] node "pause-249140" is "Ready"
	I1027 19:37:02.562170  540831 node_ready.go:38] duration metric: took 11.008162ms for node "pause-249140" to be "Ready" ...
	I1027 19:37:02.562187  540831 api_server.go:52] waiting for apiserver process to appear ...
	I1027 19:37:02.562245  540831 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 19:37:02.582645  540831 api_server.go:72] duration metric: took 247.857554ms to wait for apiserver process to appear ...
	I1027 19:37:02.582694  540831 api_server.go:88] waiting for apiserver healthz status ...
	I1027 19:37:02.582721  540831 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1027 19:37:02.589249  540831 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1027 19:37:02.590416  540831 api_server.go:141] control plane version: v1.34.1
	I1027 19:37:02.590449  540831 api_server.go:131] duration metric: took 7.746088ms to wait for apiserver health ...
	I1027 19:37:02.590461  540831 system_pods.go:43] waiting for kube-system pods to appear ...
	I1027 19:37:02.594394  540831 system_pods.go:59] 7 kube-system pods found
	I1027 19:37:02.594438  540831 system_pods.go:61] "coredns-66bc5c9577-zw67w" [0cb0e471-ebc9-46c0-b1fa-01239c268b53] Running
	I1027 19:37:02.594447  540831 system_pods.go:61] "etcd-pause-249140" [115e31fc-5812-4d70-8afa-93829d9571b8] Running
	I1027 19:37:02.594454  540831 system_pods.go:61] "kindnet-8df8g" [c60385cd-2c72-418e-b71f-de147e042619] Running
	I1027 19:37:02.594460  540831 system_pods.go:61] "kube-apiserver-pause-249140" [3fb5fb24-e2e4-4b84-a8c0-5a3132562289] Running
	I1027 19:37:02.594465  540831 system_pods.go:61] "kube-controller-manager-pause-249140" [e7aadb55-e49a-42d0-b3d8-46f7588c7dc2] Running
	I1027 19:37:02.594471  540831 system_pods.go:61] "kube-proxy-brj24" [921a42f0-4e87-4a36-8436-4716703e03d7] Running
	I1027 19:37:02.594477  540831 system_pods.go:61] "kube-scheduler-pause-249140" [e2971147-d4ea-47b1-abc1-30c84376bf08] Running
	I1027 19:37:02.594486  540831 system_pods.go:74] duration metric: took 4.016794ms to wait for pod list to return data ...
	I1027 19:37:02.594517  540831 default_sa.go:34] waiting for default service account to be created ...
	I1027 19:37:02.597015  540831 default_sa.go:45] found service account: "default"
	I1027 19:37:02.597050  540831 default_sa.go:55] duration metric: took 2.51699ms for default service account to be created ...
	I1027 19:37:02.597064  540831 system_pods.go:116] waiting for k8s-apps to be running ...
	I1027 19:37:02.600190  540831 system_pods.go:86] 7 kube-system pods found
	I1027 19:37:02.600222  540831 system_pods.go:89] "coredns-66bc5c9577-zw67w" [0cb0e471-ebc9-46c0-b1fa-01239c268b53] Running
	I1027 19:37:02.600230  540831 system_pods.go:89] "etcd-pause-249140" [115e31fc-5812-4d70-8afa-93829d9571b8] Running
	I1027 19:37:02.600236  540831 system_pods.go:89] "kindnet-8df8g" [c60385cd-2c72-418e-b71f-de147e042619] Running
	I1027 19:37:02.600242  540831 system_pods.go:89] "kube-apiserver-pause-249140" [3fb5fb24-e2e4-4b84-a8c0-5a3132562289] Running
	I1027 19:37:02.600248  540831 system_pods.go:89] "kube-controller-manager-pause-249140" [e7aadb55-e49a-42d0-b3d8-46f7588c7dc2] Running
	I1027 19:37:02.600254  540831 system_pods.go:89] "kube-proxy-brj24" [921a42f0-4e87-4a36-8436-4716703e03d7] Running
	I1027 19:37:02.600259  540831 system_pods.go:89] "kube-scheduler-pause-249140" [e2971147-d4ea-47b1-abc1-30c84376bf08] Running
	I1027 19:37:02.600270  540831 system_pods.go:126] duration metric: took 3.197331ms to wait for k8s-apps to be running ...
	I1027 19:37:02.600284  540831 system_svc.go:44] waiting for kubelet service to be running ....
	I1027 19:37:02.600343  540831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 19:37:02.619675  540831 system_svc.go:56] duration metric: took 19.377729ms WaitForService to wait for kubelet
	I1027 19:37:02.619712  540831 kubeadm.go:586] duration metric: took 284.936146ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 19:37:02.619736  540831 node_conditions.go:102] verifying NodePressure condition ...
	I1027 19:37:02.625203  540831 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1027 19:37:02.625239  540831 node_conditions.go:123] node cpu capacity is 8
	I1027 19:37:02.625255  540831 node_conditions.go:105] duration metric: took 5.513759ms to run NodePressure ...
	I1027 19:37:02.625270  540831 start.go:241] waiting for startup goroutines ...
	I1027 19:37:02.625279  540831 start.go:246] waiting for cluster config update ...
	I1027 19:37:02.625287  540831 start.go:255] writing updated cluster config ...
	I1027 19:37:02.625693  540831 ssh_runner.go:195] Run: rm -f paused
	I1027 19:37:02.633087  540831 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 19:37:02.633779  540831 kapi.go:59] client config for pause-249140: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21801-352833/.minikube/profiles/pause-249140/client.crt", KeyFile:"/home/jenkins/minikube-integration/21801-352833/.minikube/profiles/pause-249140/client.key", CAFile:"/home/jenkins/minikube-integration/21801-352833/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c4e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1027 19:37:02.638577  540831 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-zw67w" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:37:02.643940  540831 pod_ready.go:94] pod "coredns-66bc5c9577-zw67w" is "Ready"
	I1027 19:37:02.643969  540831 pod_ready.go:86] duration metric: took 5.363692ms for pod "coredns-66bc5c9577-zw67w" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:37:02.646889  540831 pod_ready.go:83] waiting for pod "etcd-pause-249140" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:37:02.653031  540831 pod_ready.go:94] pod "etcd-pause-249140" is "Ready"
	I1027 19:37:02.653061  540831 pod_ready.go:86] duration metric: took 6.144214ms for pod "etcd-pause-249140" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:37:02.655720  540831 pod_ready.go:83] waiting for pod "kube-apiserver-pause-249140" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:37:02.661375  540831 pod_ready.go:94] pod "kube-apiserver-pause-249140" is "Ready"
	I1027 19:37:02.661413  540831 pod_ready.go:86] duration metric: took 5.659431ms for pod "kube-apiserver-pause-249140" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:37:02.665145  540831 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-249140" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:37:03.038985  540831 pod_ready.go:94] pod "kube-controller-manager-pause-249140" is "Ready"
	I1027 19:37:03.039022  540831 pod_ready.go:86] duration metric: took 373.851049ms for pod "kube-controller-manager-pause-249140" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:37:03.239704  540831 pod_ready.go:83] waiting for pod "kube-proxy-brj24" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:37:03.639995  540831 pod_ready.go:94] pod "kube-proxy-brj24" is "Ready"
	I1027 19:37:03.640094  540831 pod_ready.go:86] duration metric: took 400.294106ms for pod "kube-proxy-brj24" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:37:03.838037  540831 pod_ready.go:83] waiting for pod "kube-scheduler-pause-249140" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:37:04.238305  540831 pod_ready.go:94] pod "kube-scheduler-pause-249140" is "Ready"
	I1027 19:37:04.238345  540831 pod_ready.go:86] duration metric: took 400.278768ms for pod "kube-scheduler-pause-249140" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:37:04.238362  540831 pod_ready.go:40] duration metric: took 1.605228732s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 19:37:04.304653  540831 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1027 19:37:04.307108  540831 out.go:179] * Done! kubectl is now configured to use "pause-249140" cluster and "default" namespace by default
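
Once the Done! line prints, the kubeconfig context has been switched to the new profile. A quick sanity check from the host (kubectl 1.34.1, per the version line above):

	kubectl config current-context    # expect: pause-249140
	kubectl -n kube-system get pods   # the seven pods listed earlier should be Running
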
	I1027 19:37:04.067248  534866 addons.go:514] duration metric: took 717.03244ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1027 19:37:04.068601  534866 system_pods.go:59] 5 kube-system pods found
	I1027 19:37:04.068629  534866 system_pods.go:61] "etcd-cert-expiration-368442" [6db4177d-1cb1-44fc-96b3-1d498aa77503] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1027 19:37:04.068640  534866 system_pods.go:61] "kube-apiserver-cert-expiration-368442" [c62126bd-e920-44f9-a512-789de92f95af] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1027 19:37:04.068650  534866 system_pods.go:61] "kube-controller-manager-cert-expiration-368442" [cd1242be-b425-42c3-b5a5-4fe1b8469aaf] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1027 19:37:04.068658  534866 system_pods.go:61] "kube-scheduler-cert-expiration-368442" [08daea0d-d7c4-408c-b114-679e832fd107] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1027 19:37:04.068664  534866 system_pods.go:61] "storage-provisioner" [4bdd52aa-0af6-40dd-a74e-e4a6ea24801f] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1027 19:37:04.068673  534866 system_pods.go:74] duration metric: took 6.097178ms to wait for pod list to return data ...
	I1027 19:37:04.068687  534866 kubeadm.go:586] duration metric: took 718.716671ms to wait for: map[apiserver:true system_pods:true]
	I1027 19:37:04.068700  534866 node_conditions.go:102] verifying NodePressure condition ...
	I1027 19:37:04.072470  534866 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1027 19:37:04.072492  534866 node_conditions.go:123] node cpu capacity is 8
	I1027 19:37:04.072526  534866 node_conditions.go:105] duration metric: took 3.821008ms to run NodePressure ...
	I1027 19:37:04.072542  534866 start.go:241] waiting for startup goroutines ...
	I1027 19:37:04.286015  534866 kapi.go:214] "coredns" deployment in "kube-system" namespace and "cert-expiration-368442" context rescaled to 1 replicas
	I1027 19:37:04.286048  534866 start.go:246] waiting for cluster config update ...
	I1027 19:37:04.286063  534866 start.go:255] writing updated cluster config ...
	I1027 19:37:04.286418  534866 ssh_runner.go:195] Run: rm -f paused
	I1027 19:37:04.362657  534866 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1027 19:37:04.364484  534866 out.go:179] * Done! kubectl is now configured to use "cert-expiration-368442" cluster and "default" namespace by default
	I1027 19:37:03.319408  536851 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.896641994s
	I1027 19:37:05.424504  536851 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 5.002976559s
	I1027 19:37:05.437300  536851 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1027 19:37:05.450198  536851 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1027 19:37:05.462679  536851 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1027 19:37:05.462995  536851 kubeadm.go:318] [mark-control-plane] Marking the node cert-options-638768 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1027 19:37:05.474196  536851 kubeadm.go:318] [bootstrap-token] Using token: pwnbmb.s8kqfw038b1ym7jv
	I1027 19:37:00.728193  541957 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 ...
	I1027 19:37:00.781720  541957 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.3/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4?checksum=md5:6681d82b7b719ef3324102b709ec62eb -> /home/jenkins/minikube-integration/21801-352833/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1027 19:37:03.422262  541957 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 ...
	I1027 19:37:03.422554  541957 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/21801-352833/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 ...
	I1027 19:37:04.510087  541957 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1027 19:37:04.510363  541957 profile.go:148] Saving config to /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/missing-upgrade-345161/config.json ...
	I1027 19:37:04.510406  541957 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/missing-upgrade-345161/config.json: {Name:mkea8a6de8705a0768ede63f0a3af506fb5e41bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:37:04.792874  541957 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 as a tarball
	I1027 19:37:04.792893  541957 cache.go:162] Loading gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 from local cache
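
The preload download above carries its expected digest in the checksum= query parameter, so the cached tarball can be re-verified by hand; a sketch using the paths from the log:

	cd /home/jenkins/minikube-integration/21801-352833/.minikube/cache/preloaded-tarball
	md5sum preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	# expect: 6681d82b7b719ef3324102b709ec62eb
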
	I1027 19:37:05.476097  536851 out.go:252]   - Configuring RBAC rules ...
	I1027 19:37:05.476326  536851 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1027 19:37:05.480513  536851 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1027 19:37:05.488302  536851 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1027 19:37:05.492390  536851 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1027 19:37:05.495924  536851 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1027 19:37:05.499576  536851 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1027 19:37:05.831700  536851 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1027 19:37:06.251283  536851 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1027 19:37:06.832203  536851 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1027 19:37:06.833124  536851 kubeadm.go:318] 
	I1027 19:37:06.833230  536851 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1027 19:37:06.833235  536851 kubeadm.go:318] 
	I1027 19:37:06.833334  536851 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1027 19:37:06.833367  536851 kubeadm.go:318] 
	I1027 19:37:06.833406  536851 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1027 19:37:06.833495  536851 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1027 19:37:06.833592  536851 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1027 19:37:06.833603  536851 kubeadm.go:318] 
	I1027 19:37:06.833674  536851 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1027 19:37:06.833678  536851 kubeadm.go:318] 
	I1027 19:37:06.833747  536851 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1027 19:37:06.833752  536851 kubeadm.go:318] 
	I1027 19:37:06.833822  536851 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1027 19:37:06.833938  536851 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1027 19:37:06.834028  536851 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1027 19:37:06.834033  536851 kubeadm.go:318] 
	I1027 19:37:06.834170  536851 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1027 19:37:06.834276  536851 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1027 19:37:06.834280  536851 kubeadm.go:318] 
	I1027 19:37:06.834358  536851 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8555 --token pwnbmb.s8kqfw038b1ym7jv \
	I1027 19:37:06.834443  536851 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:ab29e81999671591f366788f5ae9ffb132789ebc71f7c0efdaecd38575a5ab6a \
	I1027 19:37:06.834470  536851 kubeadm.go:318] 	--control-plane 
	I1027 19:37:06.834473  536851 kubeadm.go:318] 
	I1027 19:37:06.834548  536851 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1027 19:37:06.834551  536851 kubeadm.go:318] 
	I1027 19:37:06.834618  536851 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8555 --token pwnbmb.s8kqfw038b1ym7jv \
	I1027 19:37:06.834703  536851 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:ab29e81999671591f366788f5ae9ffb132789ebc71f7c0efdaecd38575a5ab6a 
	I1027 19:37:06.838531  536851 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1042-gcp\n", err: exit status 1
	I1027 19:37:06.838667  536851 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1027 19:37:06.838873  536851 cni.go:84] Creating CNI manager for ""
	I1027 19:37:06.838883  536851 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 19:37:06.840916  536851 out.go:179] * Configuring CNI (Container Networking Interface) ...
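
The --discovery-token-ca-cert-hash in the join commands above is a SHA-256 over the cluster CA's public key. Per the kubeadm documentation it can be recomputed on the control plane; a sketch assuming the stock kubeadm cert path (minikube keeps its copy under /var/lib/minikube/certs instead):

	openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'
	# should match ab29e81999671591f366788f5ae9ffb132789ebc71f7c0efdaecd38575a5ab6a
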
	
	
	==> CRI-O <==
	Oct 27 19:37:00 pause-249140 crio[2158]: time="2025-10-27T19:37:00.471800083Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Oct 27 19:37:00 pause-249140 crio[2158]: time="2025-10-27T19:37:00.472866104Z" level=info msg="Conmon does support the --sync option"
	Oct 27 19:37:00 pause-249140 crio[2158]: time="2025-10-27T19:37:00.472893254Z" level=info msg="Conmon does support the --log-global-size-max option"
	Oct 27 19:37:00 pause-249140 crio[2158]: time="2025-10-27T19:37:00.472916005Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Oct 27 19:37:00 pause-249140 crio[2158]: time="2025-10-27T19:37:00.473793653Z" level=info msg="Conmon does support the --sync option"
	Oct 27 19:37:00 pause-249140 crio[2158]: time="2025-10-27T19:37:00.47381734Z" level=info msg="Conmon does support the --log-global-size-max option"
	Oct 27 19:37:00 pause-249140 crio[2158]: time="2025-10-27T19:37:00.478584268Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 19:37:00 pause-249140 crio[2158]: time="2025-10-27T19:37:00.478625319Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 27 19:37:00 pause-249140 crio[2158]: time="2025-10-27T19:37:00.479439918Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/
cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"
/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Oct 27 19:37:00 pause-249140 crio[2158]: time="2025-10-27T19:37:00.48004216Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Oct 27 19:37:00 pause-249140 crio[2158]: time="2025-10-27T19:37:00.480412608Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Oct 27 19:37:00 pause-249140 crio[2158]: time="2025-10-27T19:37:00.488226742Z" level=info msg="No kernel support for IPv6: could not find nftables binary: exec: \"nft\": executable file not found in $PATH"
	Oct 27 19:37:00 pause-249140 crio[2158]: time="2025-10-27T19:37:00.543951994Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-zw67w Namespace:kube-system ID:4720fc3e929dbb3031684a17cca7299c28e24c6e3a8b181e2e9f6a6233a24898 UID:0cb0e471-ebc9-46c0-b1fa-01239c268b53 NetNS:/var/run/netns/cd3fa7f4-947e-4ff8-9d5f-d09f18d4d27f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000508260}] Aliases:map[]}"
	Oct 27 19:37:00 pause-249140 crio[2158]: time="2025-10-27T19:37:00.544222707Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-zw67w for CNI network kindnet (type=ptp)"
	Oct 27 19:37:00 pause-249140 crio[2158]: time="2025-10-27T19:37:00.54486064Z" level=info msg="Registered SIGHUP reload watcher"
	Oct 27 19:37:00 pause-249140 crio[2158]: time="2025-10-27T19:37:00.544895918Z" level=info msg="Starting seccomp notifier watcher"
	Oct 27 19:37:00 pause-249140 crio[2158]: time="2025-10-27T19:37:00.544969497Z" level=info msg="Create NRI interface"
	Oct 27 19:37:00 pause-249140 crio[2158]: time="2025-10-27T19:37:00.545086991Z" level=info msg="built-in NRI default validator is disabled"
	Oct 27 19:37:00 pause-249140 crio[2158]: time="2025-10-27T19:37:00.545105568Z" level=info msg="runtime interface created"
	Oct 27 19:37:00 pause-249140 crio[2158]: time="2025-10-27T19:37:00.545119087Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Oct 27 19:37:00 pause-249140 crio[2158]: time="2025-10-27T19:37:00.545126958Z" level=info msg="runtime interface starting up..."
	Oct 27 19:37:00 pause-249140 crio[2158]: time="2025-10-27T19:37:00.545157387Z" level=info msg="starting plugins..."
	Oct 27 19:37:00 pause-249140 crio[2158]: time="2025-10-27T19:37:00.545176757Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Oct 27 19:37:00 pause-249140 crio[2158]: time="2025-10-27T19:37:00.545664442Z" level=info msg="No systemd watchdog enabled"
	Oct 27 19:37:00 pause-249140 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
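
With crio.service up, the runtime can be probed both through systemd and through the CRI socket advertised in the configuration dump above (listen = "/var/run/crio/crio.sock"); a sketch, run inside the node:

	systemctl is-active crio      # expect: active
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock info
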
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	a5d43c06cdefd       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   13 seconds ago      Running             coredns                   0                   4720fc3e929db       coredns-66bc5c9577-zw67w               kube-system
	23504db13cbd1       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   24 seconds ago      Running             kube-proxy                0                   e067b230cbfc4       kube-proxy-brj24                       kube-system
	01bf760b3e7b2       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   24 seconds ago      Running             kindnet-cni               0                   f9f82ba4d47d0       kindnet-8df8g                          kube-system
	d69010095e3eb       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   37 seconds ago      Running             kube-apiserver            0                   a30dd977a26cf       kube-apiserver-pause-249140            kube-system
	c5b2eb2a54f88       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   37 seconds ago      Running             kube-controller-manager   0                   db877865abb6e       kube-controller-manager-pause-249140   kube-system
	01712f1073762       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   37 seconds ago      Running             etcd                      0                   47807b3dfccd6       etcd-pause-249140                      kube-system
	e4828303cd2a9       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   37 seconds ago      Running             kube-scheduler            0                   26f4821c6f0ed       kube-scheduler-pause-249140            kube-system
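
The table above is the standard CRI container listing; inside the node it corresponds to:

	sudo crictl ps -a
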
	
	
	==> coredns [a5d43c06cdefd0fd790cb0418ec7193d78de34b9aa196d7434e89fa6e058a9e2] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:39177 - 32461 "HINFO IN 1021597915146797741.8892895048223440577. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.475172268s
	
	
	==> describe nodes <==
	Name:               pause-249140
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-249140
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a09c31caecd3126cc8337ca95e52380a56f5a0f
	                    minikube.k8s.io/name=pause-249140
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_27T19_36_37_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Oct 2025 19:36:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-249140
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Oct 2025 19:36:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Oct 2025 19:36:53 +0000   Mon, 27 Oct 2025 19:36:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Oct 2025 19:36:53 +0000   Mon, 27 Oct 2025 19:36:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Oct 2025 19:36:53 +0000   Mon, 27 Oct 2025 19:36:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Oct 2025 19:36:53 +0000   Mon, 27 Oct 2025 19:36:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-249140
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                f8695a9d-745c-4821-92ff-cf4a719b1310
	  Boot ID:                    811bd29c-e64e-4acc-9427-bab1f7caed93
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-zw67w                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-pause-249140                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         31s
	  kube-system                 kindnet-8df8g                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-pause-249140             250m (3%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-pause-249140    200m (2%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-brj24                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-pause-249140             100m (1%)     0 (0%)      0 (0%)           0 (0%)         31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 24s                kube-proxy       
	  Normal  NodeHasSufficientMemory  37s (x8 over 37s)  kubelet          Node pause-249140 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    37s (x8 over 37s)  kubelet          Node pause-249140 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     37s (x8 over 37s)  kubelet          Node pause-249140 status is now: NodeHasSufficientPID
	  Normal  Starting                 31s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  31s                kubelet          Node pause-249140 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    31s                kubelet          Node pause-249140 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     31s                kubelet          Node pause-249140 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           26s                node-controller  Node pause-249140 event: Registered Node pause-249140 in Controller
	  Normal  NodeReady                14s                kubelet          Node pause-249140 status is now: NodeReady
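
This section is the usual node inspection output, reproducible against the cluster with:

	kubectl describe node pause-249140
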
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5a 23 52 43 9a ba 08 06
	[  +0.000398] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 12 50 95 0e df 53 08 06
	[Oct27 18:59] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +1.017295] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +1.023893] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +1.023882] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +1.023891] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +1.023851] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +2.047849] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +4.031592] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +8.319143] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[ +16.382183] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[Oct27 19:00] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
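
The repeated "martian source" entries are the kernel flagging packets whose source address should not appear on the receiving interface, here 127.0.0.1 arriving on eth0; in nested container networking this is usually benign noise. Whether such packets are logged at all is governed by sysctls; a sketch to inspect them:

	sysctl net.ipv4.conf.all.log_martians net.ipv4.conf.all.rp_filter
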
	
	
	==> etcd [01712f1073762c52020031153783123eaffdca1ca62a7f9798f8eee04cb57fd9] <==
	{"level":"info","ts":"2025-10-27T19:36:42.332181Z","caller":"traceutil/trace.go:172","msg":"trace[106228640] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/kindnet; range_end:; response_count:1; response_revision:348; }","duration":"175.806535ms","start":"2025-10-27T19:36:42.156362Z","end":"2025-10-27T19:36:42.332169Z","steps":["trace[106228640] 'agreement among raft nodes before linearized reading'  (duration: 175.591423ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-27T19:36:42.332248Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"240.014754ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/endpointslice-controller\" limit:1 ","response":"range_response_count:1 size:214"}
	{"level":"info","ts":"2025-10-27T19:36:42.332293Z","caller":"traceutil/trace.go:172","msg":"trace[54705406] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/endpointslice-controller; range_end:; response_count:1; response_revision:348; }","duration":"240.069912ms","start":"2025-10-27T19:36:42.092212Z","end":"2025-10-27T19:36:42.332282Z","steps":["trace[54705406] 'agreement among raft nodes before linearized reading'  (duration: 239.912915ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T19:36:42.332446Z","caller":"traceutil/trace.go:172","msg":"trace[634985258] transaction","detail":"{read_only:false; response_revision:352; number_of_response:1; }","duration":"272.873777ms","start":"2025-10-27T19:36:42.059560Z","end":"2025-10-27T19:36:42.332434Z","steps":["trace[634985258] 'process raft request'  (duration: 272.817808ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T19:36:42.332516Z","caller":"traceutil/trace.go:172","msg":"trace[1197883062] transaction","detail":"{read_only:false; response_revision:349; number_of_response:1; }","duration":"273.645267ms","start":"2025-10-27T19:36:42.058858Z","end":"2025-10-27T19:36:42.332503Z","steps":["trace[1197883062] 'process raft request'  (duration: 273.307157ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T19:36:42.332596Z","caller":"traceutil/trace.go:172","msg":"trace[1198813220] transaction","detail":"{read_only:false; response_revision:350; number_of_response:1; }","duration":"273.153879ms","start":"2025-10-27T19:36:42.059431Z","end":"2025-10-27T19:36:42.332585Z","steps":["trace[1198813220] 'process raft request'  (duration: 272.827679ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-27T19:36:42.332667Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"189.9559ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/clusterrole-aggregation-controller\" limit:1 ","response":"range_response_count:1 size:234"}
	{"level":"info","ts":"2025-10-27T19:36:42.332697Z","caller":"traceutil/trace.go:172","msg":"trace[546690537] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/clusterrole-aggregation-controller; range_end:; response_count:1; response_revision:351; }","duration":"189.990163ms","start":"2025-10-27T19:36:42.142696Z","end":"2025-10-27T19:36:42.332686Z","steps":["trace[546690537] 'agreement among raft nodes before linearized reading'  (duration: 189.678181ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T19:36:42.332843Z","caller":"traceutil/trace.go:172","msg":"trace[935065149] transaction","detail":"{read_only:false; response_revision:353; number_of_response:1; }","duration":"272.430122ms","start":"2025-10-27T19:36:42.060402Z","end":"2025-10-27T19:36:42.332832Z","steps":["trace[935065149] 'process raft request'  (duration: 272.010212ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T19:36:42.332889Z","caller":"traceutil/trace.go:172","msg":"trace[1698041888] transaction","detail":"{read_only:false; response_revision:351; number_of_response:1; }","duration":"273.346948ms","start":"2025-10-27T19:36:42.059532Z","end":"2025-10-27T19:36:42.332879Z","steps":["trace[1698041888] 'process raft request'  (duration: 272.778096ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-27T19:36:42.333084Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"273.007923ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets/kube-system/kindnet\" limit:1 ","response":"range_response_count:1 size:4681"}
	{"level":"info","ts":"2025-10-27T19:36:42.333832Z","caller":"traceutil/trace.go:172","msg":"trace[1847432613] range","detail":"{range_begin:/registry/daemonsets/kube-system/kindnet; range_end:; response_count:1; response_revision:353; }","duration":"273.758168ms","start":"2025-10-27T19:36:42.060061Z","end":"2025-10-27T19:36:42.333819Z","steps":["trace[1847432613] 'agreement among raft nodes before linearized reading'  (duration: 272.944985ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T19:36:47.322837Z","caller":"traceutil/trace.go:172","msg":"trace[1298614446] transaction","detail":"{read_only:false; response_revision:414; number_of_response:1; }","duration":"136.562841ms","start":"2025-10-27T19:36:47.186247Z","end":"2025-10-27T19:36:47.322810Z","steps":["trace[1298614446] 'process raft request'  (duration: 92.920011ms)","trace[1298614446] 'compare'  (duration: 43.506639ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-27T19:36:47.592030Z","caller":"traceutil/trace.go:172","msg":"trace[1418362237] linearizableReadLoop","detail":"{readStateIndex:429; appliedIndex:429; }","duration":"212.342136ms","start":"2025-10-27T19:36:47.379660Z","end":"2025-10-27T19:36:47.592002Z","steps":["trace[1418362237] 'read index received'  (duration: 212.333094ms)","trace[1418362237] 'applied index is now lower than readState.Index'  (duration: 7.408µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-27T19:36:47.723522Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"343.834463ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-27T19:36:47.723591Z","caller":"traceutil/trace.go:172","msg":"trace[1991769037] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:414; }","duration":"343.922984ms","start":"2025-10-27T19:36:47.379653Z","end":"2025-10-27T19:36:47.723576Z","steps":["trace[1991769037] 'agreement among raft nodes before linearized reading'  (duration: 212.441254ms)","trace[1991769037] 'range keys from in-memory index tree'  (duration: 131.360383ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-27T19:36:47.724005Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"131.576021ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722596663596494565 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-scheduler-pause-249140\" mod_revision:414 > success:<request_put:<key:\"/registry/pods/kube-system/kube-scheduler-pause-249140\" value_size:4706 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-scheduler-pause-249140\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-10-27T19:36:47.724094Z","caller":"traceutil/trace.go:172","msg":"trace[270291271] transaction","detail":"{read_only:false; response_revision:415; number_of_response:1; }","duration":"392.527109ms","start":"2025-10-27T19:36:47.331552Z","end":"2025-10-27T19:36:47.724080Z","steps":["trace[270291271] 'process raft request'  (duration: 260.542241ms)","trace[270291271] 'compare'  (duration: 131.47758ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-27T19:36:47.724199Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-27T19:36:47.331528Z","time spent":"392.593578ms","remote":"127.0.0.1:38094","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4768,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-scheduler-pause-249140\" mod_revision:414 > success:<request_put:<key:\"/registry/pods/kube-system/kube-scheduler-pause-249140\" value_size:4706 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-scheduler-pause-249140\" > >"}
	{"level":"warn","ts":"2025-10-27T19:36:47.947822Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"117.943632ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/pause-249140\" limit:1 ","response":"range_response_count:1 size:5582"}
	{"level":"info","ts":"2025-10-27T19:36:47.947900Z","caller":"traceutil/trace.go:172","msg":"trace[1277933230] range","detail":"{range_begin:/registry/minions/pause-249140; range_end:; response_count:1; response_revision:415; }","duration":"118.032403ms","start":"2025-10-27T19:36:47.829847Z","end":"2025-10-27T19:36:47.947880Z","steps":["trace[1277933230] 'range keys from in-memory index tree'  (duration: 117.779442ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T19:36:48.501694Z","caller":"traceutil/trace.go:172","msg":"trace[605054815] linearizableReadLoop","detail":"{readStateIndex:431; appliedIndex:431; }","duration":"121.866416ms","start":"2025-10-27T19:36:48.379798Z","end":"2025-10-27T19:36:48.501664Z","steps":["trace[605054815] 'read index received'  (duration: 121.854546ms)","trace[605054815] 'applied index is now lower than readState.Index'  (duration: 10.199µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-27T19:36:48.501828Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"122.00239ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-27T19:36:48.502253Z","caller":"traceutil/trace.go:172","msg":"trace[1292918405] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:416; }","duration":"122.439192ms","start":"2025-10-27T19:36:48.379792Z","end":"2025-10-27T19:36:48.502231Z","steps":["trace[1292918405] 'agreement among raft nodes before linearized reading'  (duration: 121.961713ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T19:36:48.502563Z","caller":"traceutil/trace.go:172","msg":"trace[97922063] transaction","detail":"{read_only:false; response_revision:417; number_of_response:1; }","duration":"148.011545ms","start":"2025-10-27T19:36:48.354509Z","end":"2025-10-27T19:36:48.502521Z","steps":["trace[97922063] 'process raft request'  (duration: 147.212732ms)"],"step_count":1}
	
	
	==> kernel <==
	 19:37:07 up  2:19,  0 user,  load average: 5.40, 2.17, 1.38
	Linux pause-249140 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [01bf760b3e7b21a98d5df158a80b1c0b879013421d7c5e47ff7903915caf96a9] <==
	I1027 19:36:42.911807       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1027 19:36:42.912109       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1027 19:36:43.003297       1 main.go:148] setting mtu 1500 for CNI 
	I1027 19:36:43.003334       1 main.go:178] kindnetd IP family: "ipv4"
	I1027 19:36:43.003361       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-27T19:36:43Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1027 19:36:43.211657       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1027 19:36:43.211708       1 controller.go:381] "Waiting for informer caches to sync"
	I1027 19:36:43.211722       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1027 19:36:43.211876       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1027 19:36:43.511827       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1027 19:36:43.511993       1 metrics.go:72] Registering metrics
	I1027 19:36:43.512165       1 controller.go:711] "Syncing nftables rules"
	I1027 19:36:53.213841       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1027 19:36:53.213938       1 main.go:301] handling current node
	I1027 19:37:03.217870       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1027 19:37:03.217920       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d69010095e3eba77e809b777fa9e622cf5c9528a2eab5611100fa5eed6283461] <==
	I1027 19:36:34.120025       1 policy_source.go:240] refreshing policies
	E1027 19:36:34.143980       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1027 19:36:34.189686       1 controller.go:667] quota admission added evaluator for: namespaces
	I1027 19:36:34.195587       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 19:36:34.196458       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1027 19:36:34.206847       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 19:36:34.207413       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1027 19:36:34.282785       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1027 19:36:34.993182       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1027 19:36:34.998286       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1027 19:36:34.998310       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1027 19:36:35.633701       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1027 19:36:35.722973       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1027 19:36:35.898971       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1027 19:36:35.907112       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1027 19:36:35.908672       1 controller.go:667] quota admission added evaluator for: endpoints
	I1027 19:36:35.915678       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1027 19:36:36.046772       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1027 19:36:36.740740       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1027 19:36:36.755099       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1027 19:36:36.765000       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1027 19:36:41.550665       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 19:36:41.569428       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1027 19:36:41.630810       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 19:36:42.058708       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [c5b2eb2a54f889f17b3db8afb09c190f60784cb1f08c460017039d3d947aeaaf] <==
	I1027 19:36:41.444563       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 19:36:41.444576       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1027 19:36:41.444586       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1027 19:36:41.444982       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1027 19:36:41.445003       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1027 19:36:41.445110       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1027 19:36:41.445238       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-249140"
	I1027 19:36:41.445302       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1027 19:36:41.445341       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1027 19:36:41.445499       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1027 19:36:41.446083       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1027 19:36:41.446264       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1027 19:36:41.446584       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1027 19:36:41.448439       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1027 19:36:41.450735       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1027 19:36:41.453063       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1027 19:36:41.455632       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1027 19:36:41.457218       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1027 19:36:41.458451       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1027 19:36:41.458565       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1027 19:36:41.463751       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1027 19:36:41.464967       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1027 19:36:41.468348       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1027 19:36:41.567257       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-249140" podCIDRs=["10.244.0.0/24"]
	I1027 19:36:56.446880       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [23504db13cbd1fd12a985de0d72ca202ac317afa3c2b2e13010bc502e000e818] <==
	I1027 19:36:42.787018       1 server_linux.go:53] "Using iptables proxy"
	I1027 19:36:42.851653       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1027 19:36:42.952324       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1027 19:36:42.952377       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1027 19:36:42.952536       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1027 19:36:42.975884       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1027 19:36:42.975945       1 server_linux.go:132] "Using iptables Proxier"
	I1027 19:36:42.984014       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1027 19:36:42.984739       1 server.go:527] "Version info" version="v1.34.1"
	I1027 19:36:42.984768       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 19:36:42.988027       1 config.go:200] "Starting service config controller"
	I1027 19:36:42.988057       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1027 19:36:42.988248       1 config.go:403] "Starting serviceCIDR config controller"
	I1027 19:36:42.988261       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1027 19:36:42.988311       1 config.go:106] "Starting endpoint slice config controller"
	I1027 19:36:42.988319       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1027 19:36:42.989012       1 config.go:309] "Starting node config controller"
	I1027 19:36:42.989035       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1027 19:36:43.089011       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1027 19:36:43.089166       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1027 19:36:43.089179       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1027 19:36:43.089174       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [e4828303cd2a90f2436dec99343b7ffa44a1eb586b82513fc0a7a01f1a37cd0d] <==
	E1027 19:36:34.046875       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1027 19:36:34.047178       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1027 19:36:34.047438       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1027 19:36:34.047455       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1027 19:36:34.047609       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1027 19:36:34.047218       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1027 19:36:34.047797       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1027 19:36:34.048172       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1027 19:36:34.048765       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1027 19:36:34.048794       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1027 19:36:34.915918       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1027 19:36:34.959961       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1027 19:36:34.994414       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1027 19:36:35.002648       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1027 19:36:35.052729       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1027 19:36:35.135439       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1027 19:36:35.163931       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1027 19:36:35.180758       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1027 19:36:35.181158       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1027 19:36:35.184734       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1027 19:36:35.192597       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1027 19:36:35.230430       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1027 19:36:35.235610       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1027 19:36:35.481930       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1027 19:36:37.443279       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 27 19:36:37 pause-249140 kubelet[1305]: E1027 19:36:37.713594    1305 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-249140\" already exists" pod="kube-system/kube-apiserver-pause-249140"
	Oct 27 19:36:37 pause-249140 kubelet[1305]: I1027 19:36:37.751875    1305 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-pause-249140" podStartSLOduration=1.751846647 podStartE2EDuration="1.751846647s" podCreationTimestamp="2025-10-27 19:36:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 19:36:37.736069803 +0000 UTC m=+1.207249199" watchObservedRunningTime="2025-10-27 19:36:37.751846647 +0000 UTC m=+1.223026043"
	Oct 27 19:36:37 pause-249140 kubelet[1305]: I1027 19:36:37.768714    1305 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-pause-249140" podStartSLOduration=1.768689063 podStartE2EDuration="1.768689063s" podCreationTimestamp="2025-10-27 19:36:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 19:36:37.752485093 +0000 UTC m=+1.223664505" watchObservedRunningTime="2025-10-27 19:36:37.768689063 +0000 UTC m=+1.239868458"
	Oct 27 19:36:37 pause-249140 kubelet[1305]: I1027 19:36:37.787865    1305 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-pause-249140" podStartSLOduration=2.787841485 podStartE2EDuration="2.787841485s" podCreationTimestamp="2025-10-27 19:36:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 19:36:37.76923326 +0000 UTC m=+1.240412656" watchObservedRunningTime="2025-10-27 19:36:37.787841485 +0000 UTC m=+1.259020882"
	Oct 27 19:36:37 pause-249140 kubelet[1305]: I1027 19:36:37.808662    1305 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-pause-249140" podStartSLOduration=1.808636937 podStartE2EDuration="1.808636937s" podCreationTimestamp="2025-10-27 19:36:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 19:36:37.788831215 +0000 UTC m=+1.260010612" watchObservedRunningTime="2025-10-27 19:36:37.808636937 +0000 UTC m=+1.279816332"
	Oct 27 19:36:41 pause-249140 kubelet[1305]: I1027 19:36:41.635229    1305 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 27 19:36:41 pause-249140 kubelet[1305]: I1027 19:36:41.636124    1305 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 27 19:36:42 pause-249140 kubelet[1305]: I1027 19:36:42.053087    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/c60385cd-2c72-418e-b71f-de147e042619-cni-cfg\") pod \"kindnet-8df8g\" (UID: \"c60385cd-2c72-418e-b71f-de147e042619\") " pod="kube-system/kindnet-8df8g"
	Oct 27 19:36:42 pause-249140 kubelet[1305]: I1027 19:36:42.053155    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbfcp\" (UniqueName: \"kubernetes.io/projected/c60385cd-2c72-418e-b71f-de147e042619-kube-api-access-vbfcp\") pod \"kindnet-8df8g\" (UID: \"c60385cd-2c72-418e-b71f-de147e042619\") " pod="kube-system/kindnet-8df8g"
	Oct 27 19:36:42 pause-249140 kubelet[1305]: I1027 19:36:42.053192    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c60385cd-2c72-418e-b71f-de147e042619-lib-modules\") pod \"kindnet-8df8g\" (UID: \"c60385cd-2c72-418e-b71f-de147e042619\") " pod="kube-system/kindnet-8df8g"
	Oct 27 19:36:42 pause-249140 kubelet[1305]: I1027 19:36:42.053220    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c60385cd-2c72-418e-b71f-de147e042619-xtables-lock\") pod \"kindnet-8df8g\" (UID: \"c60385cd-2c72-418e-b71f-de147e042619\") " pod="kube-system/kindnet-8df8g"
	Oct 27 19:36:42 pause-249140 kubelet[1305]: I1027 19:36:42.154040    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/921a42f0-4e87-4a36-8436-4716703e03d7-lib-modules\") pod \"kube-proxy-brj24\" (UID: \"921a42f0-4e87-4a36-8436-4716703e03d7\") " pod="kube-system/kube-proxy-brj24"
	Oct 27 19:36:42 pause-249140 kubelet[1305]: I1027 19:36:42.154087    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/921a42f0-4e87-4a36-8436-4716703e03d7-xtables-lock\") pod \"kube-proxy-brj24\" (UID: \"921a42f0-4e87-4a36-8436-4716703e03d7\") " pod="kube-system/kube-proxy-brj24"
	Oct 27 19:36:42 pause-249140 kubelet[1305]: I1027 19:36:42.154106    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rbc4\" (UniqueName: \"kubernetes.io/projected/921a42f0-4e87-4a36-8436-4716703e03d7-kube-api-access-2rbc4\") pod \"kube-proxy-brj24\" (UID: \"921a42f0-4e87-4a36-8436-4716703e03d7\") " pod="kube-system/kube-proxy-brj24"
	Oct 27 19:36:42 pause-249140 kubelet[1305]: I1027 19:36:42.154335    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/921a42f0-4e87-4a36-8436-4716703e03d7-kube-proxy\") pod \"kube-proxy-brj24\" (UID: \"921a42f0-4e87-4a36-8436-4716703e03d7\") " pod="kube-system/kube-proxy-brj24"
	Oct 27 19:36:43 pause-249140 kubelet[1305]: I1027 19:36:43.753349    1305 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-brj24" podStartSLOduration=2.753326209 podStartE2EDuration="2.753326209s" podCreationTimestamp="2025-10-27 19:36:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 19:36:43.753280019 +0000 UTC m=+7.224459416" watchObservedRunningTime="2025-10-27 19:36:43.753326209 +0000 UTC m=+7.224505605"
	Oct 27 19:36:43 pause-249140 kubelet[1305]: I1027 19:36:43.753747    1305 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-8df8g" podStartSLOduration=2.753726037 podStartE2EDuration="2.753726037s" podCreationTimestamp="2025-10-27 19:36:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 19:36:43.741769809 +0000 UTC m=+7.212949206" watchObservedRunningTime="2025-10-27 19:36:43.753726037 +0000 UTC m=+7.224905434"
	Oct 27 19:36:53 pause-249140 kubelet[1305]: I1027 19:36:53.419738    1305 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 27 19:36:53 pause-249140 kubelet[1305]: I1027 19:36:53.535387    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0cb0e471-ebc9-46c0-b1fa-01239c268b53-config-volume\") pod \"coredns-66bc5c9577-zw67w\" (UID: \"0cb0e471-ebc9-46c0-b1fa-01239c268b53\") " pod="kube-system/coredns-66bc5c9577-zw67w"
	Oct 27 19:36:53 pause-249140 kubelet[1305]: I1027 19:36:53.535446    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgj48\" (UniqueName: \"kubernetes.io/projected/0cb0e471-ebc9-46c0-b1fa-01239c268b53-kube-api-access-dgj48\") pod \"coredns-66bc5c9577-zw67w\" (UID: \"0cb0e471-ebc9-46c0-b1fa-01239c268b53\") " pod="kube-system/coredns-66bc5c9577-zw67w"
	Oct 27 19:36:54 pause-249140 kubelet[1305]: I1027 19:36:54.765851    1305 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-zw67w" podStartSLOduration=12.765830866 podStartE2EDuration="12.765830866s" podCreationTimestamp="2025-10-27 19:36:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 19:36:54.765498309 +0000 UTC m=+18.236677708" watchObservedRunningTime="2025-10-27 19:36:54.765830866 +0000 UTC m=+18.237010273"
	Oct 27 19:37:04 pause-249140 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 27 19:37:04 pause-249140 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 27 19:37:04 pause-249140 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 27 19:37:04 pause-249140 systemd[1]: kubelet.service: Consumed 1.389s CPU time.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-249140 -n pause-249140
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-249140 -n pause-249140: exit status 2 (440.202764ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-249140 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-249140
helpers_test.go:243: (dbg) docker inspect pause-249140:

-- stdout --
	[
	    {
	        "Id": "6efc7283d4ecb5ee4fad19e014e8a76a8c44fbbe811100a649401833144cfab8",
	        "Created": "2025-10-27T19:36:14.956368426Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 527537,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-27T19:36:15.01895205Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/6efc7283d4ecb5ee4fad19e014e8a76a8c44fbbe811100a649401833144cfab8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6efc7283d4ecb5ee4fad19e014e8a76a8c44fbbe811100a649401833144cfab8/hostname",
	        "HostsPath": "/var/lib/docker/containers/6efc7283d4ecb5ee4fad19e014e8a76a8c44fbbe811100a649401833144cfab8/hosts",
	        "LogPath": "/var/lib/docker/containers/6efc7283d4ecb5ee4fad19e014e8a76a8c44fbbe811100a649401833144cfab8/6efc7283d4ecb5ee4fad19e014e8a76a8c44fbbe811100a649401833144cfab8-json.log",
	        "Name": "/pause-249140",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "pause-249140:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-249140",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6efc7283d4ecb5ee4fad19e014e8a76a8c44fbbe811100a649401833144cfab8",
	                "LowerDir": "/var/lib/docker/overlay2/49bf847aabcc6b5107a816bebafcb4ca855291acf255a2bb30c0ce7ed8e23ddb-init/diff:/var/lib/docker/overlay2/71b61ec94610a35f2d924dec358052d4c154c36b3fe219802f60246ca2dc7f45/diff",
	                "MergedDir": "/var/lib/docker/overlay2/49bf847aabcc6b5107a816bebafcb4ca855291acf255a2bb30c0ce7ed8e23ddb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/49bf847aabcc6b5107a816bebafcb4ca855291acf255a2bb30c0ce7ed8e23ddb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/49bf847aabcc6b5107a816bebafcb4ca855291acf255a2bb30c0ce7ed8e23ddb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "pause-249140",
	                "Source": "/var/lib/docker/volumes/pause-249140/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-249140",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-249140",
	                "name.minikube.sigs.k8s.io": "pause-249140",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "136d01220ee58f0bd343a3d7364eafd259f5bdcfcd6c77ef3dcfa9d2029195f7",
	            "SandboxKey": "/var/run/docker/netns/136d01220ee5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33360"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33361"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33364"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33362"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33363"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-249140": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "46:d8:28:05:bc:3e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "eded7341b4a41d9b7051989e017c507ac38ddb8f71aeab44145a72dd52b221a7",
	                    "EndpointID": "c1cee8e61bde4e11e9f4a417f4ff324b3bf5a909d547eebb458047921b8ae57d",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-249140",
	                        "6efc7283d4ec"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-249140 -n pause-249140
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-249140 -n pause-249140: exit status 2 (416.521554ms)

-- stdout --
	Running

                                                
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-249140 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-249140 logs -n 25: (1.274238187s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                           ARGS                                                                                                            │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p scheduled-stop-706609 --schedule 15s                                                                                                                                                                                   │ scheduled-stop-706609       │ jenkins │ v1.37.0 │ 27 Oct 25 19:34 UTC │                     │
	│ stop    │ -p scheduled-stop-706609 --schedule 15s                                                                                                                                                                                   │ scheduled-stop-706609       │ jenkins │ v1.37.0 │ 27 Oct 25 19:34 UTC │                     │
	│ stop    │ -p scheduled-stop-706609 --cancel-scheduled                                                                                                                                                                               │ scheduled-stop-706609       │ jenkins │ v1.37.0 │ 27 Oct 25 19:34 UTC │ 27 Oct 25 19:34 UTC │
	│ stop    │ -p scheduled-stop-706609 --schedule 15s                                                                                                                                                                                   │ scheduled-stop-706609       │ jenkins │ v1.37.0 │ 27 Oct 25 19:35 UTC │                     │
	│ stop    │ -p scheduled-stop-706609 --schedule 15s                                                                                                                                                                                   │ scheduled-stop-706609       │ jenkins │ v1.37.0 │ 27 Oct 25 19:35 UTC │                     │
	│ stop    │ -p scheduled-stop-706609 --schedule 15s                                                                                                                                                                                   │ scheduled-stop-706609       │ jenkins │ v1.37.0 │ 27 Oct 25 19:35 UTC │ 27 Oct 25 19:35 UTC │
	│ delete  │ -p scheduled-stop-706609                                                                                                                                                                                                  │ scheduled-stop-706609       │ jenkins │ v1.37.0 │ 27 Oct 25 19:35 UTC │ 27 Oct 25 19:35 UTC │
	│ start   │ -p insufficient-storage-321540 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio                                                                                                          │ insufficient-storage-321540 │ jenkins │ v1.37.0 │ 27 Oct 25 19:35 UTC │                     │
	│ delete  │ -p insufficient-storage-321540                                                                                                                                                                                            │ insufficient-storage-321540 │ jenkins │ v1.37.0 │ 27 Oct 25 19:36 UTC │ 27 Oct 25 19:36 UTC │
	│ start   │ -p pause-249140 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                                                                                                 │ pause-249140                │ jenkins │ v1.37.0 │ 27 Oct 25 19:36 UTC │ 27 Oct 25 19:36 UTC │
	│ start   │ -p force-systemd-env-282715 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                │ force-systemd-env-282715    │ jenkins │ v1.37.0 │ 27 Oct 25 19:36 UTC │ 27 Oct 25 19:36 UTC │
	│ start   │ -p offline-crio-221701 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio                                                                                                         │ offline-crio-221701         │ jenkins │ v1.37.0 │ 27 Oct 25 19:36 UTC │ 27 Oct 25 19:36 UTC │
	│ start   │ -p force-systemd-flag-422872 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                               │ force-systemd-flag-422872   │ jenkins │ v1.37.0 │ 27 Oct 25 19:36 UTC │ 27 Oct 25 19:36 UTC │
	│ delete  │ -p force-systemd-env-282715                                                                                                                                                                                               │ force-systemd-env-282715    │ jenkins │ v1.37.0 │ 27 Oct 25 19:36 UTC │ 27 Oct 25 19:36 UTC │
	│ start   │ -p cert-expiration-368442 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                    │ cert-expiration-368442      │ jenkins │ v1.37.0 │ 27 Oct 25 19:36 UTC │ 27 Oct 25 19:37 UTC │
	│ ssh     │ force-systemd-flag-422872 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                                                                                                      │ force-systemd-flag-422872   │ jenkins │ v1.37.0 │ 27 Oct 25 19:36 UTC │ 27 Oct 25 19:36 UTC │
	│ delete  │ -p force-systemd-flag-422872                                                                                                                                                                                              │ force-systemd-flag-422872   │ jenkins │ v1.37.0 │ 27 Oct 25 19:36 UTC │ 27 Oct 25 19:36 UTC │
	│ start   │ -p cert-options-638768 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio │ cert-options-638768         │ jenkins │ v1.37.0 │ 27 Oct 25 19:36 UTC │ 27 Oct 25 19:37 UTC │
	│ start   │ -p pause-249140 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                          │ pause-249140                │ jenkins │ v1.37.0 │ 27 Oct 25 19:36 UTC │ 27 Oct 25 19:37 UTC │
	│ delete  │ -p offline-crio-221701                                                                                                                                                                                                    │ offline-crio-221701         │ jenkins │ v1.37.0 │ 27 Oct 25 19:36 UTC │ 27 Oct 25 19:36 UTC │
	│ start   │ -p missing-upgrade-345161 --memory=3072 --driver=docker  --container-runtime=crio                                                                                                                                         │ missing-upgrade-345161      │ jenkins │ v1.32.0 │ 27 Oct 25 19:37 UTC │                     │
	│ pause   │ -p pause-249140 --alsologtostderr -v=5                                                                                                                                                                                    │ pause-249140                │ jenkins │ v1.37.0 │ 27 Oct 25 19:37 UTC │                     │
	│ ssh     │ cert-options-638768 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                               │ cert-options-638768         │ jenkins │ v1.37.0 │ 27 Oct 25 19:37 UTC │ 27 Oct 25 19:37 UTC │
	│ ssh     │ -p cert-options-638768 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                             │ cert-options-638768         │ jenkins │ v1.37.0 │ 27 Oct 25 19:37 UTC │ 27 Oct 25 19:37 UTC │
	│ delete  │ -p cert-options-638768                                                                                                                                                                                                    │ cert-options-638768         │ jenkins │ v1.37.0 │ 27 Oct 25 19:37 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 19:37:00
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 19:37:00.474478  541957 out.go:296] Setting OutFile to fd 1 ...
	I1027 19:37:00.474657  541957 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1027 19:37:00.474663  541957 out.go:309] Setting ErrFile to fd 2...
	I1027 19:37:00.474669  541957 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1027 19:37:00.474931  541957 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-352833/.minikube/bin
	I1027 19:37:00.475516  541957 out.go:303] Setting JSON to false
	I1027 19:37:00.476977  541957 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":8369,"bootTime":1761585451,"procs":309,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1027 19:37:00.477044  541957 start.go:138] virtualization: kvm guest
	I1027 19:37:00.479463  541957 out.go:177] * [missing-upgrade-345161] minikube v1.32.0 on Ubuntu 22.04 (kvm/amd64)
	I1027 19:37:00.481799  541957 out.go:177]   - MINIKUBE_LOCATION=21801
	I1027 19:37:00.481839  541957 notify.go:220] Checking for updates...
	I1027 19:37:00.486707  541957 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 19:37:00.487885  541957 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/21801-352833/kubeconfig
	I1027 19:37:00.491854  541957 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21801-352833/.minikube
	I1027 19:37:00.493380  541957 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1027 19:37:00.494861  541957 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 19:37:00.496974  541957 config.go:182] Loaded profile config "cert-expiration-368442": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:37:00.497129  541957 config.go:182] Loaded profile config "cert-options-638768": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:37:00.497335  541957 config.go:182] Loaded profile config "pause-249140": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:37:00.497452  541957 driver.go:378] Setting default libvirt URI to qemu:///system
	I1027 19:37:00.528832  541957 docker.go:122] docker version: linux-28.5.1:Docker Engine - Community
	I1027 19:37:00.528961  541957 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 19:37:00.569061  541957 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-352833/.minikube/last_update_check: {Name:mke3f866af514ce2abb811772b393ca67a8a2fe8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:37:00.573446  541957 out.go:177] * minikube 1.37.0 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.37.0
	I1027 19:37:00.575723  541957 out.go:177] * To disable this notice, run: 'minikube config set WantUpdateNotification false'
	
	I1027 19:37:00.601823  541957 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:78 SystemTime:2025-10-27 19:37:00.588989899 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 19:37:00.601971  541957 docker.go:295] overlay module found
	I1027 19:37:00.603850  541957 out.go:177] * Using the docker driver based on user configuration
	I1027 19:36:57.147771  540831 out.go:252] * Updating the running docker "pause-249140" container ...
	I1027 19:36:57.147836  540831 machine.go:93] provisionDockerMachine start ...
	I1027 19:36:57.147929  540831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-249140
	I1027 19:36:57.173511  540831 main.go:141] libmachine: Using SSH client type: native
	I1027 19:36:57.173931  540831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33360 <nil> <nil>}
	I1027 19:36:57.173951  540831 main.go:141] libmachine: About to run SSH command:
	hostname
	I1027 19:36:57.331636  540831 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-249140
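The docker container inspect calls above use a Go template to resolve the host port that Docker published for the container's 22/tcp, which is how the SSH dialer ends up at 127.0.0.1:33360. A standalone equivalent of the same lookup (container name taken from this log):

    # Print the host port mapped to 22/tcp in the pause-249140 container.
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' pause-249140
    # prints: 33360 (the port the native SSH client dials above)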
	
	I1027 19:36:57.332885  540831 ubuntu.go:182] provisioning hostname "pause-249140"
	I1027 19:36:57.332990  540831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-249140
	I1027 19:36:57.358051  540831 main.go:141] libmachine: Using SSH client type: native
	I1027 19:36:57.358427  540831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33360 <nil> <nil>}
	I1027 19:36:57.358453  540831 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-249140 && echo "pause-249140" | sudo tee /etc/hostname
	I1027 19:36:57.543331  540831 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-249140
	
	I1027 19:36:57.543460  540831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-249140
	I1027 19:36:57.566417  540831 main.go:141] libmachine: Using SSH client type: native
	I1027 19:36:57.566776  540831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33360 <nil> <nil>}
	I1027 19:36:57.566801  540831 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-249140' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-249140/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-249140' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1027 19:36:57.718185  540831 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1027 19:36:57.718226  540831 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21801-352833/.minikube CaCertPath:/home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21801-352833/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21801-352833/.minikube}
	I1027 19:36:57.718254  540831 ubuntu.go:190] setting up certificates
	I1027 19:36:57.718271  540831 provision.go:84] configureAuth start
	I1027 19:36:57.718346  540831 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-249140
	I1027 19:36:57.745346  540831 provision.go:143] copyHostCerts
	I1027 19:36:57.745432  540831 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-352833/.minikube/ca.pem, removing ...
	I1027 19:36:57.745453  540831 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-352833/.minikube/ca.pem
	I1027 19:36:57.745539  540831 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21801-352833/.minikube/ca.pem (1078 bytes)
	I1027 19:36:57.745804  540831 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-352833/.minikube/cert.pem, removing ...
	I1027 19:36:57.745821  540831 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-352833/.minikube/cert.pem
	I1027 19:36:57.745862  540831 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21801-352833/.minikube/cert.pem (1123 bytes)
	I1027 19:36:57.745950  540831 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-352833/.minikube/key.pem, removing ...
	I1027 19:36:57.745972  540831 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-352833/.minikube/key.pem
	I1027 19:36:57.746009  540831 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21801-352833/.minikube/key.pem (1679 bytes)
	I1027 19:36:57.746079  540831 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21801-352833/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca-key.pem org=jenkins.pause-249140 san=[127.0.0.1 192.168.85.2 localhost minikube pause-249140]
	I1027 19:36:57.840076  540831 provision.go:177] copyRemoteCerts
	I1027 19:36:57.840160  540831 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1027 19:36:57.840206  540831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-249140
	I1027 19:36:57.866752  540831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33360 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/pause-249140/id_rsa Username:docker}
	I1027 19:36:57.976345  540831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1027 19:36:58.005503  540831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1027 19:36:58.028851  540831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1027 19:36:58.052167  540831 provision.go:87] duration metric: took 333.878373ms to configureAuth
	I1027 19:36:58.052201  540831 ubuntu.go:206] setting minikube options for container-runtime
	I1027 19:36:58.052469  540831 config.go:182] Loaded profile config "pause-249140": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:36:58.052598  540831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-249140
	I1027 19:36:58.078386  540831 main.go:141] libmachine: Using SSH client type: native
	I1027 19:36:58.078755  540831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33360 <nil> <nil>}
	I1027 19:36:58.078780  540831 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1027 19:36:58.468658  540831 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1027 19:36:58.468688  540831 machine.go:96] duration metric: took 1.320841118s to provisionDockerMachine
	I1027 19:36:58.468703  540831 start.go:293] postStartSetup for "pause-249140" (driver="docker")
	I1027 19:36:58.468717  540831 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1027 19:36:58.468783  540831 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1027 19:36:58.468843  540831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-249140
	I1027 19:36:58.498376  540831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33360 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/pause-249140/id_rsa Username:docker}
	I1027 19:36:58.622114  540831 ssh_runner.go:195] Run: cat /etc/os-release
	I1027 19:36:58.628150  540831 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1027 19:36:58.628186  540831 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1027 19:36:58.628201  540831 filesync.go:126] Scanning /home/jenkins/minikube-integration/21801-352833/.minikube/addons for local assets ...
	I1027 19:36:58.628266  540831 filesync.go:126] Scanning /home/jenkins/minikube-integration/21801-352833/.minikube/files for local assets ...
	I1027 19:36:58.628417  540831 filesync.go:149] local asset: /home/jenkins/minikube-integration/21801-352833/.minikube/files/etc/ssl/certs/3564152.pem -> 3564152.pem in /etc/ssl/certs
	I1027 19:36:58.628541  540831 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1027 19:36:58.639507  540831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/files/etc/ssl/certs/3564152.pem --> /etc/ssl/certs/3564152.pem (1708 bytes)
	I1027 19:36:58.664721  540831 start.go:296] duration metric: took 195.998657ms for postStartSetup
	I1027 19:36:58.664811  540831 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 19:36:58.664866  540831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-249140
	I1027 19:36:58.689912  540831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33360 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/pause-249140/id_rsa Username:docker}
	I1027 19:36:58.804043  540831 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
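Both df probes read a single column from the second (data) row of df's output: with -h the fifth column is the use percentage, and with -BG the fourth column is the space available in 1 GiB blocks. A sketch with illustrative values:

    df -h /var  | awk 'NR==2{print $5}'   # e.g. 23%  (Use%)
    df -BG /var | awk 'NR==2{print $4}'   # e.g. 250G (Avail)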
	I1027 19:36:58.809622  540831 fix.go:56] duration metric: took 1.688757512s for fixHost
	I1027 19:36:58.809659  540831 start.go:83] releasing machines lock for "pause-249140", held for 1.688822938s
	I1027 19:36:58.809767  540831 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-249140
	I1027 19:36:58.838499  540831 ssh_runner.go:195] Run: cat /version.json
	I1027 19:36:58.838579  540831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-249140
	I1027 19:36:58.838577  540831 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1027 19:36:58.838669  540831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-249140
	I1027 19:36:58.876710  540831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33360 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/pause-249140/id_rsa Username:docker}
	I1027 19:36:58.881500  540831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33360 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/pause-249140/id_rsa Username:docker}
	I1027 19:36:59.023681  540831 ssh_runner.go:195] Run: systemctl --version
	I1027 19:36:59.112886  540831 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1027 19:36:59.171573  540831 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1027 19:36:59.179120  540831 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1027 19:36:59.179523  540831 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1027 19:36:59.191305  540831 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1027 19:36:59.191346  540831 start.go:495] detecting cgroup driver to use...
	I1027 19:36:59.191385  540831 detect.go:190] detected "systemd" cgroup driver on host os
	I1027 19:36:59.191444  540831 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1027 19:36:59.212766  540831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1027 19:36:59.234549  540831 docker.go:218] disabling cri-docker service (if available) ...
	I1027 19:36:59.234620  540831 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1027 19:36:59.258566  540831 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1027 19:36:59.276038  540831 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1027 19:36:59.441754  540831 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1027 19:36:59.569403  540831 docker.go:234] disabling docker service ...
	I1027 19:36:59.569488  540831 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1027 19:36:59.586591  540831 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1027 19:36:59.602218  540831 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1027 19:36:59.732223  540831 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1027 19:36:59.850894  540831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1027 19:36:59.867171  540831 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1027 19:36:59.884292  540831 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1027 19:36:59.884365  540831 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:36:59.896091  540831 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1027 19:36:59.896182  540831 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:36:59.907800  540831 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:36:59.919130  540831 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:36:59.929990  540831 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1027 19:36:59.941180  540831 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:36:59.953872  540831 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:36:59.963765  540831 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
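Taken together, the sed edits above leave the touched keys in /etc/crio/crio.conf.d/02-crio.conf looking roughly like this (reconstructed from the commands themselves, not captured from the node; the file's TOML section headers are omitted):

    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]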
	I1027 19:36:59.974693  540831 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1027 19:36:59.984502  540831 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1027 19:36:59.993961  540831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 19:37:00.159904  540831 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1027 19:37:00.550789  540831 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1027 19:37:00.550895  540831 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1027 19:37:00.557111  540831 start.go:563] Will wait 60s for crictl version
	I1027 19:37:00.557368  540831 ssh_runner.go:195] Run: which crictl
	I1027 19:37:00.562638  540831 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1027 19:37:00.598455  540831 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1027 19:37:00.598558  540831 ssh_runner.go:195] Run: crio --version
	I1027 19:37:00.639740  540831 ssh_runner.go:195] Run: crio --version
	I1027 19:37:00.684118  540831 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1027 19:37:00.605643  541957 start.go:298] selected driver: docker
	I1027 19:37:00.605657  541957 start.go:902] validating driver "docker" against <nil>
	I1027 19:37:00.605673  541957 start.go:913] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 19:37:00.606602  541957 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 19:37:00.683677  541957 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:78 SystemTime:2025-10-27 19:37:00.671550262 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 19:37:00.683894  541957 start_flags.go:309] no existing cluster config was found, will generate one from the flags 
	I1027 19:37:00.684113  541957 start_flags.go:913] Wait components to verify : map[apiserver:true system_pods:true]
	I1027 19:37:00.686214  541957 out.go:177] * Using Docker driver with root privileges
	I1027 19:37:00.687500  541957 cni.go:84] Creating CNI manager for ""
	I1027 19:37:00.687518  541957 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 19:37:00.687532  541957 start_flags.go:318] Found "CNI" CNI - setting NetworkPlugin=cni
	I1027 19:37:00.687548  541957 start_flags.go:323] config:
	{Name:missing-upgrade-345161 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:missing-upgrade-345161 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1027 19:37:00.689153  541957 out.go:177] * Starting control plane node missing-upgrade-345161 in cluster missing-upgrade-345161
	I1027 19:37:00.690619  541957 cache.go:121] Beginning downloading kic base image for docker with crio
	I1027 19:37:00.692111  541957 out.go:177] * Pulling base image ...
	I1027 19:37:00.693410  541957 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1027 19:37:00.693506  541957 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local docker daemon
	I1027 19:37:00.714531  541957 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 to local cache
	I1027 19:37:00.714750  541957 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 in local cache directory
	I1027 19:37:00.714783  541957 image.go:118] Writing gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 to local cache
	I1027 19:37:00.724647  541957 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.3/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1027 19:37:00.724675  541957 cache.go:56] Caching tarball of preloaded images
	I1027 19:37:00.724845  541957 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1027 19:37:00.726766  541957 out.go:177] * Downloading Kubernetes v1.28.3 preload ...
	I1027 19:37:00.685552  540831 cli_runner.go:164] Run: docker network inspect pause-249140 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 19:37:00.707733  540831 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
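The network inspect just above renders a small JSON document from the Docker network's IPAM data via a Go template; the gateway it yields (192.168.85.1) is the same address the host.minikube.internal grep checks for. On this cluster the output would look roughly like the following (driver, subnet, and MTU are inferred for illustration; the trailing comma in ContainerIPs is an artifact of the template's range loop):

    {"Name": "pause-249140","Driver": "bridge","Subnet": "192.168.85.0/24","Gateway": "192.168.85.1","MTU": 0, "ContainerIPs": ["192.168.85.2/24",]}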
	I1027 19:37:00.713564  540831 kubeadm.go:883] updating cluster {Name:pause-249140 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-249140 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1027 19:37:00.713756  540831 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 19:37:00.713827  540831 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 19:37:00.756808  540831 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 19:37:00.756830  540831 crio.go:433] Images already preloaded, skipping extraction
	I1027 19:37:00.756887  540831 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 19:37:00.788381  540831 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 19:37:00.788408  540831 cache_images.go:85] Images are preloaded, skipping loading
	I1027 19:37:00.788419  540831 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1027 19:37:00.788559  540831 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-249140 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-249140 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1027 19:37:00.788645  540831 ssh_runner.go:195] Run: crio config
	I1027 19:37:00.848546  540831 cni.go:84] Creating CNI manager for ""
	I1027 19:37:00.848571  540831 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 19:37:00.848590  540831 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1027 19:37:00.848614  540831 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-249140 NodeName:pause-249140 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1027 19:37:00.848774  540831 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-249140"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1027 19:37:00.848856  540831 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1027 19:37:00.860084  540831 binaries.go:44] Found k8s binaries, skipping transfer
	I1027 19:37:00.860178  540831 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1027 19:37:00.872191  540831 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1027 19:37:00.891234  540831 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1027 19:37:00.911565  540831 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
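The config rendered above is what was just written to /var/tmp/minikube/kubeadm.yaml.new (2208 bytes). As a hypothetical manual sanity check, not something this test performs, kubeadm can validate such a file itself, provided the binary matches the kubernetesVersion inside it:

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new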
	I1027 19:37:00.933714  540831 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1027 19:37:00.940653  540831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 19:37:01.130373  540831 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 19:37:01.147750  540831 certs.go:69] Setting up /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/pause-249140 for IP: 192.168.85.2
	I1027 19:37:01.147780  540831 certs.go:195] generating shared ca certs ...
	I1027 19:37:01.147848  540831 certs.go:227] acquiring lock for ca certs: {Name:mk4bdbca32068f6f817fc35fdc496e961dc3e0d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:37:01.148021  540831 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21801-352833/.minikube/ca.key
	I1027 19:37:01.148098  540831 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21801-352833/.minikube/proxy-client-ca.key
	I1027 19:37:01.148120  540831 certs.go:257] generating profile certs ...
	I1027 19:37:01.148287  540831 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/pause-249140/client.key
	I1027 19:37:01.148437  540831 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/pause-249140/apiserver.key.379a31ff
	I1027 19:37:01.148505  540831 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/pause-249140/proxy-client.key
	I1027 19:37:01.148668  540831 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/356415.pem (1338 bytes)
	W1027 19:37:01.148716  540831 certs.go:480] ignoring /home/jenkins/minikube-integration/21801-352833/.minikube/certs/356415_empty.pem, impossibly tiny 0 bytes
	I1027 19:37:01.148731  540831 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca-key.pem (1675 bytes)
	I1027 19:37:01.148768  540831 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca.pem (1078 bytes)
	I1027 19:37:01.148799  540831 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/cert.pem (1123 bytes)
	I1027 19:37:01.148837  540831 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/key.pem (1679 bytes)
	I1027 19:37:01.148899  540831 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-352833/.minikube/files/etc/ssl/certs/3564152.pem (1708 bytes)
	I1027 19:37:01.149772  540831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1027 19:37:01.174155  540831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1027 19:37:01.198387  540831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1027 19:37:01.221001  540831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1027 19:37:01.249425  540831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/pause-249140/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1027 19:37:01.280257  540831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/pause-249140/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1027 19:37:01.315718  540831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/pause-249140/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1027 19:37:01.346274  540831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/pause-249140/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1027 19:37:01.374052  540831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/certs/356415.pem --> /usr/share/ca-certificates/356415.pem (1338 bytes)
	I1027 19:37:01.407365  540831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/files/etc/ssl/certs/3564152.pem --> /usr/share/ca-certificates/3564152.pem (1708 bytes)
	I1027 19:37:01.439147  540831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1027 19:37:01.473618  540831 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1027 19:37:01.498806  540831 ssh_runner.go:195] Run: openssl version
	I1027 19:37:01.508549  540831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1027 19:37:01.522213  540831 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1027 19:37:01.529097  540831 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 27 18:56 /usr/share/ca-certificates/minikubeCA.pem
	I1027 19:37:01.529236  540831 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1027 19:37:01.588229  540831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1027 19:37:01.600078  540831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/356415.pem && ln -fs /usr/share/ca-certificates/356415.pem /etc/ssl/certs/356415.pem"
	I1027 19:37:01.620497  540831 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/356415.pem
	I1027 19:37:01.626951  540831 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 27 19:02 /usr/share/ca-certificates/356415.pem
	I1027 19:37:01.627018  540831 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/356415.pem
	I1027 19:37:01.687327  540831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/356415.pem /etc/ssl/certs/51391683.0"
	I1027 19:37:01.699539  540831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3564152.pem && ln -fs /usr/share/ca-certificates/3564152.pem /etc/ssl/certs/3564152.pem"
	I1027 19:37:01.712941  540831 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3564152.pem
	I1027 19:37:01.722987  540831 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 27 19:02 /usr/share/ca-certificates/3564152.pem
	I1027 19:37:01.723056  540831 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3564152.pem
	I1027 19:37:01.784522  540831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3564152.pem /etc/ssl/certs/3ec20f2e.0"
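The test/ln pairs above implement OpenSSL's hashed-directory lookup: verification against a CApath directory such as /etc/ssl/certs finds a CA by a symlink named <subject-hash>.0. The hashes in the link names (b5213941, 51391683, 3ec20f2e) are exactly what the preceding openssl x509 -hash calls print, e.g.:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints: b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0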
	I1027 19:37:01.799300  540831 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1027 19:37:01.809749  540831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1027 19:37:01.876415  540831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1027 19:37:01.942219  540831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1027 19:37:02.011634  540831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1027 19:37:02.084540  540831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1027 19:37:02.149946  540831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
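Each -checkend 86400 run asks whether the certificate will still be valid 86400 seconds (24 hours) from now; openssl exits 0 if it will and 1 if it expires inside that window, which is what decides whether the certificates need regenerating. For example:

    openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
      && echo "valid for at least 24h" || echo "expires within 24h"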
	I1027 19:37:02.213860  540831 kubeadm.go:400] StartCluster: {Name:pause-249140 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-249140 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 19:37:02.214030  540831 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 19:37:02.214096  540831 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 19:37:02.265023  540831 cri.go:89] found id: "a5d43c06cdefd0fd790cb0418ec7193d78de34b9aa196d7434e89fa6e058a9e2"
	I1027 19:37:02.265053  540831 cri.go:89] found id: "23504db13cbd1fd12a985de0d72ca202ac317afa3c2b2e13010bc502e000e818"
	I1027 19:37:02.265059  540831 cri.go:89] found id: "01bf760b3e7b21a98d5df158a80b1c0b879013421d7c5e47ff7903915caf96a9"
	I1027 19:37:02.265063  540831 cri.go:89] found id: "d69010095e3eba77e809b777fa9e622cf5c9528a2eab5611100fa5eed6283461"
	I1027 19:37:02.265066  540831 cri.go:89] found id: "c5b2eb2a54f889f17b3db8afb09c190f60784cb1f08c460017039d3d947aeaaf"
	I1027 19:37:02.265070  540831 cri.go:89] found id: "01712f1073762c52020031153783123eaffdca1ca62a7f9798f8eee04cb57fd9"
	I1027 19:37:02.265074  540831 cri.go:89] found id: "e4828303cd2a90f2436dec99343b7ffa44a1eb586b82513fc0a7a01f1a37cd0d"
	I1027 19:37:02.265078  540831 cri.go:89] found id: ""
	I1027 19:37:02.265156  540831 ssh_runner.go:195] Run: sudo runc list -f json
	W1027 19:37:02.283542  540831 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T19:37:02Z" level=error msg="open /run/runc: no such file or directory"
	I1027 19:37:02.283633  540831 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1027 19:37:02.296916  540831 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1027 19:37:02.296939  540831 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1027 19:37:02.296989  540831 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1027 19:37:02.307819  540831 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1027 19:37:02.308602  540831 kubeconfig.go:125] found "pause-249140" server: "https://192.168.85.2:8443"
	I1027 19:37:02.309600  540831 kapi.go:59] client config for pause-249140: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21801-352833/.minikube/profiles/pause-249140/client.crt", KeyFile:"/home/jenkins/minikube-integration/21801-352833/.minikube/profiles/pause-249140/client.key", CAFile:"/home/jenkins/minikube-integration/21801-352833/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c4e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1027 19:37:02.310322  540831 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1027 19:37:02.310346  540831 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1027 19:37:02.310354  540831 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1027 19:37:02.310360  540831 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1027 19:37:02.310380  540831 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1027 19:37:02.310968  540831 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1027 19:37:02.324729  540831 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1027 19:37:02.324774  540831 kubeadm.go:601] duration metric: took 27.828375ms to restartPrimaryControlPlane
	I1027 19:37:02.324789  540831 kubeadm.go:402] duration metric: took 110.94009ms to StartCluster
	I1027 19:37:02.324811  540831 settings.go:142] acquiring lock: {Name:mk8304c2106bf310642e0949fc0266ccb50f2f95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:37:02.324891  540831 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21801-352833/kubeconfig
	I1027 19:37:02.334349  540831 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-352833/kubeconfig: {Name:mk24cbe512a6907c874f3fb7a85390a8f9fd2b31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:37:02.334719  540831 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 19:37:02.334962  540831 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1027 19:37:02.335306  540831 config.go:182] Loaded profile config "pause-249140": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:37:02.336699  540831 out.go:179] * Verifying Kubernetes components...
	I1027 19:37:02.336770  540831 out.go:179] * Enabled addons: 
	I1027 19:37:02.791769  534866 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1027 19:37:02.791831  534866 kubeadm.go:318] [preflight] Running pre-flight checks
	I1027 19:37:02.792005  534866 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1027 19:37:02.792100  534866 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1042-gcp
	I1027 19:37:02.792240  534866 kubeadm.go:318] OS: Linux
	I1027 19:37:02.792324  534866 kubeadm.go:318] CGROUPS_CPU: enabled
	I1027 19:37:02.792418  534866 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1027 19:37:02.792496  534866 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1027 19:37:02.792561  534866 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1027 19:37:02.792633  534866 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1027 19:37:02.792719  534866 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1027 19:37:02.792791  534866 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1027 19:37:02.792853  534866 kubeadm.go:318] CGROUPS_IO: enabled
	I1027 19:37:02.793055  534866 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1027 19:37:02.793215  534866 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1027 19:37:02.793335  534866 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1027 19:37:02.793412  534866 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1027 19:37:02.795372  534866 out.go:252]   - Generating certificates and keys ...
	I1027 19:37:02.795506  534866 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1027 19:37:02.795589  534866 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1027 19:37:02.795703  534866 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1027 19:37:02.795782  534866 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1027 19:37:02.795877  534866 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1027 19:37:02.795948  534866 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1027 19:37:02.796033  534866 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1027 19:37:02.796220  534866 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [cert-expiration-368442 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1027 19:37:02.796287  534866 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1027 19:37:02.796421  534866 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [cert-expiration-368442 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1027 19:37:02.796503  534866 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1027 19:37:02.796564  534866 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1027 19:37:02.796621  534866 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1027 19:37:02.796686  534866 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1027 19:37:02.796741  534866 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1027 19:37:02.796811  534866 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1027 19:37:02.796880  534866 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1027 19:37:02.796959  534866 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1027 19:37:02.797000  534866 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1027 19:37:02.797061  534866 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1027 19:37:02.797126  534866 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1027 19:37:02.798837  534866 out.go:252]   - Booting up control plane ...
	I1027 19:37:02.798959  534866 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1027 19:37:02.799063  534866 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1027 19:37:02.799177  534866 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1027 19:37:02.799310  534866 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1027 19:37:02.799431  534866 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1027 19:37:02.799671  534866 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1027 19:37:02.799767  534866 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1027 19:37:02.799833  534866 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1027 19:37:02.800003  534866 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1027 19:37:02.800192  534866 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1027 19:37:02.800282  534866 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.002722151s
	I1027 19:37:02.800454  534866 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1027 19:37:02.800614  534866 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1027 19:37:02.800746  534866 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1027 19:37:02.800848  534866 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1027 19:37:02.800933  534866 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.771751748s
	I1027 19:37:02.801038  534866 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 3.622287173s
	I1027 19:37:02.801123  534866 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 5.004245349s
	I1027 19:37:02.801331  534866 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1027 19:37:02.801585  534866 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1027 19:37:02.801639  534866 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1027 19:37:02.801866  534866 kubeadm.go:318] [mark-control-plane] Marking the node cert-expiration-368442 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1027 19:37:02.801912  534866 kubeadm.go:318] [bootstrap-token] Using token: csf2z3.qsf9sz9ro4wba57t
	I1027 19:37:02.803734  534866 out.go:252]   - Configuring RBAC rules ...
	I1027 19:37:02.803964  534866 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1027 19:37:02.804086  534866 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1027 19:37:02.804312  534866 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1027 19:37:02.804495  534866 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1027 19:37:02.804693  534866 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1027 19:37:02.804764  534866 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1027 19:37:02.804914  534866 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1027 19:37:02.804958  534866 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1027 19:37:02.805011  534866 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1027 19:37:02.805015  534866 kubeadm.go:318] 
	I1027 19:37:02.805107  534866 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1027 19:37:02.805161  534866 kubeadm.go:318] 
	I1027 19:37:02.805308  534866 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1027 19:37:02.805313  534866 kubeadm.go:318] 
	I1027 19:37:02.805365  534866 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1027 19:37:02.805466  534866 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1027 19:37:02.805539  534866 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1027 19:37:02.805548  534866 kubeadm.go:318] 
	I1027 19:37:02.805615  534866 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1027 19:37:02.805625  534866 kubeadm.go:318] 
	I1027 19:37:02.805691  534866 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1027 19:37:02.805695  534866 kubeadm.go:318] 
	I1027 19:37:02.805766  534866 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1027 19:37:02.805885  534866 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1027 19:37:02.805981  534866 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1027 19:37:02.805991  534866 kubeadm.go:318] 
	I1027 19:37:02.806095  534866 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1027 19:37:02.806233  534866 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1027 19:37:02.806243  534866 kubeadm.go:318] 
	I1027 19:37:02.806456  534866 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token csf2z3.qsf9sz9ro4wba57t \
	I1027 19:37:02.806585  534866 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:ab29e81999671591f366788f5ae9ffb132789ebc71f7c0efdaecd38575a5ab6a \
	I1027 19:37:02.806611  534866 kubeadm.go:318] 	--control-plane 
	I1027 19:37:02.806616  534866 kubeadm.go:318] 
	I1027 19:37:02.806763  534866 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1027 19:37:02.806769  534866 kubeadm.go:318] 
	I1027 19:37:02.806882  534866 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token csf2z3.qsf9sz9ro4wba57t \
	I1027 19:37:02.807056  534866 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:ab29e81999671591f366788f5ae9ffb132789ebc71f7c0efdaecd38575a5ab6a 
	I1027 19:37:02.807065  534866 cni.go:84] Creating CNI manager for ""
	I1027 19:37:02.807091  534866 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 19:37:02.808966  534866 out.go:179] * Configuring CNI (Container Networking Interface) ...
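	With the "docker" driver and "crio" runtime, minikube always selects kindnet as the CNI. A minimal sketch for inspecting the result on the node (assuming access via `minikube ssh -p cert-expiration-368442`; these commands are illustrative, not part of the captured run):
	
	    ls /opt/cni/bin/                          # minikube stats /opt/cni/bin/portmap here before applying the manifest
	    cat /etc/cni/net.d/10-kindnet.conflist    # the kindnet config the CRI-O log later reports picking up
	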
	I1027 19:36:59.240397  536851 out.go:252]   - Booting up control plane ...
	I1027 19:36:59.240517  536851 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1027 19:36:59.240647  536851 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1027 19:36:59.241876  536851 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1027 19:36:59.262768  536851 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1027 19:36:59.262896  536851 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1027 19:36:59.273399  536851 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1027 19:36:59.273752  536851 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1027 19:36:59.273823  536851 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1027 19:36:59.412468  536851 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1027 19:36:59.412612  536851 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1027 19:37:00.416106  536851 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.00214982s
	I1027 19:37:00.421832  536851 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1027 19:37:00.421940  536851 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8555/livez
	I1027 19:37:00.422047  536851 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1027 19:37:00.422152  536851 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1027 19:37:02.156899  536851 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.735219419s
	I1027 19:37:02.810463  534866 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1027 19:37:02.818662  534866 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1027 19:37:02.818676  534866 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1027 19:37:02.841486  534866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1027 19:37:03.178316  534866 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1027 19:37:03.178424  534866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:37:03.178490  534866 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes cert-expiration-368442 minikube.k8s.io/updated_at=2025_10_27T19_37_03_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6a09c31caecd3126cc8337ca95e52380a56f5a0f minikube.k8s.io/name=cert-expiration-368442 minikube.k8s.io/primary=true
	I1027 19:37:03.197474  534866 ops.go:34] apiserver oom_adj: -16
	I1027 19:37:03.347999  534866 kubeadm.go:1113] duration metric: took 169.651748ms to wait for elevateKubeSystemPrivileges
	I1027 19:37:03.348026  534866 kubeadm.go:402] duration metric: took 11.774226305s to StartCluster
	I1027 19:37:03.348047  534866 settings.go:142] acquiring lock: {Name:mk8304c2106bf310642e0949fc0266ccb50f2f95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:37:03.348157  534866 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21801-352833/kubeconfig
	I1027 19:37:03.349661  534866 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-352833/kubeconfig: {Name:mk24cbe512a6907c874f3fb7a85390a8f9fd2b31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:37:03.349941  534866 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 19:37:03.350115  534866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1027 19:37:03.350150  534866 config.go:182] Loaded profile config "cert-expiration-368442": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:37:03.350204  534866 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1027 19:37:03.350294  534866 addons.go:69] Setting storage-provisioner=true in profile "cert-expiration-368442"
	I1027 19:37:03.350328  534866 addons.go:238] Setting addon storage-provisioner=true in "cert-expiration-368442"
	I1027 19:37:03.350366  534866 host.go:66] Checking if "cert-expiration-368442" exists ...
	I1027 19:37:03.350452  534866 addons.go:69] Setting default-storageclass=true in profile "cert-expiration-368442"
	I1027 19:37:03.350476  534866 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "cert-expiration-368442"
	I1027 19:37:03.350887  534866 cli_runner.go:164] Run: docker container inspect cert-expiration-368442 --format={{.State.Status}}
	I1027 19:37:03.351662  534866 cli_runner.go:164] Run: docker container inspect cert-expiration-368442 --format={{.State.Status}}
	I1027 19:37:03.351969  534866 out.go:179] * Verifying Kubernetes components...
	I1027 19:37:03.353648  534866 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 19:37:03.396322  534866 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 19:37:03.397086  534866 addons.go:238] Setting addon default-storageclass=true in "cert-expiration-368442"
	I1027 19:37:03.397119  534866 host.go:66] Checking if "cert-expiration-368442" exists ...
	I1027 19:37:03.397641  534866 cli_runner.go:164] Run: docker container inspect cert-expiration-368442 --format={{.State.Status}}
	I1027 19:37:03.398290  534866 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 19:37:03.398312  534866 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1027 19:37:03.398371  534866 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-368442
	I1027 19:37:03.437812  534866 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1027 19:37:03.437829  534866 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1027 19:37:03.437901  534866 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-368442
	I1027 19:37:03.447441  534866 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33365 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/cert-expiration-368442/id_rsa Username:docker}
	I1027 19:37:03.475523  534866 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33365 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/cert-expiration-368442/id_rsa Username:docker}
	I1027 19:37:03.523646  534866 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1027 19:37:03.576292  534866 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 19:37:03.624996  534866 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 19:37:03.643500  534866 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1027 19:37:03.780051  534866 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
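	The sed pipeline above rewrites the coredns ConfigMap in place; reconstructed from the sed expression (not captured output), the block it injects into the Corefile looks roughly like:
	
	    hosts {
	       192.168.76.1 host.minikube.internal
	       fallthrough
	    }
	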
	I1027 19:37:03.782822  534866 api_server.go:52] waiting for apiserver process to appear ...
	I1027 19:37:03.782880  534866 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 19:37:04.054971  534866 api_server.go:72] duration metric: took 704.995718ms to wait for apiserver process to appear ...
	I1027 19:37:04.054993  534866 api_server.go:88] waiting for apiserver healthz status ...
	I1027 19:37:04.055026  534866 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1027 19:37:04.061346  534866 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1027 19:37:04.062504  534866 api_server.go:141] control plane version: v1.34.1
	I1027 19:37:04.062524  534866 api_server.go:131] duration metric: took 7.524705ms to wait for apiserver health ...
	I1027 19:37:04.062541  534866 system_pods.go:43] waiting for kube-system pods to appear ...
	I1027 19:37:04.065657  534866 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
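	The healthz wait above is a plain HTTPS GET against the apiserver; an equivalent manual probe from the host would be something like (illustrative; -k skips verification because the cert is signed by minikube's own CA):
	
	    curl -k https://192.168.76.2:8443/healthz    # expect HTTP 200 with body "ok", matching the log above
	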
	I1027 19:37:02.338184  540831 addons.go:514] duration metric: took 3.269587ms for enable addons: enabled=[]
	I1027 19:37:02.338283  540831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 19:37:02.531543  540831 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 19:37:02.551079  540831 node_ready.go:35] waiting up to 6m0s for node "pause-249140" to be "Ready" ...
	I1027 19:37:02.562122  540831 node_ready.go:49] node "pause-249140" is "Ready"
	I1027 19:37:02.562170  540831 node_ready.go:38] duration metric: took 11.008162ms for node "pause-249140" to be "Ready" ...
	I1027 19:37:02.562187  540831 api_server.go:52] waiting for apiserver process to appear ...
	I1027 19:37:02.562245  540831 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 19:37:02.582645  540831 api_server.go:72] duration metric: took 247.857554ms to wait for apiserver process to appear ...
	I1027 19:37:02.582694  540831 api_server.go:88] waiting for apiserver healthz status ...
	I1027 19:37:02.582721  540831 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1027 19:37:02.589249  540831 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1027 19:37:02.590416  540831 api_server.go:141] control plane version: v1.34.1
	I1027 19:37:02.590449  540831 api_server.go:131] duration metric: took 7.746088ms to wait for apiserver health ...
	I1027 19:37:02.590461  540831 system_pods.go:43] waiting for kube-system pods to appear ...
	I1027 19:37:02.594394  540831 system_pods.go:59] 7 kube-system pods found
	I1027 19:37:02.594438  540831 system_pods.go:61] "coredns-66bc5c9577-zw67w" [0cb0e471-ebc9-46c0-b1fa-01239c268b53] Running
	I1027 19:37:02.594447  540831 system_pods.go:61] "etcd-pause-249140" [115e31fc-5812-4d70-8afa-93829d9571b8] Running
	I1027 19:37:02.594454  540831 system_pods.go:61] "kindnet-8df8g" [c60385cd-2c72-418e-b71f-de147e042619] Running
	I1027 19:37:02.594460  540831 system_pods.go:61] "kube-apiserver-pause-249140" [3fb5fb24-e2e4-4b84-a8c0-5a3132562289] Running
	I1027 19:37:02.594465  540831 system_pods.go:61] "kube-controller-manager-pause-249140" [e7aadb55-e49a-42d0-b3d8-46f7588c7dc2] Running
	I1027 19:37:02.594471  540831 system_pods.go:61] "kube-proxy-brj24" [921a42f0-4e87-4a36-8436-4716703e03d7] Running
	I1027 19:37:02.594477  540831 system_pods.go:61] "kube-scheduler-pause-249140" [e2971147-d4ea-47b1-abc1-30c84376bf08] Running
	I1027 19:37:02.594486  540831 system_pods.go:74] duration metric: took 4.016794ms to wait for pod list to return data ...
	I1027 19:37:02.594517  540831 default_sa.go:34] waiting for default service account to be created ...
	I1027 19:37:02.597015  540831 default_sa.go:45] found service account: "default"
	I1027 19:37:02.597050  540831 default_sa.go:55] duration metric: took 2.51699ms for default service account to be created ...
	I1027 19:37:02.597064  540831 system_pods.go:116] waiting for k8s-apps to be running ...
	I1027 19:37:02.600190  540831 system_pods.go:86] 7 kube-system pods found
	I1027 19:37:02.600222  540831 system_pods.go:89] "coredns-66bc5c9577-zw67w" [0cb0e471-ebc9-46c0-b1fa-01239c268b53] Running
	I1027 19:37:02.600230  540831 system_pods.go:89] "etcd-pause-249140" [115e31fc-5812-4d70-8afa-93829d9571b8] Running
	I1027 19:37:02.600236  540831 system_pods.go:89] "kindnet-8df8g" [c60385cd-2c72-418e-b71f-de147e042619] Running
	I1027 19:37:02.600242  540831 system_pods.go:89] "kube-apiserver-pause-249140" [3fb5fb24-e2e4-4b84-a8c0-5a3132562289] Running
	I1027 19:37:02.600248  540831 system_pods.go:89] "kube-controller-manager-pause-249140" [e7aadb55-e49a-42d0-b3d8-46f7588c7dc2] Running
	I1027 19:37:02.600254  540831 system_pods.go:89] "kube-proxy-brj24" [921a42f0-4e87-4a36-8436-4716703e03d7] Running
	I1027 19:37:02.600259  540831 system_pods.go:89] "kube-scheduler-pause-249140" [e2971147-d4ea-47b1-abc1-30c84376bf08] Running
	I1027 19:37:02.600270  540831 system_pods.go:126] duration metric: took 3.197331ms to wait for k8s-apps to be running ...
	I1027 19:37:02.600284  540831 system_svc.go:44] waiting for kubelet service to be running ....
	I1027 19:37:02.600343  540831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 19:37:02.619675  540831 system_svc.go:56] duration metric: took 19.377729ms WaitForService to wait for kubelet
	I1027 19:37:02.619712  540831 kubeadm.go:586] duration metric: took 284.936146ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 19:37:02.619736  540831 node_conditions.go:102] verifying NodePressure condition ...
	I1027 19:37:02.625203  540831 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1027 19:37:02.625239  540831 node_conditions.go:123] node cpu capacity is 8
	I1027 19:37:02.625255  540831 node_conditions.go:105] duration metric: took 5.513759ms to run NodePressure ...
	I1027 19:37:02.625270  540831 start.go:241] waiting for startup goroutines ...
	I1027 19:37:02.625279  540831 start.go:246] waiting for cluster config update ...
	I1027 19:37:02.625287  540831 start.go:255] writing updated cluster config ...
	I1027 19:37:02.625693  540831 ssh_runner.go:195] Run: rm -f paused
	I1027 19:37:02.633087  540831 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 19:37:02.633779  540831 kapi.go:59] client config for pause-249140: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21801-352833/.minikube/profiles/pause-249140/client.crt", KeyFile:"/home/jenkins/minikube-integration/21801-352833/.minikube/profiles/pause-249140/client.key", CAFile:"/home/jenkins/minikube-integration/21801-352833/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c4e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1027 19:37:02.638577  540831 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-zw67w" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:37:02.643940  540831 pod_ready.go:94] pod "coredns-66bc5c9577-zw67w" is "Ready"
	I1027 19:37:02.643969  540831 pod_ready.go:86] duration metric: took 5.363692ms for pod "coredns-66bc5c9577-zw67w" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:37:02.646889  540831 pod_ready.go:83] waiting for pod "etcd-pause-249140" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:37:02.653031  540831 pod_ready.go:94] pod "etcd-pause-249140" is "Ready"
	I1027 19:37:02.653061  540831 pod_ready.go:86] duration metric: took 6.144214ms for pod "etcd-pause-249140" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:37:02.655720  540831 pod_ready.go:83] waiting for pod "kube-apiserver-pause-249140" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:37:02.661375  540831 pod_ready.go:94] pod "kube-apiserver-pause-249140" is "Ready"
	I1027 19:37:02.661413  540831 pod_ready.go:86] duration metric: took 5.659431ms for pod "kube-apiserver-pause-249140" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:37:02.665145  540831 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-249140" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:37:03.038985  540831 pod_ready.go:94] pod "kube-controller-manager-pause-249140" is "Ready"
	I1027 19:37:03.039022  540831 pod_ready.go:86] duration metric: took 373.851049ms for pod "kube-controller-manager-pause-249140" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:37:03.239704  540831 pod_ready.go:83] waiting for pod "kube-proxy-brj24" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:37:03.639995  540831 pod_ready.go:94] pod "kube-proxy-brj24" is "Ready"
	I1027 19:37:03.640094  540831 pod_ready.go:86] duration metric: took 400.294106ms for pod "kube-proxy-brj24" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:37:03.838037  540831 pod_ready.go:83] waiting for pod "kube-scheduler-pause-249140" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:37:04.238305  540831 pod_ready.go:94] pod "kube-scheduler-pause-249140" is "Ready"
	I1027 19:37:04.238345  540831 pod_ready.go:86] duration metric: took 400.278768ms for pod "kube-scheduler-pause-249140" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:37:04.238362  540831 pod_ready.go:40] duration metric: took 1.605228732s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 19:37:04.304653  540831 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1027 19:37:04.307108  540831 out.go:179] * Done! kubectl is now configured to use "pause-249140" cluster and "default" namespace by default
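	The "extra waiting" phase at 19:37:02.633 polls kube-system pods by the listed label selectors rather than by name; a hedged kubectl equivalent, assuming the "pause-249140" context minikube just configured:
	
	    kubectl --context pause-249140 -n kube-system get pods -l k8s-app=kube-dns
	    kubectl --context pause-249140 -n kube-system get pods -l component=kube-apiserver
	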
	I1027 19:37:04.067248  534866 addons.go:514] duration metric: took 717.03244ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1027 19:37:04.068601  534866 system_pods.go:59] 5 kube-system pods found
	I1027 19:37:04.068629  534866 system_pods.go:61] "etcd-cert-expiration-368442" [6db4177d-1cb1-44fc-96b3-1d498aa77503] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1027 19:37:04.068640  534866 system_pods.go:61] "kube-apiserver-cert-expiration-368442" [c62126bd-e920-44f9-a512-789de92f95af] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1027 19:37:04.068650  534866 system_pods.go:61] "kube-controller-manager-cert-expiration-368442" [cd1242be-b425-42c3-b5a5-4fe1b8469aaf] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1027 19:37:04.068658  534866 system_pods.go:61] "kube-scheduler-cert-expiration-368442" [08daea0d-d7c4-408c-b114-679e832fd107] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1027 19:37:04.068664  534866 system_pods.go:61] "storage-provisioner" [4bdd52aa-0af6-40dd-a74e-e4a6ea24801f] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1027 19:37:04.068673  534866 system_pods.go:74] duration metric: took 6.097178ms to wait for pod list to return data ...
	I1027 19:37:04.068687  534866 kubeadm.go:586] duration metric: took 718.716671ms to wait for: map[apiserver:true system_pods:true]
	I1027 19:37:04.068700  534866 node_conditions.go:102] verifying NodePressure condition ...
	I1027 19:37:04.072470  534866 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1027 19:37:04.072492  534866 node_conditions.go:123] node cpu capacity is 8
	I1027 19:37:04.072526  534866 node_conditions.go:105] duration metric: took 3.821008ms to run NodePressure ...
	I1027 19:37:04.072542  534866 start.go:241] waiting for startup goroutines ...
	I1027 19:37:04.286015  534866 kapi.go:214] "coredns" deployment in "kube-system" namespace and "cert-expiration-368442" context rescaled to 1 replicas
	I1027 19:37:04.286048  534866 start.go:246] waiting for cluster config update ...
	I1027 19:37:04.286063  534866 start.go:255] writing updated cluster config ...
	I1027 19:37:04.286418  534866 ssh_runner.go:195] Run: rm -f paused
	I1027 19:37:04.362657  534866 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1027 19:37:04.364484  534866 out.go:179] * Done! kubectl is now configured to use "cert-expiration-368442" cluster and "default" namespace by default
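	The storage-provisioner pod reported Pending above is blocked by the node.kubernetes.io/not-ready taint, which the kubelet clears once the CNI is ready; a sketch for watching that happen (illustrative command, not from the run):
	
	    kubectl --context cert-expiration-368442 get node cert-expiration-368442 -o jsonpath='{.spec.taints}'
	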
	I1027 19:37:03.319408  536851 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.896641994s
	I1027 19:37:05.424504  536851 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 5.002976559s
	I1027 19:37:05.437300  536851 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1027 19:37:05.450198  536851 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1027 19:37:05.462679  536851 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1027 19:37:05.462995  536851 kubeadm.go:318] [mark-control-plane] Marking the node cert-options-638768 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1027 19:37:05.474196  536851 kubeadm.go:318] [bootstrap-token] Using token: pwnbmb.s8kqfw038b1ym7jv
	I1027 19:37:00.728193  541957 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 ...
	I1027 19:37:00.781720  541957 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.3/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4?checksum=md5:6681d82b7b719ef3324102b709ec62eb -> /home/jenkins/minikube-integration/21801-352833/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	I1027 19:37:03.422262  541957 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 ...
	I1027 19:37:03.422554  541957 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/21801-352833/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4 ...
	I1027 19:37:04.510087  541957 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.3 on crio
	I1027 19:37:04.510363  541957 profile.go:148] Saving config to /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/missing-upgrade-345161/config.json ...
	I1027 19:37:04.510406  541957 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/missing-upgrade-345161/config.json: {Name:mkea8a6de8705a0768ede63f0a3af506fb5e41bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:37:04.792874  541957 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 as a tarball
	I1027 19:37:04.792893  541957 cache.go:162] Loading gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 from local cache
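	The preload download above carries its expected digest in the URL (checksum=md5:6681d82b7b719ef3324102b709ec62eb), and preload.go re-verifies it after saving; a manual spot-check would be:
	
	    md5sum /home/jenkins/minikube-integration/21801-352833/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4
	    # expect: 6681d82b7b719ef3324102b709ec62eb
	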
	I1027 19:37:05.476097  536851 out.go:252]   - Configuring RBAC rules ...
	I1027 19:37:05.476326  536851 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1027 19:37:05.480513  536851 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1027 19:37:05.488302  536851 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1027 19:37:05.492390  536851 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1027 19:37:05.495924  536851 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1027 19:37:05.499576  536851 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1027 19:37:05.831700  536851 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1027 19:37:06.251283  536851 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1027 19:37:06.832203  536851 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1027 19:37:06.833124  536851 kubeadm.go:318] 
	I1027 19:37:06.833230  536851 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1027 19:37:06.833235  536851 kubeadm.go:318] 
	I1027 19:37:06.833334  536851 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1027 19:37:06.833367  536851 kubeadm.go:318] 
	I1027 19:37:06.833406  536851 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1027 19:37:06.833495  536851 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1027 19:37:06.833592  536851 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1027 19:37:06.833603  536851 kubeadm.go:318] 
	I1027 19:37:06.833674  536851 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1027 19:37:06.833678  536851 kubeadm.go:318] 
	I1027 19:37:06.833747  536851 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1027 19:37:06.833752  536851 kubeadm.go:318] 
	I1027 19:37:06.833822  536851 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1027 19:37:06.833938  536851 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1027 19:37:06.834028  536851 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1027 19:37:06.834033  536851 kubeadm.go:318] 
	I1027 19:37:06.834170  536851 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1027 19:37:06.834276  536851 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1027 19:37:06.834280  536851 kubeadm.go:318] 
	I1027 19:37:06.834358  536851 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8555 --token pwnbmb.s8kqfw038b1ym7jv \
	I1027 19:37:06.834443  536851 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:ab29e81999671591f366788f5ae9ffb132789ebc71f7c0efdaecd38575a5ab6a \
	I1027 19:37:06.834470  536851 kubeadm.go:318] 	--control-plane 
	I1027 19:37:06.834473  536851 kubeadm.go:318] 
	I1027 19:37:06.834548  536851 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1027 19:37:06.834551  536851 kubeadm.go:318] 
	I1027 19:37:06.834618  536851 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8555 --token pwnbmb.s8kqfw038b1ym7jv \
	I1027 19:37:06.834703  536851 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:ab29e81999671591f366788f5ae9ffb132789ebc71f7c0efdaecd38575a5ab6a 
	I1027 19:37:06.838531  536851 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1042-gcp\n", err: exit status 1
	I1027 19:37:06.838667  536851 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1027 19:37:06.838873  536851 cni.go:84] Creating CNI manager for ""
	I1027 19:37:06.838883  536851 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 19:37:06.840916  536851 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1027 19:37:06.842150  536851 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1027 19:37:06.848007  536851 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1027 19:37:06.848022  536851 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1027 19:37:06.866725  536851 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1027 19:37:07.191669  536851 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1027 19:37:07.191775  536851 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:37:07.191849  536851 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes cert-options-638768 minikube.k8s.io/updated_at=2025_10_27T19_37_07_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6a09c31caecd3126cc8337ca95e52380a56f5a0f minikube.k8s.io/name=cert-options-638768 minikube.k8s.io/primary=true
	I1027 19:37:07.319894  536851 ops.go:34] apiserver oom_adj: -16
	I1027 19:37:07.319920  536851 kubeadm.go:1113] duration metric: took 128.234419ms to wait for elevateKubeSystemPrivileges
	I1027 19:37:07.319933  536851 kubeadm.go:402] duration metric: took 13.402652135s to StartCluster
	I1027 19:37:07.319959  536851 settings.go:142] acquiring lock: {Name:mk8304c2106bf310642e0949fc0266ccb50f2f95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:37:07.320047  536851 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21801-352833/kubeconfig
	I1027 19:37:07.321812  536851 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-352833/kubeconfig: {Name:mk24cbe512a6907c874f3fb7a85390a8f9fd2b31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:37:07.322087  536851 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1027 19:37:07.322097  536851 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8555 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 19:37:07.322171  536851 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1027 19:37:07.322251  536851 addons.go:69] Setting storage-provisioner=true in profile "cert-options-638768"
	I1027 19:37:07.322264  536851 addons.go:238] Setting addon storage-provisioner=true in "cert-options-638768"
	I1027 19:37:07.322286  536851 host.go:66] Checking if "cert-options-638768" exists ...
	I1027 19:37:07.322368  536851 config.go:182] Loaded profile config "cert-options-638768": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:37:07.322425  536851 addons.go:69] Setting default-storageclass=true in profile "cert-options-638768"
	I1027 19:37:07.322439  536851 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "cert-options-638768"
	I1027 19:37:07.322802  536851 cli_runner.go:164] Run: docker container inspect cert-options-638768 --format={{.State.Status}}
	I1027 19:37:07.322905  536851 cli_runner.go:164] Run: docker container inspect cert-options-638768 --format={{.State.Status}}
	I1027 19:37:07.324093  536851 out.go:179] * Verifying Kubernetes components...
	I1027 19:37:07.326215  536851 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 19:37:07.351190  536851 addons.go:238] Setting addon default-storageclass=true in "cert-options-638768"
	I1027 19:37:07.351235  536851 host.go:66] Checking if "cert-options-638768" exists ...
	I1027 19:37:07.351785  536851 cli_runner.go:164] Run: docker container inspect cert-options-638768 --format={{.State.Status}}
	I1027 19:37:07.353608  536851 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 19:37:07.355119  536851 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 19:37:07.355143  536851 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1027 19:37:07.355278  536851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-options-638768
	I1027 19:37:07.385731  536851 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1027 19:37:07.385748  536851 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1027 19:37:07.385823  536851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-options-638768
	I1027 19:37:07.390190  536851 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33370 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/cert-options-638768/id_rsa Username:docker}
	I1027 19:37:07.415994  536851 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33370 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/cert-options-638768/id_rsa Username:docker}
	I1027 19:37:07.444042  536851 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1027 19:37:07.508384  536851 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 19:37:07.542379  536851 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 19:37:07.554869  536851 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1027 19:37:07.686704  536851 start.go:976] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1027 19:37:07.688471  536851 api_server.go:52] waiting for apiserver process to appear ...
	I1027 19:37:07.688533  536851 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 19:37:07.933648  536851 api_server.go:72] duration metric: took 611.518685ms to wait for apiserver process to appear ...
	I1027 19:37:07.933668  536851 api_server.go:88] waiting for apiserver healthz status ...
	I1027 19:37:07.933688  536851 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8555/healthz ...
	I1027 19:37:07.939819  536851 api_server.go:279] https://192.168.94.2:8555/healthz returned 200:
	ok
	I1027 19:37:07.940829  536851 api_server.go:141] control plane version: v1.34.1
	I1027 19:37:07.940858  536851 api_server.go:131] duration metric: took 7.175991ms to wait for apiserver health ...
	I1027 19:37:07.940868  536851 system_pods.go:43] waiting for kube-system pods to appear ...
	I1027 19:37:07.945767  536851 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1027 19:37:07.947502  536851 addons.go:514] duration metric: took 625.3242ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1027 19:37:07.950787  536851 system_pods.go:59] 5 kube-system pods found
	I1027 19:37:07.950818  536851 system_pods.go:61] "etcd-cert-options-638768" [273e3e83-1fb8-4239-95dd-ed6f17b5773e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1027 19:37:07.950830  536851 system_pods.go:61] "kube-apiserver-cert-options-638768" [56006940-041a-4e3b-82e8-2dafa2bba80e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1027 19:37:07.950839  536851 system_pods.go:61] "kube-controller-manager-cert-options-638768" [708b59bb-7cef-488d-86be-214e560bea88] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1027 19:37:07.950848  536851 system_pods.go:61] "kube-scheduler-cert-options-638768" [63d258f1-1fd2-4747-96be-33fd6261092d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1027 19:37:07.950853  536851 system_pods.go:61] "storage-provisioner" [ebfb087b-3a57-4275-aed5-09f270aa145c] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1027 19:37:07.950862  536851 system_pods.go:74] duration metric: took 9.987094ms to wait for pod list to return data ...
	I1027 19:37:07.950877  536851 kubeadm.go:586] duration metric: took 628.753884ms to wait for: map[apiserver:true system_pods:true]
	I1027 19:37:07.950892  536851 node_conditions.go:102] verifying NodePressure condition ...
	I1027 19:37:07.954114  536851 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1027 19:37:07.954155  536851 node_conditions.go:123] node cpu capacity is 8
	I1027 19:37:07.954172  536851 node_conditions.go:105] duration metric: took 3.276023ms to run NodePressure ...
	I1027 19:37:07.954187  536851 start.go:241] waiting for startup goroutines ...
	I1027 19:37:08.192338  536851 kapi.go:214] "coredns" deployment in "kube-system" namespace and "cert-options-638768" context rescaled to 1 replicas
	I1027 19:37:08.192378  536851 start.go:246] waiting for cluster config update ...
	I1027 19:37:08.192392  536851 start.go:255] writing updated cluster config ...
	I1027 19:37:08.192753  536851 ssh_runner.go:195] Run: rm -f paused
	I1027 19:37:08.267034  536851 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1027 19:37:08.268798  536851 out.go:179] * Done! kubectl is now configured to use "cert-options-638768" cluster and "default" namespace by default
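	cert-options-638768 serves the apiserver on the non-default 192.168.94.2:8555; one way to confirm the certificate options the test exercises (an illustrative openssl invocation, not part of the log):
	
	    openssl s_client -connect 192.168.94.2:8555 </dev/null 2>/dev/null | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'
	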
	
	
	==> CRI-O <==
	Oct 27 19:37:00 pause-249140 crio[2158]: time="2025-10-27T19:37:00.471800083Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Oct 27 19:37:00 pause-249140 crio[2158]: time="2025-10-27T19:37:00.472866104Z" level=info msg="Conmon does support the --sync option"
	Oct 27 19:37:00 pause-249140 crio[2158]: time="2025-10-27T19:37:00.472893254Z" level=info msg="Conmon does support the --log-global-size-max option"
	Oct 27 19:37:00 pause-249140 crio[2158]: time="2025-10-27T19:37:00.472916005Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Oct 27 19:37:00 pause-249140 crio[2158]: time="2025-10-27T19:37:00.473793653Z" level=info msg="Conmon does support the --sync option"
	Oct 27 19:37:00 pause-249140 crio[2158]: time="2025-10-27T19:37:00.47381734Z" level=info msg="Conmon does support the --log-global-size-max option"
	Oct 27 19:37:00 pause-249140 crio[2158]: time="2025-10-27T19:37:00.478584268Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 27 19:37:00 pause-249140 crio[2158]: time="2025-10-27T19:37:00.478625319Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 27 19:37:00 pause-249140 crio[2158]: time="2025-10-27T19:37:00.479439918Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/
cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"
/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Oct 27 19:37:00 pause-249140 crio[2158]: time="2025-10-27T19:37:00.48004216Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Oct 27 19:37:00 pause-249140 crio[2158]: time="2025-10-27T19:37:00.480412608Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Oct 27 19:37:00 pause-249140 crio[2158]: time="2025-10-27T19:37:00.488226742Z" level=info msg="No kernel support for IPv6: could not find nftables binary: exec: \"nft\": executable file not found in $PATH"
	Oct 27 19:37:00 pause-249140 crio[2158]: time="2025-10-27T19:37:00.543951994Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-zw67w Namespace:kube-system ID:4720fc3e929dbb3031684a17cca7299c28e24c6e3a8b181e2e9f6a6233a24898 UID:0cb0e471-ebc9-46c0-b1fa-01239c268b53 NetNS:/var/run/netns/cd3fa7f4-947e-4ff8-9d5f-d09f18d4d27f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000508260}] Aliases:map[]}"
	Oct 27 19:37:00 pause-249140 crio[2158]: time="2025-10-27T19:37:00.544222707Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-zw67w for CNI network kindnet (type=ptp)"
	Oct 27 19:37:00 pause-249140 crio[2158]: time="2025-10-27T19:37:00.54486064Z" level=info msg="Registered SIGHUP reload watcher"
	Oct 27 19:37:00 pause-249140 crio[2158]: time="2025-10-27T19:37:00.544895918Z" level=info msg="Starting seccomp notifier watcher"
	Oct 27 19:37:00 pause-249140 crio[2158]: time="2025-10-27T19:37:00.544969497Z" level=info msg="Create NRI interface"
	Oct 27 19:37:00 pause-249140 crio[2158]: time="2025-10-27T19:37:00.545086991Z" level=info msg="built-in NRI default validator is disabled"
	Oct 27 19:37:00 pause-249140 crio[2158]: time="2025-10-27T19:37:00.545105568Z" level=info msg="runtime interface created"
	Oct 27 19:37:00 pause-249140 crio[2158]: time="2025-10-27T19:37:00.545119087Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Oct 27 19:37:00 pause-249140 crio[2158]: time="2025-10-27T19:37:00.545126958Z" level=info msg="runtime interface starting up..."
	Oct 27 19:37:00 pause-249140 crio[2158]: time="2025-10-27T19:37:00.545157387Z" level=info msg="starting plugins..."
	Oct 27 19:37:00 pause-249140 crio[2158]: time="2025-10-27T19:37:00.545176757Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Oct 27 19:37:00 pause-249140 crio[2158]: time="2025-10-27T19:37:00.545664442Z" level=info msg="No systemd watchdog enabled"
	Oct 27 19:37:00 pause-249140 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	a5d43c06cdefd       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   16 seconds ago      Running             coredns                   0                   4720fc3e929db       coredns-66bc5c9577-zw67w               kube-system
	23504db13cbd1       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   27 seconds ago      Running             kube-proxy                0                   e067b230cbfc4       kube-proxy-brj24                       kube-system
	01bf760b3e7b2       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   27 seconds ago      Running             kindnet-cni               0                   f9f82ba4d47d0       kindnet-8df8g                          kube-system
	d69010095e3eb       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   39 seconds ago      Running             kube-apiserver            0                   a30dd977a26cf       kube-apiserver-pause-249140            kube-system
	c5b2eb2a54f88       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   39 seconds ago      Running             kube-controller-manager   0                   db877865abb6e       kube-controller-manager-pause-249140   kube-system
	01712f1073762       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   39 seconds ago      Running             etcd                      0                   47807b3dfccd6       etcd-pause-249140                      kube-system
	e4828303cd2a9       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   39 seconds ago      Running             kube-scheduler            0                   26f4821c6f0ed       kube-scheduler-pause-249140            kube-system
	
	
	==> coredns [a5d43c06cdefd0fd790cb0418ec7193d78de34b9aa196d7434e89fa6e058a9e2] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:39177 - 32461 "HINFO IN 1021597915146797741.8892895048223440577. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.475172268s
	
	
	==> describe nodes <==
	Name:               pause-249140
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-249140
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a09c31caecd3126cc8337ca95e52380a56f5a0f
	                    minikube.k8s.io/name=pause-249140
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_27T19_36_37_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Oct 2025 19:36:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-249140
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Oct 2025 19:36:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Oct 2025 19:36:53 +0000   Mon, 27 Oct 2025 19:36:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Oct 2025 19:36:53 +0000   Mon, 27 Oct 2025 19:36:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Oct 2025 19:36:53 +0000   Mon, 27 Oct 2025 19:36:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Oct 2025 19:36:53 +0000   Mon, 27 Oct 2025 19:36:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-249140
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                f8695a9d-745c-4821-92ff-cf4a719b1310
	  Boot ID:                    811bd29c-e64e-4acc-9427-bab1f7caed93
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-zw67w                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     28s
	  kube-system                 etcd-pause-249140                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         34s
	  kube-system                 kindnet-8df8g                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      29s
	  kube-system                 kube-apiserver-pause-249140             250m (3%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-controller-manager-pause-249140    200m (2%)     0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-proxy-brj24                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-scheduler-pause-249140             100m (1%)     0 (0%)      0 (0%)           0 (0%)         34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 27s                kube-proxy       
	  Normal  NodeHasSufficientMemory  40s (x8 over 40s)  kubelet          Node pause-249140 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    40s (x8 over 40s)  kubelet          Node pause-249140 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     40s (x8 over 40s)  kubelet          Node pause-249140 status is now: NodeHasSufficientPID
	  Normal  Starting                 34s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  34s                kubelet          Node pause-249140 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    34s                kubelet          Node pause-249140 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     34s                kubelet          Node pause-249140 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           29s                node-controller  Node pause-249140 event: Registered Node pause-249140 in Controller
	  Normal  NodeReady                17s                kubelet          Node pause-249140 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5a 23 52 43 9a ba 08 06
	[  +0.000398] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 12 50 95 0e df 53 08 06
	[Oct27 18:59] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +1.017295] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +1.023893] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +1.023882] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +1.023891] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +1.023851] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +2.047849] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +4.031592] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +8.319143] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[ +16.382183] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[Oct27 19:00] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	
	
	==> etcd [01712f1073762c52020031153783123eaffdca1ca62a7f9798f8eee04cb57fd9] <==
	{"level":"info","ts":"2025-10-27T19:36:42.332181Z","caller":"traceutil/trace.go:172","msg":"trace[106228640] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/kindnet; range_end:; response_count:1; response_revision:348; }","duration":"175.806535ms","start":"2025-10-27T19:36:42.156362Z","end":"2025-10-27T19:36:42.332169Z","steps":["trace[106228640] 'agreement among raft nodes before linearized reading'  (duration: 175.591423ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-27T19:36:42.332248Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"240.014754ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/endpointslice-controller\" limit:1 ","response":"range_response_count:1 size:214"}
	{"level":"info","ts":"2025-10-27T19:36:42.332293Z","caller":"traceutil/trace.go:172","msg":"trace[54705406] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/endpointslice-controller; range_end:; response_count:1; response_revision:348; }","duration":"240.069912ms","start":"2025-10-27T19:36:42.092212Z","end":"2025-10-27T19:36:42.332282Z","steps":["trace[54705406] 'agreement among raft nodes before linearized reading'  (duration: 239.912915ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T19:36:42.332446Z","caller":"traceutil/trace.go:172","msg":"trace[634985258] transaction","detail":"{read_only:false; response_revision:352; number_of_response:1; }","duration":"272.873777ms","start":"2025-10-27T19:36:42.059560Z","end":"2025-10-27T19:36:42.332434Z","steps":["trace[634985258] 'process raft request'  (duration: 272.817808ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T19:36:42.332516Z","caller":"traceutil/trace.go:172","msg":"trace[1197883062] transaction","detail":"{read_only:false; response_revision:349; number_of_response:1; }","duration":"273.645267ms","start":"2025-10-27T19:36:42.058858Z","end":"2025-10-27T19:36:42.332503Z","steps":["trace[1197883062] 'process raft request'  (duration: 273.307157ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T19:36:42.332596Z","caller":"traceutil/trace.go:172","msg":"trace[1198813220] transaction","detail":"{read_only:false; response_revision:350; number_of_response:1; }","duration":"273.153879ms","start":"2025-10-27T19:36:42.059431Z","end":"2025-10-27T19:36:42.332585Z","steps":["trace[1198813220] 'process raft request'  (duration: 272.827679ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-27T19:36:42.332667Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"189.9559ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/clusterrole-aggregation-controller\" limit:1 ","response":"range_response_count:1 size:234"}
	{"level":"info","ts":"2025-10-27T19:36:42.332697Z","caller":"traceutil/trace.go:172","msg":"trace[546690537] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/clusterrole-aggregation-controller; range_end:; response_count:1; response_revision:351; }","duration":"189.990163ms","start":"2025-10-27T19:36:42.142696Z","end":"2025-10-27T19:36:42.332686Z","steps":["trace[546690537] 'agreement among raft nodes before linearized reading'  (duration: 189.678181ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T19:36:42.332843Z","caller":"traceutil/trace.go:172","msg":"trace[935065149] transaction","detail":"{read_only:false; response_revision:353; number_of_response:1; }","duration":"272.430122ms","start":"2025-10-27T19:36:42.060402Z","end":"2025-10-27T19:36:42.332832Z","steps":["trace[935065149] 'process raft request'  (duration: 272.010212ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T19:36:42.332889Z","caller":"traceutil/trace.go:172","msg":"trace[1698041888] transaction","detail":"{read_only:false; response_revision:351; number_of_response:1; }","duration":"273.346948ms","start":"2025-10-27T19:36:42.059532Z","end":"2025-10-27T19:36:42.332879Z","steps":["trace[1698041888] 'process raft request'  (duration: 272.778096ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-27T19:36:42.333084Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"273.007923ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/daemonsets/kube-system/kindnet\" limit:1 ","response":"range_response_count:1 size:4681"}
	{"level":"info","ts":"2025-10-27T19:36:42.333832Z","caller":"traceutil/trace.go:172","msg":"trace[1847432613] range","detail":"{range_begin:/registry/daemonsets/kube-system/kindnet; range_end:; response_count:1; response_revision:353; }","duration":"273.758168ms","start":"2025-10-27T19:36:42.060061Z","end":"2025-10-27T19:36:42.333819Z","steps":["trace[1847432613] 'agreement among raft nodes before linearized reading'  (duration: 272.944985ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T19:36:47.322837Z","caller":"traceutil/trace.go:172","msg":"trace[1298614446] transaction","detail":"{read_only:false; response_revision:414; number_of_response:1; }","duration":"136.562841ms","start":"2025-10-27T19:36:47.186247Z","end":"2025-10-27T19:36:47.322810Z","steps":["trace[1298614446] 'process raft request'  (duration: 92.920011ms)","trace[1298614446] 'compare'  (duration: 43.506639ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-27T19:36:47.592030Z","caller":"traceutil/trace.go:172","msg":"trace[1418362237] linearizableReadLoop","detail":"{readStateIndex:429; appliedIndex:429; }","duration":"212.342136ms","start":"2025-10-27T19:36:47.379660Z","end":"2025-10-27T19:36:47.592002Z","steps":["trace[1418362237] 'read index received'  (duration: 212.333094ms)","trace[1418362237] 'applied index is now lower than readState.Index'  (duration: 7.408µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-27T19:36:47.723522Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"343.834463ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-27T19:36:47.723591Z","caller":"traceutil/trace.go:172","msg":"trace[1991769037] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:414; }","duration":"343.922984ms","start":"2025-10-27T19:36:47.379653Z","end":"2025-10-27T19:36:47.723576Z","steps":["trace[1991769037] 'agreement among raft nodes before linearized reading'  (duration: 212.441254ms)","trace[1991769037] 'range keys from in-memory index tree'  (duration: 131.360383ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-27T19:36:47.724005Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"131.576021ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722596663596494565 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-scheduler-pause-249140\" mod_revision:414 > success:<request_put:<key:\"/registry/pods/kube-system/kube-scheduler-pause-249140\" value_size:4706 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-scheduler-pause-249140\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-10-27T19:36:47.724094Z","caller":"traceutil/trace.go:172","msg":"trace[270291271] transaction","detail":"{read_only:false; response_revision:415; number_of_response:1; }","duration":"392.527109ms","start":"2025-10-27T19:36:47.331552Z","end":"2025-10-27T19:36:47.724080Z","steps":["trace[270291271] 'process raft request'  (duration: 260.542241ms)","trace[270291271] 'compare'  (duration: 131.47758ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-27T19:36:47.724199Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-27T19:36:47.331528Z","time spent":"392.593578ms","remote":"127.0.0.1:38094","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4768,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-scheduler-pause-249140\" mod_revision:414 > success:<request_put:<key:\"/registry/pods/kube-system/kube-scheduler-pause-249140\" value_size:4706 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-scheduler-pause-249140\" > >"}
	{"level":"warn","ts":"2025-10-27T19:36:47.947822Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"117.943632ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/pause-249140\" limit:1 ","response":"range_response_count:1 size:5582"}
	{"level":"info","ts":"2025-10-27T19:36:47.947900Z","caller":"traceutil/trace.go:172","msg":"trace[1277933230] range","detail":"{range_begin:/registry/minions/pause-249140; range_end:; response_count:1; response_revision:415; }","duration":"118.032403ms","start":"2025-10-27T19:36:47.829847Z","end":"2025-10-27T19:36:47.947880Z","steps":["trace[1277933230] 'range keys from in-memory index tree'  (duration: 117.779442ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T19:36:48.501694Z","caller":"traceutil/trace.go:172","msg":"trace[605054815] linearizableReadLoop","detail":"{readStateIndex:431; appliedIndex:431; }","duration":"121.866416ms","start":"2025-10-27T19:36:48.379798Z","end":"2025-10-27T19:36:48.501664Z","steps":["trace[605054815] 'read index received'  (duration: 121.854546ms)","trace[605054815] 'applied index is now lower than readState.Index'  (duration: 10.199µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-27T19:36:48.501828Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"122.00239ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-27T19:36:48.502253Z","caller":"traceutil/trace.go:172","msg":"trace[1292918405] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:416; }","duration":"122.439192ms","start":"2025-10-27T19:36:48.379792Z","end":"2025-10-27T19:36:48.502231Z","steps":["trace[1292918405] 'agreement among raft nodes before linearized reading'  (duration: 121.961713ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T19:36:48.502563Z","caller":"traceutil/trace.go:172","msg":"trace[97922063] transaction","detail":"{read_only:false; response_revision:417; number_of_response:1; }","duration":"148.011545ms","start":"2025-10-27T19:36:48.354509Z","end":"2025-10-27T19:36:48.502521Z","steps":["trace[97922063] 'process raft request'  (duration: 147.212732ms)"],"step_count":1}
	
	
	==> kernel <==
	 19:37:10 up  2:19,  0 user,  load average: 6.25, 2.40, 1.46
	Linux pause-249140 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [01bf760b3e7b21a98d5df158a80b1c0b879013421d7c5e47ff7903915caf96a9] <==
	I1027 19:36:42.911807       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1027 19:36:42.912109       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1027 19:36:43.003297       1 main.go:148] setting mtu 1500 for CNI 
	I1027 19:36:43.003334       1 main.go:178] kindnetd IP family: "ipv4"
	I1027 19:36:43.003361       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-27T19:36:43Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1027 19:36:43.211657       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1027 19:36:43.211708       1 controller.go:381] "Waiting for informer caches to sync"
	I1027 19:36:43.211722       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1027 19:36:43.211876       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1027 19:36:43.511827       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1027 19:36:43.511993       1 metrics.go:72] Registering metrics
	I1027 19:36:43.512165       1 controller.go:711] "Syncing nftables rules"
	I1027 19:36:53.213841       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1027 19:36:53.213938       1 main.go:301] handling current node
	I1027 19:37:03.217870       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1027 19:37:03.217920       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d69010095e3eba77e809b777fa9e622cf5c9528a2eab5611100fa5eed6283461] <==
	I1027 19:36:34.120025       1 policy_source.go:240] refreshing policies
	E1027 19:36:34.143980       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1027 19:36:34.189686       1 controller.go:667] quota admission added evaluator for: namespaces
	I1027 19:36:34.195587       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 19:36:34.196458       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1027 19:36:34.206847       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 19:36:34.207413       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1027 19:36:34.282785       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1027 19:36:34.993182       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1027 19:36:34.998286       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1027 19:36:34.998310       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1027 19:36:35.633701       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1027 19:36:35.722973       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1027 19:36:35.898971       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1027 19:36:35.907112       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1027 19:36:35.908672       1 controller.go:667] quota admission added evaluator for: endpoints
	I1027 19:36:35.915678       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1027 19:36:36.046772       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1027 19:36:36.740740       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1027 19:36:36.755099       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1027 19:36:36.765000       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1027 19:36:41.550665       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 19:36:41.569428       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1027 19:36:41.630810       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 19:36:42.058708       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [c5b2eb2a54f889f17b3db8afb09c190f60784cb1f08c460017039d3d947aeaaf] <==
	I1027 19:36:41.444563       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 19:36:41.444576       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1027 19:36:41.444586       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1027 19:36:41.444982       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1027 19:36:41.445003       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1027 19:36:41.445110       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1027 19:36:41.445238       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-249140"
	I1027 19:36:41.445302       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1027 19:36:41.445341       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1027 19:36:41.445499       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1027 19:36:41.446083       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1027 19:36:41.446264       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1027 19:36:41.446584       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1027 19:36:41.448439       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1027 19:36:41.450735       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1027 19:36:41.453063       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1027 19:36:41.455632       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1027 19:36:41.457218       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1027 19:36:41.458451       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1027 19:36:41.458565       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1027 19:36:41.463751       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1027 19:36:41.464967       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1027 19:36:41.468348       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1027 19:36:41.567257       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-249140" podCIDRs=["10.244.0.0/24"]
	I1027 19:36:56.446880       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [23504db13cbd1fd12a985de0d72ca202ac317afa3c2b2e13010bc502e000e818] <==
	I1027 19:36:42.787018       1 server_linux.go:53] "Using iptables proxy"
	I1027 19:36:42.851653       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1027 19:36:42.952324       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1027 19:36:42.952377       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1027 19:36:42.952536       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1027 19:36:42.975884       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1027 19:36:42.975945       1 server_linux.go:132] "Using iptables Proxier"
	I1027 19:36:42.984014       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1027 19:36:42.984739       1 server.go:527] "Version info" version="v1.34.1"
	I1027 19:36:42.984768       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 19:36:42.988027       1 config.go:200] "Starting service config controller"
	I1027 19:36:42.988057       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1027 19:36:42.988248       1 config.go:403] "Starting serviceCIDR config controller"
	I1027 19:36:42.988261       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1027 19:36:42.988311       1 config.go:106] "Starting endpoint slice config controller"
	I1027 19:36:42.988319       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1027 19:36:42.989012       1 config.go:309] "Starting node config controller"
	I1027 19:36:42.989035       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1027 19:36:43.089011       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1027 19:36:43.089166       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1027 19:36:43.089179       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1027 19:36:43.089174       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [e4828303cd2a90f2436dec99343b7ffa44a1eb586b82513fc0a7a01f1a37cd0d] <==
	E1027 19:36:34.046875       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1027 19:36:34.047178       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1027 19:36:34.047438       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1027 19:36:34.047455       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1027 19:36:34.047609       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1027 19:36:34.047218       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1027 19:36:34.047797       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1027 19:36:34.048172       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1027 19:36:34.048765       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1027 19:36:34.048794       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1027 19:36:34.915918       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1027 19:36:34.959961       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1027 19:36:34.994414       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1027 19:36:35.002648       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1027 19:36:35.052729       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1027 19:36:35.135439       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1027 19:36:35.163931       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1027 19:36:35.180758       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1027 19:36:35.181158       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1027 19:36:35.184734       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1027 19:36:35.192597       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1027 19:36:35.230430       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1027 19:36:35.235610       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1027 19:36:35.481930       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1027 19:36:37.443279       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 27 19:36:37 pause-249140 kubelet[1305]: E1027 19:36:37.713594    1305 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-249140\" already exists" pod="kube-system/kube-apiserver-pause-249140"
	Oct 27 19:36:37 pause-249140 kubelet[1305]: I1027 19:36:37.751875    1305 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-pause-249140" podStartSLOduration=1.751846647 podStartE2EDuration="1.751846647s" podCreationTimestamp="2025-10-27 19:36:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 19:36:37.736069803 +0000 UTC m=+1.207249199" watchObservedRunningTime="2025-10-27 19:36:37.751846647 +0000 UTC m=+1.223026043"
	Oct 27 19:36:37 pause-249140 kubelet[1305]: I1027 19:36:37.768714    1305 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-pause-249140" podStartSLOduration=1.768689063 podStartE2EDuration="1.768689063s" podCreationTimestamp="2025-10-27 19:36:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 19:36:37.752485093 +0000 UTC m=+1.223664505" watchObservedRunningTime="2025-10-27 19:36:37.768689063 +0000 UTC m=+1.239868458"
	Oct 27 19:36:37 pause-249140 kubelet[1305]: I1027 19:36:37.787865    1305 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-pause-249140" podStartSLOduration=2.787841485 podStartE2EDuration="2.787841485s" podCreationTimestamp="2025-10-27 19:36:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 19:36:37.76923326 +0000 UTC m=+1.240412656" watchObservedRunningTime="2025-10-27 19:36:37.787841485 +0000 UTC m=+1.259020882"
	Oct 27 19:36:37 pause-249140 kubelet[1305]: I1027 19:36:37.808662    1305 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-pause-249140" podStartSLOduration=1.808636937 podStartE2EDuration="1.808636937s" podCreationTimestamp="2025-10-27 19:36:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 19:36:37.788831215 +0000 UTC m=+1.260010612" watchObservedRunningTime="2025-10-27 19:36:37.808636937 +0000 UTC m=+1.279816332"
	Oct 27 19:36:41 pause-249140 kubelet[1305]: I1027 19:36:41.635229    1305 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 27 19:36:41 pause-249140 kubelet[1305]: I1027 19:36:41.636124    1305 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 27 19:36:42 pause-249140 kubelet[1305]: I1027 19:36:42.053087    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/c60385cd-2c72-418e-b71f-de147e042619-cni-cfg\") pod \"kindnet-8df8g\" (UID: \"c60385cd-2c72-418e-b71f-de147e042619\") " pod="kube-system/kindnet-8df8g"
	Oct 27 19:36:42 pause-249140 kubelet[1305]: I1027 19:36:42.053155    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbfcp\" (UniqueName: \"kubernetes.io/projected/c60385cd-2c72-418e-b71f-de147e042619-kube-api-access-vbfcp\") pod \"kindnet-8df8g\" (UID: \"c60385cd-2c72-418e-b71f-de147e042619\") " pod="kube-system/kindnet-8df8g"
	Oct 27 19:36:42 pause-249140 kubelet[1305]: I1027 19:36:42.053192    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c60385cd-2c72-418e-b71f-de147e042619-lib-modules\") pod \"kindnet-8df8g\" (UID: \"c60385cd-2c72-418e-b71f-de147e042619\") " pod="kube-system/kindnet-8df8g"
	Oct 27 19:36:42 pause-249140 kubelet[1305]: I1027 19:36:42.053220    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c60385cd-2c72-418e-b71f-de147e042619-xtables-lock\") pod \"kindnet-8df8g\" (UID: \"c60385cd-2c72-418e-b71f-de147e042619\") " pod="kube-system/kindnet-8df8g"
	Oct 27 19:36:42 pause-249140 kubelet[1305]: I1027 19:36:42.154040    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/921a42f0-4e87-4a36-8436-4716703e03d7-lib-modules\") pod \"kube-proxy-brj24\" (UID: \"921a42f0-4e87-4a36-8436-4716703e03d7\") " pod="kube-system/kube-proxy-brj24"
	Oct 27 19:36:42 pause-249140 kubelet[1305]: I1027 19:36:42.154087    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/921a42f0-4e87-4a36-8436-4716703e03d7-xtables-lock\") pod \"kube-proxy-brj24\" (UID: \"921a42f0-4e87-4a36-8436-4716703e03d7\") " pod="kube-system/kube-proxy-brj24"
	Oct 27 19:36:42 pause-249140 kubelet[1305]: I1027 19:36:42.154106    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rbc4\" (UniqueName: \"kubernetes.io/projected/921a42f0-4e87-4a36-8436-4716703e03d7-kube-api-access-2rbc4\") pod \"kube-proxy-brj24\" (UID: \"921a42f0-4e87-4a36-8436-4716703e03d7\") " pod="kube-system/kube-proxy-brj24"
	Oct 27 19:36:42 pause-249140 kubelet[1305]: I1027 19:36:42.154335    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/921a42f0-4e87-4a36-8436-4716703e03d7-kube-proxy\") pod \"kube-proxy-brj24\" (UID: \"921a42f0-4e87-4a36-8436-4716703e03d7\") " pod="kube-system/kube-proxy-brj24"
	Oct 27 19:36:43 pause-249140 kubelet[1305]: I1027 19:36:43.753349    1305 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-brj24" podStartSLOduration=2.753326209 podStartE2EDuration="2.753326209s" podCreationTimestamp="2025-10-27 19:36:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 19:36:43.753280019 +0000 UTC m=+7.224459416" watchObservedRunningTime="2025-10-27 19:36:43.753326209 +0000 UTC m=+7.224505605"
	Oct 27 19:36:43 pause-249140 kubelet[1305]: I1027 19:36:43.753747    1305 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-8df8g" podStartSLOduration=2.753726037 podStartE2EDuration="2.753726037s" podCreationTimestamp="2025-10-27 19:36:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 19:36:43.741769809 +0000 UTC m=+7.212949206" watchObservedRunningTime="2025-10-27 19:36:43.753726037 +0000 UTC m=+7.224905434"
	Oct 27 19:36:53 pause-249140 kubelet[1305]: I1027 19:36:53.419738    1305 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 27 19:36:53 pause-249140 kubelet[1305]: I1027 19:36:53.535387    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0cb0e471-ebc9-46c0-b1fa-01239c268b53-config-volume\") pod \"coredns-66bc5c9577-zw67w\" (UID: \"0cb0e471-ebc9-46c0-b1fa-01239c268b53\") " pod="kube-system/coredns-66bc5c9577-zw67w"
	Oct 27 19:36:53 pause-249140 kubelet[1305]: I1027 19:36:53.535446    1305 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgj48\" (UniqueName: \"kubernetes.io/projected/0cb0e471-ebc9-46c0-b1fa-01239c268b53-kube-api-access-dgj48\") pod \"coredns-66bc5c9577-zw67w\" (UID: \"0cb0e471-ebc9-46c0-b1fa-01239c268b53\") " pod="kube-system/coredns-66bc5c9577-zw67w"
	Oct 27 19:36:54 pause-249140 kubelet[1305]: I1027 19:36:54.765851    1305 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-zw67w" podStartSLOduration=12.765830866 podStartE2EDuration="12.765830866s" podCreationTimestamp="2025-10-27 19:36:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 19:36:54.765498309 +0000 UTC m=+18.236677708" watchObservedRunningTime="2025-10-27 19:36:54.765830866 +0000 UTC m=+18.237010273"
	Oct 27 19:37:04 pause-249140 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 27 19:37:04 pause-249140 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 27 19:37:04 pause-249140 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 27 19:37:04 pause-249140 systemd[1]: kubelet.service: Consumed 1.389s CPU time.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-249140 -n pause-249140
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-249140 -n pause-249140: exit status 2 (393.441577ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-249140 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (6.62s)
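The kubelet journal in the logs above ends with systemd stopping kubelet.service, which is the first half of what a pause does; the status probe afterwards still reports the apiserver as Running, so the failure most likely sits in the second half, pausing the containers through the runtime. One way to separate the two halves by hand (a sketch; the profile name comes from this test, and runc list -f json is the same check the EnableAddonWhileActive failures below trip over):

	$ out/minikube-linux-amd64 ssh -p pause-249140 -- sudo systemctl is-active kubelet
	$ out/minikube-linux-amd64 ssh -p pause-249140 -- sudo runc list -f json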

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.76s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-468959 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-468959 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (601.621031ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T19:39:53Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-468959 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-468959 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-468959 describe deploy/metrics-server -n kube-system: exit status 1 (59.171379ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-468959 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
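The underlying error here is the paused-state check, not the addon itself: minikube shells out to sudo runc list -f json, and runc exits because /run/runc, the runtime_root CRI-O configures for its runc runtime (see the config dump earlier in this report), does not exist. Reproducing the failing probe by hand, plus a --root variant that makes the directory dependency explicit (a sketch; both flags are standard runc options and the first command is copied from the stderr above):

	$ out/minikube-linux-amd64 ssh -p old-k8s-version-468959 -- sudo runc list -f json
	$ out/minikube-linux-amd64 ssh -p old-k8s-version-468959 -- sudo runc --root /run/runc list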
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-468959
helpers_test.go:243: (dbg) docker inspect old-k8s-version-468959:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2e0353db62d926cc83bef0d3fa107c768d6d452b830c383908ae17268301278e",
	        "Created": "2025-10-27T19:38:59.515462878Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 572444,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-27T19:38:59.810179738Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/2e0353db62d926cc83bef0d3fa107c768d6d452b830c383908ae17268301278e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2e0353db62d926cc83bef0d3fa107c768d6d452b830c383908ae17268301278e/hostname",
	        "HostsPath": "/var/lib/docker/containers/2e0353db62d926cc83bef0d3fa107c768d6d452b830c383908ae17268301278e/hosts",
	        "LogPath": "/var/lib/docker/containers/2e0353db62d926cc83bef0d3fa107c768d6d452b830c383908ae17268301278e/2e0353db62d926cc83bef0d3fa107c768d6d452b830c383908ae17268301278e-json.log",
	        "Name": "/old-k8s-version-468959",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-468959:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-468959",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2e0353db62d926cc83bef0d3fa107c768d6d452b830c383908ae17268301278e",
	                "LowerDir": "/var/lib/docker/overlay2/ce8ba90743d105752eb907923a1422d963b8a7959aac8ff55c461d4eb853b209-init/diff:/var/lib/docker/overlay2/71b61ec94610a35f2d924dec358052d4c154c36b3fe219802f60246ca2dc7f45/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ce8ba90743d105752eb907923a1422d963b8a7959aac8ff55c461d4eb853b209/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ce8ba90743d105752eb907923a1422d963b8a7959aac8ff55c461d4eb853b209/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ce8ba90743d105752eb907923a1422d963b8a7959aac8ff55c461d4eb853b209/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-468959",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-468959/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-468959",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-468959",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-468959",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "79955ca7fc49f4dbdf3d441ff76684c1746dee1092be12e7c1f9899d06de4c22",
	            "SandboxKey": "/var/run/docker/netns/79955ca7fc49",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33415"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33416"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33419"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33417"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33418"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-468959": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "56:a5:ad:fd:7d:f3",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0308d3f30614fde66189d573d65372f0d31056c699858ced2c5f17d155a2bb0c",
	                    "EndpointID": "0a9bc621e59fffb5fa3335cf29f188dc0decdc66eaf4cc0ee48c7a7a02363c80",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-468959",
	                        "2e0353db62d9"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
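(For reference: single fields from the inspect dump above can be pulled with docker's built-in Go-template filter instead of reading the full JSON; a minimal sketch against the same container extracts just the host port mappings shown under NetworkSettings:

	docker inspect old-k8s-version-468959 --format '{{json .NetworkSettings.Ports}}'
)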
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-468959 -n old-k8s-version-468959
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-468959 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-468959 logs -n 25: (1.351678009s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-387383 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                     │ cilium-387383          │ jenkins │ v1.37.0 │ 27 Oct 25 19:38 UTC │                     │
	│ ssh     │ -p cilium-387383 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                     │ cilium-387383          │ jenkins │ v1.37.0 │ 27 Oct 25 19:38 UTC │                     │
	│ ssh     │ -p cilium-387383 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ cilium-387383          │ jenkins │ v1.37.0 │ 27 Oct 25 19:38 UTC │                     │
	│ ssh     │ -p cilium-387383 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ cilium-387383          │ jenkins │ v1.37.0 │ 27 Oct 25 19:38 UTC │                     │
	│ ssh     │ -p cilium-387383 sudo cri-dockerd --version                                                                                                                                                                                                   │ cilium-387383          │ jenkins │ v1.37.0 │ 27 Oct 25 19:38 UTC │                     │
	│ ssh     │ -p cilium-387383 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-387383          │ jenkins │ v1.37.0 │ 27 Oct 25 19:38 UTC │                     │
	│ ssh     │ -p cilium-387383 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-387383          │ jenkins │ v1.37.0 │ 27 Oct 25 19:38 UTC │                     │
	│ ssh     │ -p cilium-387383 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-387383          │ jenkins │ v1.37.0 │ 27 Oct 25 19:38 UTC │                     │
	│ ssh     │ -p cilium-387383 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-387383          │ jenkins │ v1.37.0 │ 27 Oct 25 19:38 UTC │                     │
	│ ssh     │ -p cilium-387383 sudo containerd config dump                                                                                                                                                                                                  │ cilium-387383          │ jenkins │ v1.37.0 │ 27 Oct 25 19:38 UTC │                     │
	│ ssh     │ -p cilium-387383 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-387383          │ jenkins │ v1.37.0 │ 27 Oct 25 19:38 UTC │                     │
	│ ssh     │ -p cilium-387383 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-387383          │ jenkins │ v1.37.0 │ 27 Oct 25 19:38 UTC │                     │
	│ ssh     │ -p cilium-387383 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-387383          │ jenkins │ v1.37.0 │ 27 Oct 25 19:38 UTC │                     │
	│ ssh     │ -p cilium-387383 sudo crio config                                                                                                                                                                                                             │ cilium-387383          │ jenkins │ v1.37.0 │ 27 Oct 25 19:38 UTC │                     │
	│ delete  │ -p cilium-387383                                                                                                                                                                                                                              │ cilium-387383          │ jenkins │ v1.37.0 │ 27 Oct 25 19:38 UTC │ 27 Oct 25 19:38 UTC │
	│ start   │ -p old-k8s-version-468959 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-468959 │ jenkins │ v1.37.0 │ 27 Oct 25 19:38 UTC │ 27 Oct 25 19:39 UTC │
	│ delete  │ -p NoKubernetes-668991                                                                                                                                                                                                                        │ NoKubernetes-668991    │ jenkins │ v1.37.0 │ 27 Oct 25 19:38 UTC │ 27 Oct 25 19:39 UTC │
	│ start   │ -p NoKubernetes-668991 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                         │ NoKubernetes-668991    │ jenkins │ v1.37.0 │ 27 Oct 25 19:39 UTC │ 27 Oct 25 19:39 UTC │
	│ ssh     │ -p NoKubernetes-668991 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-668991    │ jenkins │ v1.37.0 │ 27 Oct 25 19:39 UTC │                     │
	│ stop    │ -p NoKubernetes-668991                                                                                                                                                                                                                        │ NoKubernetes-668991    │ jenkins │ v1.37.0 │ 27 Oct 25 19:39 UTC │ 27 Oct 25 19:39 UTC │
	│ start   │ -p NoKubernetes-668991 --driver=docker  --container-runtime=crio                                                                                                                                                                              │ NoKubernetes-668991    │ jenkins │ v1.37.0 │ 27 Oct 25 19:39 UTC │ 27 Oct 25 19:39 UTC │
	│ ssh     │ -p NoKubernetes-668991 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-668991    │ jenkins │ v1.37.0 │ 27 Oct 25 19:39 UTC │                     │
	│ delete  │ -p NoKubernetes-668991                                                                                                                                                                                                                        │ NoKubernetes-668991    │ jenkins │ v1.37.0 │ 27 Oct 25 19:39 UTC │ 27 Oct 25 19:39 UTC │
	│ start   │ -p embed-certs-919237 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-919237     │ jenkins │ v1.37.0 │ 27 Oct 25 19:39 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-468959 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-468959 │ jenkins │ v1.37.0 │ 27 Oct 25 19:39 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 19:39:49
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 19:39:49.850567  579549 out.go:360] Setting OutFile to fd 1 ...
	I1027 19:39:49.850822  579549 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:39:49.850848  579549 out.go:374] Setting ErrFile to fd 2...
	I1027 19:39:49.850858  579549 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:39:49.851066  579549 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-352833/.minikube/bin
	I1027 19:39:49.851601  579549 out.go:368] Setting JSON to false
	I1027 19:39:49.852848  579549 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":8539,"bootTime":1761585451,"procs":345,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1027 19:39:49.852955  579549 start.go:141] virtualization: kvm guest
	I1027 19:39:49.855065  579549 out.go:179] * [embed-certs-919237] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1027 19:39:49.856511  579549 notify.go:220] Checking for updates...
	I1027 19:39:49.856526  579549 out.go:179]   - MINIKUBE_LOCATION=21801
	I1027 19:39:49.857927  579549 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 19:39:49.859291  579549 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21801-352833/kubeconfig
	I1027 19:39:49.860593  579549 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21801-352833/.minikube
	I1027 19:39:49.862057  579549 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1027 19:39:49.863392  579549 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 19:39:49.865302  579549 config.go:182] Loaded profile config "cert-expiration-368442": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:39:49.865410  579549 config.go:182] Loaded profile config "kubernetes-upgrade-360986": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:39:49.865485  579549 config.go:182] Loaded profile config "old-k8s-version-468959": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1027 19:39:49.865564  579549 driver.go:421] Setting default libvirt URI to qemu:///system
	I1027 19:39:49.889512  579549 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1027 19:39:49.889595  579549 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 19:39:49.949084  579549 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-27 19:39:49.938268623 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 19:39:49.949274  579549 docker.go:318] overlay module found
	I1027 19:39:49.951037  579549 out.go:179] * Using the docker driver based on user configuration
	I1027 19:39:49.952253  579549 start.go:305] selected driver: docker
	I1027 19:39:49.952269  579549 start.go:925] validating driver "docker" against <nil>
	I1027 19:39:49.952282  579549 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 19:39:49.952874  579549 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 19:39:50.011772  579549 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-27 19:39:50.001559505 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 19:39:50.011978  579549 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1027 19:39:50.012264  579549 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 19:39:50.014157  579549 out.go:179] * Using Docker driver with root privileges
	I1027 19:39:50.015540  579549 cni.go:84] Creating CNI manager for ""
	I1027 19:39:50.015623  579549 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 19:39:50.015641  579549 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1027 19:39:50.015730  579549 start.go:349] cluster config:
	{Name:embed-certs-919237 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-919237 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 19:39:50.016996  579549 out.go:179] * Starting "embed-certs-919237" primary control-plane node in "embed-certs-919237" cluster
	I1027 19:39:50.018106  579549 cache.go:123] Beginning downloading kic base image for docker with crio
	I1027 19:39:50.019365  579549 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1027 19:39:50.020556  579549 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 19:39:50.020592  579549 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21801-352833/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1027 19:39:50.020607  579549 cache.go:58] Caching tarball of preloaded images
	I1027 19:39:50.020669  579549 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1027 19:39:50.020711  579549 preload.go:233] Found /home/jenkins/minikube-integration/21801-352833/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1027 19:39:50.020726  579549 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1027 19:39:50.020860  579549 profile.go:143] Saving config to /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/embed-certs-919237/config.json ...
	I1027 19:39:50.020889  579549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/embed-certs-919237/config.json: {Name:mkff0c5f707edc963440902fc30473affde2705f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:39:50.044049  579549 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1027 19:39:50.044077  579549 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1027 19:39:50.044099  579549 cache.go:232] Successfully downloaded all kic artifacts
	I1027 19:39:50.044127  579549 start.go:360] acquireMachinesLock for embed-certs-919237: {Name:mka6dd5e9788015cfc40a76e0480af6167e6c17e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 19:39:50.044297  579549 start.go:364] duration metric: took 103.864µs to acquireMachinesLock for "embed-certs-919237"
	I1027 19:39:50.044336  579549 start.go:93] Provisioning new machine with config: &{Name:embed-certs-919237 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-919237 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 19:39:50.044429  579549 start.go:125] createHost starting for "" (driver="docker")
	I1027 19:39:46.569339  565798 cri.go:89] found id: "8c542d33456b42b425e20ab888c9445c3929c6d957bb1a7772efde1a82b6999e"
	I1027 19:39:46.569366  565798 cri.go:89] found id: ""
	I1027 19:39:46.569379  565798 logs.go:282] 1 containers: [8c542d33456b42b425e20ab888c9445c3929c6d957bb1a7772efde1a82b6999e]
	I1027 19:39:46.569440  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:39:46.573834  565798 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 19:39:46.573891  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 19:39:46.605014  565798 cri.go:89] found id: ""
	I1027 19:39:46.605042  565798 logs.go:282] 0 containers: []
	W1027 19:39:46.605052  565798 logs.go:284] No container was found matching "kindnet"
	I1027 19:39:46.605060  565798 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 19:39:46.605123  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 19:39:46.635415  565798 cri.go:89] found id: ""
	I1027 19:39:46.635447  565798 logs.go:282] 0 containers: []
	W1027 19:39:46.635460  565798 logs.go:284] No container was found matching "storage-provisioner"
	I1027 19:39:46.635479  565798 logs.go:123] Gathering logs for CRI-O ...
	I1027 19:39:46.635493  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 19:39:46.682145  565798 logs.go:123] Gathering logs for container status ...
	I1027 19:39:46.682193  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 19:39:46.726523  565798 logs.go:123] Gathering logs for dmesg ...
	I1027 19:39:46.726554  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 19:39:46.751281  565798 logs.go:123] Gathering logs for kubelet ...
	I1027 19:39:46.751310  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 19:39:46.822077  565798 logs.go:123] Gathering logs for describe nodes ...
	I1027 19:39:46.822118  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1027 19:39:50.046364  579549 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1027 19:39:50.046646  579549 start.go:159] libmachine.API.Create for "embed-certs-919237" (driver="docker")
	I1027 19:39:50.046682  579549 client.go:168] LocalClient.Create starting
	I1027 19:39:50.046756  579549 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca.pem
	I1027 19:39:50.046798  579549 main.go:141] libmachine: Decoding PEM data...
	I1027 19:39:50.046828  579549 main.go:141] libmachine: Parsing certificate...
	I1027 19:39:50.046916  579549 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21801-352833/.minikube/certs/cert.pem
	I1027 19:39:50.046955  579549 main.go:141] libmachine: Decoding PEM data...
	I1027 19:39:50.046990  579549 main.go:141] libmachine: Parsing certificate...
	I1027 19:39:50.047460  579549 cli_runner.go:164] Run: docker network inspect embed-certs-919237 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1027 19:39:50.065238  579549 cli_runner.go:211] docker network inspect embed-certs-919237 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1027 19:39:50.065324  579549 network_create.go:284] running [docker network inspect embed-certs-919237] to gather additional debugging logs...
	I1027 19:39:50.065343  579549 cli_runner.go:164] Run: docker network inspect embed-certs-919237
	W1027 19:39:50.082160  579549 cli_runner.go:211] docker network inspect embed-certs-919237 returned with exit code 1
	I1027 19:39:50.082197  579549 network_create.go:287] error running [docker network inspect embed-certs-919237]: docker network inspect embed-certs-919237: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-919237 not found
	I1027 19:39:50.082220  579549 network_create.go:289] output of [docker network inspect embed-certs-919237]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-919237 not found
	
	** /stderr **
	I1027 19:39:50.082383  579549 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 19:39:50.100883  579549 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-04e197bde7e8 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6a:8c:cb:7c:68:31} reservation:<nil>}
	I1027 19:39:50.101709  579549 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-e37fd2b092bc IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:26:98:e3:c0:d9:8a} reservation:<nil>}
	I1027 19:39:50.102221  579549 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-bbd9ae70d20d IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ea:7f:4f:eb:e4:a1} reservation:<nil>}
	I1027 19:39:50.102773  579549 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-42f9931f7dd9 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:6a:e5:77:b6:bd:34} reservation:<nil>}
	I1027 19:39:50.103449  579549 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-0308d3f30614 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:52:30:ec:8b:8f:bb} reservation:<nil>}
	I1027 19:39:50.104348  579549 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002036a50}
	I1027 19:39:50.104386  579549 network_create.go:124] attempt to create docker network embed-certs-919237 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1027 19:39:50.104437  579549 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-919237 embed-certs-919237
	I1027 19:39:50.163394  579549 network_create.go:108] docker network embed-certs-919237 192.168.94.0/24 created
	I1027 19:39:50.163426  579549 kic.go:121] calculated static IP "192.168.94.2" for the "embed-certs-919237" container
	I1027 19:39:50.163508  579549 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1027 19:39:50.182323  579549 cli_runner.go:164] Run: docker volume create embed-certs-919237 --label name.minikube.sigs.k8s.io=embed-certs-919237 --label created_by.minikube.sigs.k8s.io=true
	I1027 19:39:50.201943  579549 oci.go:103] Successfully created a docker volume embed-certs-919237
	I1027 19:39:50.202037  579549 cli_runner.go:164] Run: docker run --rm --name embed-certs-919237-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-919237 --entrypoint /usr/bin/test -v embed-certs-919237:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1027 19:39:50.599029  579549 oci.go:107] Successfully prepared a docker volume embed-certs-919237
	I1027 19:39:50.599079  579549 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 19:39:50.599110  579549 kic.go:194] Starting extracting preloaded images to volume ...
	I1027 19:39:50.599207  579549 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21801-352833/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-919237:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	
	
	==> CRI-O <==
	Oct 27 19:39:41 old-k8s-version-468959 crio[775]: time="2025-10-27T19:39:41.59455226Z" level=info msg="Starting container: e49e000105fa57613009dd691c6dac16b0ea8cc3e4a0c5cc9432592e1627d358" id=d31f7393-433b-46a8-a89e-1de9d4ff7544 name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 19:39:41 old-k8s-version-468959 crio[775]: time="2025-10-27T19:39:41.596896803Z" level=info msg="Started container" PID=2216 containerID=e49e000105fa57613009dd691c6dac16b0ea8cc3e4a0c5cc9432592e1627d358 description=kube-system/coredns-5dd5756b68-xwmdt/coredns id=d31f7393-433b-46a8-a89e-1de9d4ff7544 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6be82d2b918cdee6f2533d37ee67c5e2e9cc641609062b39f411bfddad2f28c6
	Oct 27 19:39:44 old-k8s-version-468959 crio[775]: time="2025-10-27T19:39:44.864379171Z" level=info msg="Running pod sandbox: default/busybox/POD" id=a83ed34b-8072-44ee-90e0-fe22a52922b1 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 27 19:39:44 old-k8s-version-468959 crio[775]: time="2025-10-27T19:39:44.864493574Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:39:44 old-k8s-version-468959 crio[775]: time="2025-10-27T19:39:44.870598793Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:bd84f28619c1c29e71169ebd16c15e9bb360b1bb57f393108611c46a56d2ada2 UID:46c055c9-34d3-4bb1-9d46-10ffe110ed16 NetNS:/var/run/netns/8f8f80cb-3d59-4064-bf3c-d5e29c398a61 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00096ee38}] Aliases:map[]}"
	Oct 27 19:39:44 old-k8s-version-468959 crio[775]: time="2025-10-27T19:39:44.870634941Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 27 19:39:44 old-k8s-version-468959 crio[775]: time="2025-10-27T19:39:44.880950852Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:bd84f28619c1c29e71169ebd16c15e9bb360b1bb57f393108611c46a56d2ada2 UID:46c055c9-34d3-4bb1-9d46-10ffe110ed16 NetNS:/var/run/netns/8f8f80cb-3d59-4064-bf3c-d5e29c398a61 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00096ee38}] Aliases:map[]}"
	Oct 27 19:39:44 old-k8s-version-468959 crio[775]: time="2025-10-27T19:39:44.88112561Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 27 19:39:44 old-k8s-version-468959 crio[775]: time="2025-10-27T19:39:44.882012322Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 27 19:39:44 old-k8s-version-468959 crio[775]: time="2025-10-27T19:39:44.882874191Z" level=info msg="Ran pod sandbox bd84f28619c1c29e71169ebd16c15e9bb360b1bb57f393108611c46a56d2ada2 with infra container: default/busybox/POD" id=a83ed34b-8072-44ee-90e0-fe22a52922b1 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 27 19:39:44 old-k8s-version-468959 crio[775]: time="2025-10-27T19:39:44.8841859Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=00a31e6d-8b05-43a4-abda-b41e4a2769e0 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 19:39:44 old-k8s-version-468959 crio[775]: time="2025-10-27T19:39:44.884309685Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=00a31e6d-8b05-43a4-abda-b41e4a2769e0 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 19:39:44 old-k8s-version-468959 crio[775]: time="2025-10-27T19:39:44.884343769Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=00a31e6d-8b05-43a4-abda-b41e4a2769e0 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 19:39:44 old-k8s-version-468959 crio[775]: time="2025-10-27T19:39:44.884865245Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=d68cc8b6-2237-4dbb-9bdf-1aaa5f0248d2 name=/runtime.v1.ImageService/PullImage
	Oct 27 19:39:44 old-k8s-version-468959 crio[775]: time="2025-10-27T19:39:44.88628897Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 27 19:39:45 old-k8s-version-468959 crio[775]: time="2025-10-27T19:39:45.783985807Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=d68cc8b6-2237-4dbb-9bdf-1aaa5f0248d2 name=/runtime.v1.ImageService/PullImage
	Oct 27 19:39:45 old-k8s-version-468959 crio[775]: time="2025-10-27T19:39:45.785007255Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=db2bf177-8971-491a-81fb-059c954abb34 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 19:39:45 old-k8s-version-468959 crio[775]: time="2025-10-27T19:39:45.786576383Z" level=info msg="Creating container: default/busybox/busybox" id=ed98bb26-c8c1-41bd-8654-f9e6decd590c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 19:39:45 old-k8s-version-468959 crio[775]: time="2025-10-27T19:39:45.786745606Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:39:45 old-k8s-version-468959 crio[775]: time="2025-10-27T19:39:45.791397292Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:39:45 old-k8s-version-468959 crio[775]: time="2025-10-27T19:39:45.791879832Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:39:45 old-k8s-version-468959 crio[775]: time="2025-10-27T19:39:45.815924339Z" level=info msg="Created container 64f4be0c56cc20ac3874b76dce0396b71c5fd01202bf023b8b079e7e10c33ed0: default/busybox/busybox" id=ed98bb26-c8c1-41bd-8654-f9e6decd590c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 19:39:45 old-k8s-version-468959 crio[775]: time="2025-10-27T19:39:45.81666908Z" level=info msg="Starting container: 64f4be0c56cc20ac3874b76dce0396b71c5fd01202bf023b8b079e7e10c33ed0" id=125ad0bd-ea03-448d-b5ed-2af7cd7d21be name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 19:39:45 old-k8s-version-468959 crio[775]: time="2025-10-27T19:39:45.818565619Z" level=info msg="Started container" PID=2293 containerID=64f4be0c56cc20ac3874b76dce0396b71c5fd01202bf023b8b079e7e10c33ed0 description=default/busybox/busybox id=125ad0bd-ea03-448d-b5ed-2af7cd7d21be name=/runtime.v1.RuntimeService/StartContainer sandboxID=bd84f28619c1c29e71169ebd16c15e9bb360b1bb57f393108611c46a56d2ada2
	Oct 27 19:39:53 old-k8s-version-468959 crio[775]: time="2025-10-27T19:39:53.645229917Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	64f4be0c56cc2       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   9 seconds ago       Running             busybox                   0                   bd84f28619c1c       busybox                                          default
	e49e000105fa5       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      13 seconds ago      Running             coredns                   0                   6be82d2b918cd       coredns-5dd5756b68-xwmdt                         kube-system
	dcabac759190c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 seconds ago      Running             storage-provisioner       0                   4877068f35ed9       storage-provisioner                              kube-system
	8ce2b76090a64       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    25 seconds ago      Running             kindnet-cni               0                   dc730dee0d691       kindnet-td5zb                                    kube-system
	f091d8dd47e83       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                      27 seconds ago      Running             kube-proxy                0                   ebd6649bfde4b       kube-proxy-tjbth                                 kube-system
	956af9efb4d4c       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      45 seconds ago      Running             etcd                      0                   e02e0610db6fa       etcd-old-k8s-version-468959                      kube-system
	556269b05c116       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                      45 seconds ago      Running             kube-controller-manager   0                   e167db1af0a88       kube-controller-manager-old-k8s-version-468959   kube-system
	94d728b90fb03       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                      45 seconds ago      Running             kube-scheduler            0                   fa978dabd772f       kube-scheduler-old-k8s-version-468959            kube-system
	bec026268a5b9       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                      45 seconds ago      Running             kube-apiserver            0                   418465850e6fc       kube-apiserver-old-k8s-version-468959            kube-system
	
	
	==> coredns [e49e000105fa57613009dd691c6dac16b0ea8cc3e4a0c5cc9432592e1627d358] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:47060 - 19387 "HINFO IN 8539340772085652302.8728876708389642021. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.50578783s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-468959
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-468959
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a09c31caecd3126cc8337ca95e52380a56f5a0f
	                    minikube.k8s.io/name=old-k8s-version-468959
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_27T19_39_16_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Oct 2025 19:39:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-468959
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Oct 2025 19:39:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Oct 2025 19:39:46 +0000   Mon, 27 Oct 2025 19:39:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Oct 2025 19:39:46 +0000   Mon, 27 Oct 2025 19:39:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Oct 2025 19:39:46 +0000   Mon, 27 Oct 2025 19:39:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Oct 2025 19:39:46 +0000   Mon, 27 Oct 2025 19:39:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-468959
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                2befee2f-4a53-4846-b84d-35620b9685cc
	  Boot ID:                    811bd29c-e64e-4acc-9427-bab1f7caed93
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-5dd5756b68-xwmdt                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     27s
	  kube-system                 etcd-old-k8s-version-468959                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         40s
	  kube-system                 kindnet-td5zb                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-old-k8s-version-468959             250m (3%)     0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 kube-controller-manager-old-k8s-version-468959    200m (2%)     0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 kube-proxy-tjbth                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-old-k8s-version-468959             100m (1%)     0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 27s   kube-proxy       
	  Normal  Starting                 40s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  40s   kubelet          Node old-k8s-version-468959 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    40s   kubelet          Node old-k8s-version-468959 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     40s   kubelet          Node old-k8s-version-468959 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           28s   node-controller  Node old-k8s-version-468959 event: Registered Node old-k8s-version-468959 in Controller
	  Normal  NodeReady                14s   kubelet          Node old-k8s-version-468959 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5a 23 52 43 9a ba 08 06
	[  +0.000398] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 12 50 95 0e df 53 08 06
	[Oct27 18:59] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +1.017295] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +1.023893] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +1.023882] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +1.023891] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +1.023851] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +2.047849] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +4.031592] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +8.319143] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[ +16.382183] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[Oct27 19:00] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	
	
	==> etcd [956af9efb4d4cf571ede4dfa492cc422d91b4e3f44b9e86c8788e1eea852aeaf] <==
	{"level":"info","ts":"2025-10-27T19:39:10.243498Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-27T19:39:10.243531Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-27T19:39:11.231792Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 1"}
	{"level":"info","ts":"2025-10-27T19:39:11.231849Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 1"}
	{"level":"info","ts":"2025-10-27T19:39:11.231893Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 1"}
	{"level":"info","ts":"2025-10-27T19:39:11.231914Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 2"}
	{"level":"info","ts":"2025-10-27T19:39:11.231923Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-10-27T19:39:11.231935Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2025-10-27T19:39:11.231948Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-10-27T19:39:11.232955Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-468959 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-27T19:39:11.232971Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-27T19:39:11.233048Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-27T19:39:11.233047Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-27T19:39:11.233213Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-27T19:39:11.233301Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-27T19:39:11.234186Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-27T19:39:11.234412Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-27T19:39:11.234466Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-27T19:39:11.235392Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-27T19:39:11.235494Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"warn","ts":"2025-10-27T19:39:53.933095Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"129.527804ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.85.2\" ","response":"range_response_count:1 size:131"}
	{"level":"warn","ts":"2025-10-27T19:39:53.933164Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"212.167078ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-27T19:39:53.933213Z","caller":"traceutil/trace.go:171","msg":"trace[1324548919] range","detail":"{range_begin:/registry/masterleases/192.168.85.2; range_end:; response_count:1; response_revision:428; }","duration":"129.704929ms","start":"2025-10-27T19:39:53.803489Z","end":"2025-10-27T19:39:53.933194Z","steps":["trace[1324548919] 'range keys from in-memory index tree'  (duration: 129.406226ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T19:39:53.933218Z","caller":"traceutil/trace.go:171","msg":"trace[1624202283] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:428; }","duration":"212.243477ms","start":"2025-10-27T19:39:53.72096Z","end":"2025-10-27T19:39:53.933204Z","steps":["trace[1624202283] 'range keys from in-memory index tree'  (duration: 212.071757ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T19:39:54.153362Z","caller":"traceutil/trace.go:171","msg":"trace[1638837066] transaction","detail":"{read_only:false; response_revision:429; number_of_response:1; }","duration":"125.300515ms","start":"2025-10-27T19:39:54.028042Z","end":"2025-10-27T19:39:54.153342Z","steps":["trace[1638837066] 'process raft request'  (duration: 124.596324ms)"],"step_count":1}
	
	
	==> kernel <==
	 19:39:55 up  2:22,  0 user,  load average: 2.96, 3.12, 1.94
	Linux old-k8s-version-468959 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [8ce2b76090a643a921d0cc358f8980ab1398d8283da8bc542daa7e436512f483] <==
	I1027 19:39:30.460254       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1027 19:39:30.460501       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1027 19:39:30.460692       1 main.go:148] setting mtu 1500 for CNI 
	I1027 19:39:30.460710       1 main.go:178] kindnetd IP family: "ipv4"
	I1027 19:39:30.460737       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-27T19:39:30Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1027 19:39:30.663401       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1027 19:39:30.663435       1 controller.go:381] "Waiting for informer caches to sync"
	I1027 19:39:30.663447       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1027 19:39:30.663595       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1027 19:39:31.063912       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1027 19:39:31.063944       1 metrics.go:72] Registering metrics
	I1027 19:39:31.064018       1 controller.go:711] "Syncing nftables rules"
	I1027 19:39:40.664988       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1027 19:39:40.665039       1 main.go:301] handling current node
	I1027 19:39:50.666230       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1027 19:39:50.666290       1 main.go:301] handling current node
	
	
	==> kube-apiserver [bec026268a5b96a40066b07241786aae12db82e390cbe9e29c29ccfd67132eb3] <==
	I1027 19:39:12.296995       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1027 19:39:12.297025       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1027 19:39:12.297770       1 shared_informer.go:318] Caches are synced for configmaps
	I1027 19:39:12.297789       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1027 19:39:12.297923       1 aggregator.go:166] initial CRD sync complete...
	I1027 19:39:12.297934       1 autoregister_controller.go:141] Starting autoregister controller
	I1027 19:39:12.297940       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1027 19:39:12.297947       1 cache.go:39] Caches are synced for autoregister controller
	I1027 19:39:12.298180       1 controller.go:624] quota admission added evaluator for: namespaces
	I1027 19:39:12.483910       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1027 19:39:13.202109       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1027 19:39:13.205903       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1027 19:39:13.205926       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1027 19:39:13.685205       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1027 19:39:13.725860       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1027 19:39:13.807663       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1027 19:39:13.814375       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1027 19:39:13.815513       1 controller.go:624] quota admission added evaluator for: endpoints
	I1027 19:39:13.820097       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1027 19:39:14.261835       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1027 19:39:15.416497       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1027 19:39:15.428985       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1027 19:39:15.439805       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1027 19:39:27.923069       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1027 19:39:27.932456       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [556269b05c1169f9806983bd5eef91d8927b90be0c775b87d8fb01a768452e09] <==
	I1027 19:39:27.962758       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I1027 19:39:27.965404       1 shared_informer.go:318] Caches are synced for persistent volume
	I1027 19:39:27.969121       1 shared_informer.go:318] Caches are synced for namespace
	I1027 19:39:27.971575       1 shared_informer.go:318] Caches are synced for resource quota
	I1027 19:39:28.012237       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I1027 19:39:28.024455       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-rb45m"
	I1027 19:39:28.030347       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-xwmdt"
	I1027 19:39:28.036945       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="74.092424ms"
	I1027 19:39:28.043607       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="6.60435ms"
	I1027 19:39:28.043738       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="86.652µs"
	I1027 19:39:28.045187       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="116.464µs"
	I1027 19:39:28.384273       1 shared_informer.go:318] Caches are synced for garbage collector
	I1027 19:39:28.469570       1 shared_informer.go:318] Caches are synced for garbage collector
	I1027 19:39:28.469618       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1027 19:39:28.850002       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1027 19:39:28.858682       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-rb45m"
	I1027 19:39:28.865362       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="15.632775ms"
	I1027 19:39:28.873410       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="7.915889ms"
	I1027 19:39:28.873555       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="89.876µs"
	I1027 19:39:41.221308       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="227.954µs"
	I1027 19:39:41.242480       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="725.73µs"
	I1027 19:39:42.595739       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="147.508µs"
	I1027 19:39:42.628992       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="7.722032ms"
	I1027 19:39:42.629113       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="79.986µs"
	I1027 19:39:42.957725       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [f091d8dd47e836c6e84876886695aa2371b4ca0f81227c602faa773c28e80ee0] <==
	I1027 19:39:28.354156       1 server_others.go:69] "Using iptables proxy"
	I1027 19:39:28.363808       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1027 19:39:28.384171       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1027 19:39:28.387429       1 server_others.go:152] "Using iptables Proxier"
	I1027 19:39:28.387471       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1027 19:39:28.387479       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1027 19:39:28.387523       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1027 19:39:28.387850       1 server.go:846] "Version info" version="v1.28.0"
	I1027 19:39:28.387874       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 19:39:28.388647       1 config.go:188] "Starting service config controller"
	I1027 19:39:28.388699       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1027 19:39:28.388697       1 config.go:97] "Starting endpoint slice config controller"
	I1027 19:39:28.388728       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1027 19:39:28.388735       1 config.go:315] "Starting node config controller"
	I1027 19:39:28.388764       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1027 19:39:28.489396       1 shared_informer.go:318] Caches are synced for service config
	I1027 19:39:28.490973       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1027 19:39:28.491337       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [94d728b90fb031ccfbd12b1654a8695f934d5c1ce8b96108061d7742cde0b913] <==
	W1027 19:39:12.279650       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1027 19:39:12.279677       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1027 19:39:12.279686       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1027 19:39:12.279703       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1027 19:39:12.279426       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1027 19:39:12.279722       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1027 19:39:12.279528       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1027 19:39:12.279747       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1027 19:39:12.279792       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1027 19:39:12.279812       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1027 19:39:13.271316       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1027 19:39:13.271356       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1027 19:39:13.296642       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1027 19:39:13.296673       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1027 19:39:13.325059       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1027 19:39:13.325098       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1027 19:39:13.352566       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1027 19:39:13.352609       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1027 19:39:13.428417       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1027 19:39:13.428445       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1027 19:39:13.459178       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1027 19:39:13.459223       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1027 19:39:13.480902       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1027 19:39:13.480940       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I1027 19:39:13.876381       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 27 19:39:27 old-k8s-version-468959 kubelet[1413]: I1027 19:39:27.872301    1413 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 27 19:39:27 old-k8s-version-468959 kubelet[1413]: I1027 19:39:27.958671    1413 topology_manager.go:215] "Topology Admit Handler" podUID="834a476e-f5a7-4d1d-b8c6-43c163997c55" podNamespace="kube-system" podName="kube-proxy-tjbth"
	Oct 27 19:39:27 old-k8s-version-468959 kubelet[1413]: I1027 19:39:27.959567    1413 topology_manager.go:215] "Topology Admit Handler" podUID="c5669cde-bf50-4064-83c2-f5b82b3a2813" podNamespace="kube-system" podName="kindnet-td5zb"
	Oct 27 19:39:28 old-k8s-version-468959 kubelet[1413]: I1027 19:39:28.038432    1413 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c5669cde-bf50-4064-83c2-f5b82b3a2813-xtables-lock\") pod \"kindnet-td5zb\" (UID: \"c5669cde-bf50-4064-83c2-f5b82b3a2813\") " pod="kube-system/kindnet-td5zb"
	Oct 27 19:39:28 old-k8s-version-468959 kubelet[1413]: I1027 19:39:28.038498    1413 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/834a476e-f5a7-4d1d-b8c6-43c163997c55-kube-proxy\") pod \"kube-proxy-tjbth\" (UID: \"834a476e-f5a7-4d1d-b8c6-43c163997c55\") " pod="kube-system/kube-proxy-tjbth"
	Oct 27 19:39:28 old-k8s-version-468959 kubelet[1413]: I1027 19:39:28.038532    1413 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/c5669cde-bf50-4064-83c2-f5b82b3a2813-cni-cfg\") pod \"kindnet-td5zb\" (UID: \"c5669cde-bf50-4064-83c2-f5b82b3a2813\") " pod="kube-system/kindnet-td5zb"
	Oct 27 19:39:28 old-k8s-version-468959 kubelet[1413]: I1027 19:39:28.038567    1413 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4pb9\" (UniqueName: \"kubernetes.io/projected/834a476e-f5a7-4d1d-b8c6-43c163997c55-kube-api-access-j4pb9\") pod \"kube-proxy-tjbth\" (UID: \"834a476e-f5a7-4d1d-b8c6-43c163997c55\") " pod="kube-system/kube-proxy-tjbth"
	Oct 27 19:39:28 old-k8s-version-468959 kubelet[1413]: I1027 19:39:28.038599    1413 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c5669cde-bf50-4064-83c2-f5b82b3a2813-lib-modules\") pod \"kindnet-td5zb\" (UID: \"c5669cde-bf50-4064-83c2-f5b82b3a2813\") " pod="kube-system/kindnet-td5zb"
	Oct 27 19:39:28 old-k8s-version-468959 kubelet[1413]: I1027 19:39:28.038665    1413 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5qf5\" (UniqueName: \"kubernetes.io/projected/c5669cde-bf50-4064-83c2-f5b82b3a2813-kube-api-access-p5qf5\") pod \"kindnet-td5zb\" (UID: \"c5669cde-bf50-4064-83c2-f5b82b3a2813\") " pod="kube-system/kindnet-td5zb"
	Oct 27 19:39:28 old-k8s-version-468959 kubelet[1413]: I1027 19:39:28.038716    1413 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/834a476e-f5a7-4d1d-b8c6-43c163997c55-lib-modules\") pod \"kube-proxy-tjbth\" (UID: \"834a476e-f5a7-4d1d-b8c6-43c163997c55\") " pod="kube-system/kube-proxy-tjbth"
	Oct 27 19:39:28 old-k8s-version-468959 kubelet[1413]: I1027 19:39:28.038837    1413 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/834a476e-f5a7-4d1d-b8c6-43c163997c55-xtables-lock\") pod \"kube-proxy-tjbth\" (UID: \"834a476e-f5a7-4d1d-b8c6-43c163997c55\") " pod="kube-system/kube-proxy-tjbth"
	Oct 27 19:39:30 old-k8s-version-468959 kubelet[1413]: I1027 19:39:30.553086    1413 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-tjbth" podStartSLOduration=3.553041819 podCreationTimestamp="2025-10-27 19:39:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 19:39:28.552548843 +0000 UTC m=+13.163253057" watchObservedRunningTime="2025-10-27 19:39:30.553041819 +0000 UTC m=+15.163746018"
	Oct 27 19:39:30 old-k8s-version-468959 kubelet[1413]: I1027 19:39:30.553227    1413 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-td5zb" podStartSLOduration=1.528443131 podCreationTimestamp="2025-10-27 19:39:27 +0000 UTC" firstStartedPulling="2025-10-27 19:39:28.269713195 +0000 UTC m=+12.880417379" lastFinishedPulling="2025-10-27 19:39:30.294470312 +0000 UTC m=+14.905174503" observedRunningTime="2025-10-27 19:39:30.552976779 +0000 UTC m=+15.163680983" watchObservedRunningTime="2025-10-27 19:39:30.553200255 +0000 UTC m=+15.163904457"
	Oct 27 19:39:41 old-k8s-version-468959 kubelet[1413]: I1027 19:39:41.185223    1413 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Oct 27 19:39:41 old-k8s-version-468959 kubelet[1413]: I1027 19:39:41.217338    1413 topology_manager.go:215] "Topology Admit Handler" podUID="9fbb3702-fce5-44f8-b8ff-f267f9ca147f" podNamespace="kube-system" podName="storage-provisioner"
	Oct 27 19:39:41 old-k8s-version-468959 kubelet[1413]: I1027 19:39:41.221350    1413 topology_manager.go:215] "Topology Admit Handler" podUID="788993ae-aeb4-4fff-aaef-b7337405ca99" podNamespace="kube-system" podName="coredns-5dd5756b68-xwmdt"
	Oct 27 19:39:41 old-k8s-version-468959 kubelet[1413]: I1027 19:39:41.322834    1413 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mspp5\" (UniqueName: \"kubernetes.io/projected/9fbb3702-fce5-44f8-b8ff-f267f9ca147f-kube-api-access-mspp5\") pod \"storage-provisioner\" (UID: \"9fbb3702-fce5-44f8-b8ff-f267f9ca147f\") " pod="kube-system/storage-provisioner"
	Oct 27 19:39:41 old-k8s-version-468959 kubelet[1413]: I1027 19:39:41.322882    1413 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/9fbb3702-fce5-44f8-b8ff-f267f9ca147f-tmp\") pod \"storage-provisioner\" (UID: \"9fbb3702-fce5-44f8-b8ff-f267f9ca147f\") " pod="kube-system/storage-provisioner"
	Oct 27 19:39:41 old-k8s-version-468959 kubelet[1413]: I1027 19:39:41.322904    1413 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9jwn\" (UniqueName: \"kubernetes.io/projected/788993ae-aeb4-4fff-aaef-b7337405ca99-kube-api-access-p9jwn\") pod \"coredns-5dd5756b68-xwmdt\" (UID: \"788993ae-aeb4-4fff-aaef-b7337405ca99\") " pod="kube-system/coredns-5dd5756b68-xwmdt"
	Oct 27 19:39:41 old-k8s-version-468959 kubelet[1413]: I1027 19:39:41.322928    1413 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/788993ae-aeb4-4fff-aaef-b7337405ca99-config-volume\") pod \"coredns-5dd5756b68-xwmdt\" (UID: \"788993ae-aeb4-4fff-aaef-b7337405ca99\") " pod="kube-system/coredns-5dd5756b68-xwmdt"
	Oct 27 19:39:42 old-k8s-version-468959 kubelet[1413]: I1027 19:39:42.609607    1413 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-xwmdt" podStartSLOduration=14.609549514 podCreationTimestamp="2025-10-27 19:39:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 19:39:42.596058686 +0000 UTC m=+27.206762887" watchObservedRunningTime="2025-10-27 19:39:42.609549514 +0000 UTC m=+27.220253714"
	Oct 27 19:39:42 old-k8s-version-468959 kubelet[1413]: I1027 19:39:42.621443    1413 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.621386309 podCreationTimestamp="2025-10-27 19:39:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 19:39:42.609689003 +0000 UTC m=+27.220393185" watchObservedRunningTime="2025-10-27 19:39:42.621386309 +0000 UTC m=+27.232090579"
	Oct 27 19:39:44 old-k8s-version-468959 kubelet[1413]: I1027 19:39:44.562384    1413 topology_manager.go:215] "Topology Admit Handler" podUID="46c055c9-34d3-4bb1-9d46-10ffe110ed16" podNamespace="default" podName="busybox"
	Oct 27 19:39:44 old-k8s-version-468959 kubelet[1413]: I1027 19:39:44.645225    1413 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkdbt\" (UniqueName: \"kubernetes.io/projected/46c055c9-34d3-4bb1-9d46-10ffe110ed16-kube-api-access-xkdbt\") pod \"busybox\" (UID: \"46c055c9-34d3-4bb1-9d46-10ffe110ed16\") " pod="default/busybox"
	Oct 27 19:39:46 old-k8s-version-468959 kubelet[1413]: I1027 19:39:46.609293    1413 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.709348548 podCreationTimestamp="2025-10-27 19:39:44 +0000 UTC" firstStartedPulling="2025-10-27 19:39:44.884536841 +0000 UTC m=+29.495241029" lastFinishedPulling="2025-10-27 19:39:45.784406165 +0000 UTC m=+30.395110359" observedRunningTime="2025-10-27 19:39:46.60919212 +0000 UTC m=+31.219896310" watchObservedRunningTime="2025-10-27 19:39:46.609217878 +0000 UTC m=+31.219922077"
	
	
	==> storage-provisioner [dcabac759190c6ca049f14a837d89a14c4115a7a713fcd01bc62adb12bdca963] <==
	I1027 19:39:41.592376       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1027 19:39:41.602707       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1027 19:39:41.602811       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1027 19:39:41.612862       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1027 19:39:41.613033       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-468959_1b7632b1-3d93-4d9b-b1d9-36b643287460!
	I1027 19:39:41.612965       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"677bc0f8-1050-43ba-894e-0ebdacb32030", APIVersion:"v1", ResourceVersion:"398", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-468959_1b7632b1-3d93-4d9b-b1d9-36b643287460 became leader
	I1027 19:39:41.713913       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-468959_1b7632b1-3d93-4d9b-b1d9-36b643287460!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-468959 -n old-k8s-version-468959
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-468959 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.76s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.83s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-919237 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-919237 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (467.925961ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T19:40:41Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-919237 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
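The root cause above is the pause check, not the addon itself: per the stderr, minikube shells out to `sudo runc list -f json` inside the node, and runc exits with status 1 with `open /run/runc: no such file or directory`, i.e. runc's state directory is absent on this crio-based node. A minimal Go sketch of that probe (hypothetical reproduction for diagnosis, not minikube's actual code path):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Same probe the paused-state check runs inside the node (see the
		// stderr above): runc exits with status 1 when its state directory
		// /run/runc does not exist.
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		fmt.Printf("exit: %v\noutput: %s\n", err, out)
	}

Running the same command via `out/minikube-linux-amd64 -p embed-certs-919237 ssh` should surface the identical `open /run/runc: no such file or directory` message, which points at the crio kicbase image rather than the metrics-server addon.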
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-919237 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-919237 describe deploy/metrics-server -n kube-system: exit status 1 (62.903707ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-919237 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
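The expected substring shows how the two flags are meant to compose: the `--registries=MetricsServer=fake.domain` value is prefixed onto the `--images=MetricsServer=registry.k8s.io/echoserver:1.4` value to form the pulled reference. A minimal sketch of that composition (hypothetical helper, assuming the simple prefixing the assertion implies; not minikube's actual code):

	package main

	import "fmt"

	// composeAddonImage mirrors the expectation asserted at
	// start_stop_delete_test.go:219: an addon's registry override is
	// prefixed onto its image override.
	func composeAddonImage(registry, image string) string {
		if registry == "" {
			return image
		}
		return registry + "/" + image
	}

	func main() {
		// Prints: fake.domain/registry.k8s.io/echoserver:1.4
		fmt.Println(composeAddonImage("fake.domain", "registry.k8s.io/echoserver:1.4"))
	}

Here the deployment was never created (the enable command failed on the pause check above), so the assertion sees empty deployment info instead of the composed image.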
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-919237
helpers_test.go:243: (dbg) docker inspect embed-certs-919237:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "37808aa2dc4c4127748e535c42c1ec4333eeed40f14d98040de3f085b9d38b11",
	        "Created": "2025-10-27T19:39:55.06890143Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 580426,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-27T19:39:55.11528435Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/37808aa2dc4c4127748e535c42c1ec4333eeed40f14d98040de3f085b9d38b11/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/37808aa2dc4c4127748e535c42c1ec4333eeed40f14d98040de3f085b9d38b11/hostname",
	        "HostsPath": "/var/lib/docker/containers/37808aa2dc4c4127748e535c42c1ec4333eeed40f14d98040de3f085b9d38b11/hosts",
	        "LogPath": "/var/lib/docker/containers/37808aa2dc4c4127748e535c42c1ec4333eeed40f14d98040de3f085b9d38b11/37808aa2dc4c4127748e535c42c1ec4333eeed40f14d98040de3f085b9d38b11-json.log",
	        "Name": "/embed-certs-919237",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "embed-certs-919237:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-919237",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "37808aa2dc4c4127748e535c42c1ec4333eeed40f14d98040de3f085b9d38b11",
	                "LowerDir": "/var/lib/docker/overlay2/1a197dc40b03763e74d9e2a466d399c472fd8d02996bb7655be8275cee948408-init/diff:/var/lib/docker/overlay2/71b61ec94610a35f2d924dec358052d4c154c36b3fe219802f60246ca2dc7f45/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1a197dc40b03763e74d9e2a466d399c472fd8d02996bb7655be8275cee948408/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1a197dc40b03763e74d9e2a466d399c472fd8d02996bb7655be8275cee948408/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1a197dc40b03763e74d9e2a466d399c472fd8d02996bb7655be8275cee948408/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-919237",
	                "Source": "/var/lib/docker/volumes/embed-certs-919237/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-919237",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-919237",
	                "name.minikube.sigs.k8s.io": "embed-certs-919237",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7298dad8a5d93218637044c6c8ebefc30875f67d84ed4291f9b4033ba7e57939",
	            "SandboxKey": "/var/run/docker/netns/7298dad8a5d9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33430"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33431"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33434"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33432"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33433"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-919237": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "06:dc:96:96:b8:30",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "999393307eef706ac69479cce1c654e615bbf1533042b5bf717c2605b3087cda",
	                    "EndpointID": "170d8cb6a84c615143b488108eededdc687c946a260a128324c85f9760506fce",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-919237",
	                        "37808aa2dc4c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
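
The NetworkSettings.Ports map in the inspect output above is how the embed-certs-919237 node is reached from the host; later in this report minikube reads the same data back with the Go template {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}. As a minimal sketch (not minikube's code; the struct fields just mirror the docker inspect JSON shown above), the same lookup in Go:

	// portmap.go: read `docker container inspect <name>` JSON from stdin and
	// print the host address mapped to the container's 22/tcp endpoint.
	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	type inspect struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func main() {
		var out []inspect // docker inspect emits a JSON array, even for one container
		if err := json.NewDecoder(os.Stdin).Decode(&out); err != nil || len(out) == 0 {
			fmt.Fprintln(os.Stderr, "no inspect output:", err)
			os.Exit(1)
		}
		ssh := out[0].NetworkSettings.Ports["22/tcp"]
		if len(ssh) == 0 {
			fmt.Fprintln(os.Stderr, "22/tcp is not published")
			os.Exit(1)
		}
		fmt.Printf("%s:%s\n", ssh[0].HostIp, ssh[0].HostPort) // e.g. 127.0.0.1:33430
	}

Piping `docker container inspect embed-certs-919237` into this would print 127.0.0.1:33430, matching the Ports block above.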
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-919237 -n embed-certs-919237
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-919237 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-919237 logs -n 25: (1.325643801s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-387383 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-387383                │ jenkins │ v1.37.0 │ 27 Oct 25 19:38 UTC │                     │
	│ ssh     │ -p cilium-387383 sudo containerd config dump                                                                                                                                                                                                  │ cilium-387383                │ jenkins │ v1.37.0 │ 27 Oct 25 19:38 UTC │                     │
	│ ssh     │ -p cilium-387383 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-387383                │ jenkins │ v1.37.0 │ 27 Oct 25 19:38 UTC │                     │
	│ ssh     │ -p cilium-387383 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-387383                │ jenkins │ v1.37.0 │ 27 Oct 25 19:38 UTC │                     │
	│ ssh     │ -p cilium-387383 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-387383                │ jenkins │ v1.37.0 │ 27 Oct 25 19:38 UTC │                     │
	│ ssh     │ -p cilium-387383 sudo crio config                                                                                                                                                                                                             │ cilium-387383                │ jenkins │ v1.37.0 │ 27 Oct 25 19:38 UTC │                     │
	│ delete  │ -p cilium-387383                                                                                                                                                                                                                              │ cilium-387383                │ jenkins │ v1.37.0 │ 27 Oct 25 19:38 UTC │ 27 Oct 25 19:38 UTC │
	│ start   │ -p old-k8s-version-468959 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-468959       │ jenkins │ v1.37.0 │ 27 Oct 25 19:38 UTC │ 27 Oct 25 19:39 UTC │
	│ delete  │ -p NoKubernetes-668991                                                                                                                                                                                                                        │ NoKubernetes-668991          │ jenkins │ v1.37.0 │ 27 Oct 25 19:38 UTC │ 27 Oct 25 19:39 UTC │
	│ start   │ -p NoKubernetes-668991 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                         │ NoKubernetes-668991          │ jenkins │ v1.37.0 │ 27 Oct 25 19:39 UTC │ 27 Oct 25 19:39 UTC │
	│ ssh     │ -p NoKubernetes-668991 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-668991          │ jenkins │ v1.37.0 │ 27 Oct 25 19:39 UTC │                     │
	│ stop    │ -p NoKubernetes-668991                                                                                                                                                                                                                        │ NoKubernetes-668991          │ jenkins │ v1.37.0 │ 27 Oct 25 19:39 UTC │ 27 Oct 25 19:39 UTC │
	│ start   │ -p NoKubernetes-668991 --driver=docker  --container-runtime=crio                                                                                                                                                                              │ NoKubernetes-668991          │ jenkins │ v1.37.0 │ 27 Oct 25 19:39 UTC │ 27 Oct 25 19:39 UTC │
	│ ssh     │ -p NoKubernetes-668991 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-668991          │ jenkins │ v1.37.0 │ 27 Oct 25 19:39 UTC │                     │
	│ delete  │ -p NoKubernetes-668991                                                                                                                                                                                                                        │ NoKubernetes-668991          │ jenkins │ v1.37.0 │ 27 Oct 25 19:39 UTC │ 27 Oct 25 19:39 UTC │
	│ start   │ -p embed-certs-919237 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-919237           │ jenkins │ v1.37.0 │ 27 Oct 25 19:39 UTC │ 27 Oct 25 19:40 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-468959 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-468959       │ jenkins │ v1.37.0 │ 27 Oct 25 19:39 UTC │                     │
	│ stop    │ -p old-k8s-version-468959 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-468959       │ jenkins │ v1.37.0 │ 27 Oct 25 19:39 UTC │ 27 Oct 25 19:40 UTC │
	│ start   │ -p cert-expiration-368442 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-368442       │ jenkins │ v1.37.0 │ 27 Oct 25 19:40 UTC │ 27 Oct 25 19:40 UTC │
	│ delete  │ -p cert-expiration-368442                                                                                                                                                                                                                     │ cert-expiration-368442       │ jenkins │ v1.37.0 │ 27 Oct 25 19:40 UTC │ 27 Oct 25 19:40 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-468959 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-468959       │ jenkins │ v1.37.0 │ 27 Oct 25 19:40 UTC │ 27 Oct 25 19:40 UTC │
	│ start   │ -p old-k8s-version-468959 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-468959       │ jenkins │ v1.37.0 │ 27 Oct 25 19:40 UTC │                     │
	│ delete  │ -p disable-driver-mounts-926399                                                                                                                                                                                                               │ disable-driver-mounts-926399 │ jenkins │ v1.37.0 │ 27 Oct 25 19:40 UTC │ 27 Oct 25 19:40 UTC │
	│ start   │ -p no-preload-095885 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-095885            │ jenkins │ v1.37.0 │ 27 Oct 25 19:40 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-919237 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-919237           │ jenkins │ v1.37.0 │ 27 Oct 25 19:40 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 19:40:13
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 19:40:13.988092  585556 out.go:360] Setting OutFile to fd 1 ...
	I1027 19:40:13.988450  585556 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:40:13.988461  585556 out.go:374] Setting ErrFile to fd 2...
	I1027 19:40:13.988465  585556 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:40:13.988659  585556 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-352833/.minikube/bin
	I1027 19:40:13.989261  585556 out.go:368] Setting JSON to false
	I1027 19:40:13.990685  585556 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":8563,"bootTime":1761585451,"procs":339,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1027 19:40:13.990787  585556 start.go:141] virtualization: kvm guest
	I1027 19:40:13.992798  585556 out.go:179] * [no-preload-095885] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1027 19:40:13.994860  585556 notify.go:220] Checking for updates...
	I1027 19:40:13.994915  585556 out.go:179]   - MINIKUBE_LOCATION=21801
	I1027 19:40:13.996547  585556 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 19:40:13.997936  585556 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21801-352833/kubeconfig
	I1027 19:40:13.999083  585556 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21801-352833/.minikube
	I1027 19:40:14.000447  585556 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1027 19:40:14.001777  585556 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 19:40:14.003551  585556 config.go:182] Loaded profile config "embed-certs-919237": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:40:14.003664  585556 config.go:182] Loaded profile config "kubernetes-upgrade-360986": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:40:14.003777  585556 config.go:182] Loaded profile config "old-k8s-version-468959": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1027 19:40:14.003888  585556 driver.go:421] Setting default libvirt URI to qemu:///system
	I1027 19:40:14.032750  585556 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1027 19:40:14.032853  585556 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 19:40:14.100831  585556 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:78 SystemTime:2025-10-27 19:40:14.086994027 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 19:40:14.100990  585556 docker.go:318] overlay module found
	I1027 19:40:14.103795  585556 out.go:179] * Using the docker driver based on user configuration
	I1027 19:40:14.105098  585556 start.go:305] selected driver: docker
	I1027 19:40:14.105121  585556 start.go:925] validating driver "docker" against <nil>
	I1027 19:40:14.105149  585556 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 19:40:14.106110  585556 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 19:40:14.176400  585556 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:78 SystemTime:2025-10-27 19:40:14.164379307 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 19:40:14.176665  585556 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1027 19:40:14.176909  585556 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 19:40:14.178855  585556 out.go:179] * Using Docker driver with root privileges
	I1027 19:40:14.180085  585556 cni.go:84] Creating CNI manager for ""
	I1027 19:40:14.180198  585556 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 19:40:14.180216  585556 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1027 19:40:14.180329  585556 start.go:349] cluster config:
	{Name:no-preload-095885 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-095885 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 19:40:14.182111  585556 out.go:179] * Starting "no-preload-095885" primary control-plane node in "no-preload-095885" cluster
	I1027 19:40:14.183445  585556 cache.go:123] Beginning downloading kic base image for docker with crio
	I1027 19:40:14.184810  585556 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1027 19:40:14.186203  585556 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 19:40:14.186238  585556 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1027 19:40:14.186361  585556 profile.go:143] Saving config to /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/no-preload-095885/config.json ...
	I1027 19:40:14.186403  585556 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/no-preload-095885/config.json: {Name:mk267f5783f71ba278edab0b51e6a6edc33b9089 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:40:14.186490  585556 cache.go:107] acquiring lock: {Name:mk01b17b21d46030a4c787d0bd4e9fe1b72ed247 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 19:40:14.186490  585556 cache.go:107] acquiring lock: {Name:mk6cfd97bf118a5d00dc3712cc15a56368d5b133 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 19:40:14.186517  585556 cache.go:107] acquiring lock: {Name:mk55852f2c481df2db7f9a6da7c274b8e85d7edb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 19:40:14.186543  585556 cache.go:107] acquiring lock: {Name:mk5369f4c071c5263ddc432fb15330ba0423cdfe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 19:40:14.186600  585556 cache.go:107] acquiring lock: {Name:mk5cfaf9a7e19dd9a7184f304b6ee85a4979e6eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 19:40:14.186635  585556 cache.go:107] acquiring lock: {Name:mka4e762c0cdf96fdeade218e5825c211c417983 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 19:40:14.186684  585556 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1027 19:40:14.186691  585556 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1027 19:40:14.186672  585556 cache.go:107] acquiring lock: {Name:mk2ed104f61ec06a04ca37afb2389902cee0a37d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 19:40:14.186642  585556 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1027 19:40:14.186755  585556 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1027 19:40:14.186773  585556 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1027 19:40:14.186783  585556 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1027 19:40:14.186951  585556 cache.go:115] /home/jenkins/minikube-integration/21801-352833/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1027 19:40:14.186941  585556 cache.go:107] acquiring lock: {Name:mk849f9e68d9ca24fd7e38d749b2eace2906ff3c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 19:40:14.186963  585556 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21801-352833/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 487.777µs
	I1027 19:40:14.186979  585556 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21801-352833/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1027 19:40:14.187033  585556 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1027 19:40:14.188216  585556 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1027 19:40:14.188214  585556 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1027 19:40:14.188259  585556 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1027 19:40:14.188345  585556 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1027 19:40:14.188352  585556 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1027 19:40:14.188377  585556 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1027 19:40:14.188561  585556 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1027 19:40:14.212425  585556 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1027 19:40:14.212455  585556 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1027 19:40:14.212480  585556 cache.go:232] Successfully downloaded all kic artifacts
	I1027 19:40:14.212521  585556 start.go:360] acquireMachinesLock for no-preload-095885: {Name:mk5366014920cd048c3c430c094258bb47a34d04 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 19:40:14.212651  585556 start.go:364] duration metric: took 103.449µs to acquireMachinesLock for "no-preload-095885"
	I1027 19:40:14.212687  585556 start.go:93] Provisioning new machine with config: &{Name:no-preload-095885 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-095885 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
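
The config blob above is what the earlier profile.go:143 line serializes to .minikube/profiles/no-preload-095885/config.json. A hedged sketch of reading a few of those fields back, with a struct guessed from this log rather than taken from minikube's real types:

	// readprofile.go: decode a minikube profile config.json and print the
	// fields the config.go:182 lines above report. The struct is a partial
	// guess inferred from this log, not minikube's actual cluster config type.
	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	type clusterConfig struct {
		Name             string
		Driver           string
		KubernetesConfig struct {
			KubernetesVersion string
			ContainerRuntime  string
		}
	}

	func main() {
		if len(os.Args) < 2 {
			fmt.Fprintln(os.Stderr, "usage: readprofile <path-to-config.json>")
			os.Exit(1)
		}
		f, err := os.Open(os.Args[1]) // e.g. ~/.minikube/profiles/no-preload-095885/config.json
		if err != nil {
			panic(err)
		}
		defer f.Close()
		var cfg clusterConfig
		if err := json.NewDecoder(f).Decode(&cfg); err != nil {
			panic(err)
		}
		fmt.Printf("Loaded profile config %q: Driver=%s, ContainerRuntime=%s, KubernetesVersion=%s\n",
			cfg.Name, cfg.Driver, cfg.KubernetesConfig.ContainerRuntime, cfg.KubernetesConfig.KubernetesVersion)
	}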
	I1027 19:40:14.212780  585556 start.go:125] createHost starting for "" (driver="docker")
	I1027 19:40:13.197950  579549 out.go:252]   - Configuring RBAC rules ...
	I1027 19:40:13.198106  579549 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1027 19:40:13.201782  579549 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1027 19:40:13.211301  579549 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1027 19:40:13.214775  579549 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1027 19:40:13.224186  579549 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1027 19:40:13.228325  579549 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1027 19:40:13.542412  579549 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1027 19:40:13.971112  579549 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1027 19:40:14.541246  579549 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1027 19:40:14.542531  579549 kubeadm.go:318] 
	I1027 19:40:14.542639  579549 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1027 19:40:14.542650  579549 kubeadm.go:318] 
	I1027 19:40:14.542768  579549 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1027 19:40:14.542777  579549 kubeadm.go:318] 
	I1027 19:40:14.542809  579549 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1027 19:40:14.542881  579549 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1027 19:40:14.542947  579549 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1027 19:40:14.542953  579549 kubeadm.go:318] 
	I1027 19:40:14.543022  579549 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1027 19:40:14.543027  579549 kubeadm.go:318] 
	I1027 19:40:14.543084  579549 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1027 19:40:14.543097  579549 kubeadm.go:318] 
	I1027 19:40:14.543191  579549 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1027 19:40:14.543303  579549 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1027 19:40:14.543560  579549 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1027 19:40:14.543574  579549 kubeadm.go:318] 
	I1027 19:40:14.543684  579549 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1027 19:40:14.543808  579549 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1027 19:40:14.543823  579549 kubeadm.go:318] 
	I1027 19:40:14.543926  579549 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token z8fkzn.3wwpaxw8eewim5c7 \
	I1027 19:40:14.544072  579549 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:ab29e81999671591f366788f5ae9ffb132789ebc71f7c0efdaecd38575a5ab6a \
	I1027 19:40:14.544110  579549 kubeadm.go:318] 	--control-plane 
	I1027 19:40:14.544120  579549 kubeadm.go:318] 
	I1027 19:40:14.544313  579549 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1027 19:40:14.544342  579549 kubeadm.go:318] 
	I1027 19:40:14.544466  579549 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token z8fkzn.3wwpaxw8eewim5c7 \
	I1027 19:40:14.544599  579549 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:ab29e81999671591f366788f5ae9ffb132789ebc71f7c0efdaecd38575a5ab6a 
	I1027 19:40:14.547754  579549 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1042-gcp\n", err: exit status 1
	I1027 19:40:14.547917  579549 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
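
The --discovery-token-ca-cert-hash in the join commands above is not random: it is the SHA-256 digest of the cluster CA's DER-encoded SubjectPublicKeyInfo. A minimal sketch of recomputing it from the standard kubeadm CA path (path assumed; run on the control-plane node):

	// cahash.go: recompute kubeadm's discovery-token-ca-cert-hash as
	// sha256 over the DER-encoded public key of the cluster CA.
	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt") // standard kubeadm layout
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			panic("ca.crt contains no PEM block")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
		fmt.Printf("sha256:%x\n", sum) // should match the hash in the kubeadm join lines above
	}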
	I1027 19:40:14.547948  579549 cni.go:84] Creating CNI manager for ""
	I1027 19:40:14.547961  579549 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 19:40:14.564799  579549 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1027 19:40:14.574313  579549 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1027 19:40:14.579518  579549 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1027 19:40:14.579541  579549 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1027 19:40:14.597181  579549 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1027 19:40:14.035718  565798 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1027 19:40:14.036192  565798 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1027 19:40:14.036251  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 19:40:14.036308  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 19:40:14.072904  565798 cri.go:89] found id: "047d3a4a3e1e5638984a05fc7ebff787c5c5c7f381d978e93c663acb37994b72"
	I1027 19:40:14.072922  565798 cri.go:89] found id: ""
	I1027 19:40:14.072931  565798 logs.go:282] 1 containers: [047d3a4a3e1e5638984a05fc7ebff787c5c5c7f381d978e93c663acb37994b72]
	I1027 19:40:14.072985  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:40:14.077867  565798 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 19:40:14.077979  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 19:40:14.111599  565798 cri.go:89] found id: ""
	I1027 19:40:14.111625  565798 logs.go:282] 0 containers: []
	W1027 19:40:14.111633  565798 logs.go:284] No container was found matching "etcd"
	I1027 19:40:14.111640  565798 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 19:40:14.111690  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 19:40:14.148498  565798 cri.go:89] found id: ""
	I1027 19:40:14.148531  565798 logs.go:282] 0 containers: []
	W1027 19:40:14.148542  565798 logs.go:284] No container was found matching "coredns"
	I1027 19:40:14.148550  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 19:40:14.148639  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 19:40:14.181128  565798 cri.go:89] found id: "15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8"
	I1027 19:40:14.181163  565798 cri.go:89] found id: ""
	I1027 19:40:14.181204  565798 logs.go:282] 1 containers: [15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8]
	I1027 19:40:14.181289  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:40:14.185991  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 19:40:14.186068  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 19:40:14.222389  565798 cri.go:89] found id: ""
	I1027 19:40:14.222416  565798 logs.go:282] 0 containers: []
	W1027 19:40:14.222427  565798 logs.go:284] No container was found matching "kube-proxy"
	I1027 19:40:14.222436  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 19:40:14.222498  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 19:40:14.256592  565798 cri.go:89] found id: "df060ac929bc7a5dac337c7e85e10b2f4a51413be70b8202c8307826c4a72947"
	I1027 19:40:14.256624  565798 cri.go:89] found id: ""
	I1027 19:40:14.256638  565798 logs.go:282] 1 containers: [df060ac929bc7a5dac337c7e85e10b2f4a51413be70b8202c8307826c4a72947]
	I1027 19:40:14.256699  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:40:14.261207  565798 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 19:40:14.261285  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 19:40:14.292988  565798 cri.go:89] found id: ""
	I1027 19:40:14.293008  565798 logs.go:282] 0 containers: []
	W1027 19:40:14.293015  565798 logs.go:284] No container was found matching "kindnet"
	I1027 19:40:14.293021  565798 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 19:40:14.293066  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 19:40:14.326480  565798 cri.go:89] found id: ""
	I1027 19:40:14.326508  565798 logs.go:282] 0 containers: []
	W1027 19:40:14.326516  565798 logs.go:284] No container was found matching "storage-provisioner"
	I1027 19:40:14.326526  565798 logs.go:123] Gathering logs for dmesg ...
	I1027 19:40:14.326539  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 19:40:14.350103  565798 logs.go:123] Gathering logs for describe nodes ...
	I1027 19:40:14.350144  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 19:40:14.413973  565798 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 19:40:14.413998  565798 logs.go:123] Gathering logs for kube-apiserver [047d3a4a3e1e5638984a05fc7ebff787c5c5c7f381d978e93c663acb37994b72] ...
	I1027 19:40:14.414014  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 047d3a4a3e1e5638984a05fc7ebff787c5c5c7f381d978e93c663acb37994b72"
	I1027 19:40:14.452205  565798 logs.go:123] Gathering logs for kube-scheduler [15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8] ...
	I1027 19:40:14.452240  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8"
	I1027 19:40:14.510349  565798 logs.go:123] Gathering logs for kube-controller-manager [df060ac929bc7a5dac337c7e85e10b2f4a51413be70b8202c8307826c4a72947] ...
	I1027 19:40:14.510380  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 df060ac929bc7a5dac337c7e85e10b2f4a51413be70b8202c8307826c4a72947"
	I1027 19:40:14.545999  565798 logs.go:123] Gathering logs for CRI-O ...
	I1027 19:40:14.546028  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 19:40:14.603544  565798 logs.go:123] Gathering logs for container status ...
	I1027 19:40:14.603576  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 19:40:14.644638  565798 logs.go:123] Gathering logs for kubelet ...
	I1027 19:40:14.644674  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
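
The run above starts with an apiserver liveness probe (api_server.go:253) that fails with connection refused before the log gathering kicks in. The probe itself is just an HTTPS GET against /healthz; a minimal sketch, skipping certificate verification only because this sketch does not load the cluster CA:

	// healthz.go: probe the apiserver the way the log above does.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 2 * time.Second,
			Transport: &http.Transport{
				// a real client would trust the cluster CA instead of skipping verification
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://192.168.103.2:8443/healthz")
		if err != nil {
			fmt.Println("stopped:", err) // e.g. connect: connection refused, as seen above
			return
		}
		defer resp.Body.Close()
		fmt.Println("healthz:", resp.Status)
	}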
	I1027 19:40:13.043600  584758 out.go:252] * Restarting existing docker container for "old-k8s-version-468959" ...
	I1027 19:40:13.043676  584758 cli_runner.go:164] Run: docker start old-k8s-version-468959
	I1027 19:40:13.334326  584758 cli_runner.go:164] Run: docker container inspect old-k8s-version-468959 --format={{.State.Status}}
	I1027 19:40:13.354650  584758 kic.go:430] container "old-k8s-version-468959" state is running.
	I1027 19:40:13.355047  584758 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-468959
	I1027 19:40:13.375325  584758 profile.go:143] Saving config to /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/old-k8s-version-468959/config.json ...
	I1027 19:40:13.375623  584758 machine.go:93] provisionDockerMachine start ...
	I1027 19:40:13.375727  584758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-468959
	I1027 19:40:13.396375  584758 main.go:141] libmachine: Using SSH client type: native
	I1027 19:40:13.396598  584758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33435 <nil> <nil>}
	I1027 19:40:13.396610  584758 main.go:141] libmachine: About to run SSH command:
	hostname
	I1027 19:40:13.397222  584758 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41886->127.0.0.1:33435: read: connection reset by peer
	I1027 19:40:16.542706  584758 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-468959
	
	I1027 19:40:16.542741  584758 ubuntu.go:182] provisioning hostname "old-k8s-version-468959"
	I1027 19:40:16.542832  584758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-468959
	I1027 19:40:16.562891  584758 main.go:141] libmachine: Using SSH client type: native
	I1027 19:40:16.563122  584758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33435 <nil> <nil>}
	I1027 19:40:16.563154  584758 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-468959 && echo "old-k8s-version-468959" | sudo tee /etc/hostname
	I1027 19:40:16.719259  584758 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-468959
	
	I1027 19:40:16.719366  584758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-468959
	I1027 19:40:16.739054  584758 main.go:141] libmachine: Using SSH client type: native
	I1027 19:40:16.739412  584758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33435 <nil> <nil>}
	I1027 19:40:16.739437  584758 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-468959' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-468959/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-468959' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1027 19:40:16.883078  584758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1027 19:40:16.883109  584758 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21801-352833/.minikube CaCertPath:/home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21801-352833/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21801-352833/.minikube}
	I1027 19:40:16.883180  584758 ubuntu.go:190] setting up certificates
	I1027 19:40:16.883200  584758 provision.go:84] configureAuth start
	I1027 19:40:16.883272  584758 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-468959
	I1027 19:40:16.901299  584758 provision.go:143] copyHostCerts
	I1027 19:40:16.901390  584758 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-352833/.minikube/ca.pem, removing ...
	I1027 19:40:16.901412  584758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-352833/.minikube/ca.pem
	I1027 19:40:16.901476  584758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21801-352833/.minikube/ca.pem (1078 bytes)
	I1027 19:40:16.901619  584758 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-352833/.minikube/cert.pem, removing ...
	I1027 19:40:16.901633  584758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-352833/.minikube/cert.pem
	I1027 19:40:16.901665  584758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21801-352833/.minikube/cert.pem (1123 bytes)
	I1027 19:40:16.901762  584758 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-352833/.minikube/key.pem, removing ...
	I1027 19:40:16.901776  584758 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-352833/.minikube/key.pem
	I1027 19:40:16.901806  584758 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21801-352833/.minikube/key.pem (1679 bytes)
	I1027 19:40:16.901898  584758 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21801-352833/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-468959 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-468959]
	I1027 19:40:17.042656  584758 provision.go:177] copyRemoteCerts
	I1027 19:40:17.042721  584758 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1027 19:40:17.042757  584758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-468959
	I1027 19:40:17.063614  584758 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/old-k8s-version-468959/id_rsa Username:docker}
	I1027 19:40:17.166335  584758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1027 19:40:17.185645  584758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1027 19:40:17.205420  584758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1027 19:40:17.224840  584758 provision.go:87] duration metric: took 341.619799ms to configureAuth
	I1027 19:40:17.224871  584758 ubuntu.go:206] setting minikube options for container-runtime
	I1027 19:40:17.225048  584758 config.go:182] Loaded profile config "old-k8s-version-468959": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1027 19:40:17.225167  584758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-468959
	I1027 19:40:17.246150  584758 main.go:141] libmachine: Using SSH client type: native
	I1027 19:40:17.246397  584758 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33435 <nil> <nil>}
	I1027 19:40:17.246423  584758 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1027 19:40:17.588749  584758 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1027 19:40:17.588780  584758 machine.go:96] duration metric: took 4.213135914s to provisionDockerMachine
	I1027 19:40:17.588795  584758 start.go:293] postStartSetup for "old-k8s-version-468959" (driver="docker")
	I1027 19:40:17.588809  584758 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1027 19:40:17.588883  584758 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1027 19:40:17.588938  584758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-468959
	I1027 19:40:17.609522  584758 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/old-k8s-version-468959/id_rsa Username:docker}
	I1027 19:40:17.713801  584758 ssh_runner.go:195] Run: cat /etc/os-release
	I1027 19:40:17.718097  584758 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1027 19:40:17.718130  584758 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1027 19:40:17.718178  584758 filesync.go:126] Scanning /home/jenkins/minikube-integration/21801-352833/.minikube/addons for local assets ...
	I1027 19:40:17.718244  584758 filesync.go:126] Scanning /home/jenkins/minikube-integration/21801-352833/.minikube/files for local assets ...
	I1027 19:40:17.718355  584758 filesync.go:149] local asset: /home/jenkins/minikube-integration/21801-352833/.minikube/files/etc/ssl/certs/3564152.pem -> 3564152.pem in /etc/ssl/certs
	I1027 19:40:17.718482  584758 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1027 19:40:17.727948  584758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/files/etc/ssl/certs/3564152.pem --> /etc/ssl/certs/3564152.pem (1708 bytes)
	I1027 19:40:17.748096  584758 start.go:296] duration metric: took 159.279782ms for postStartSetup
	I1027 19:40:17.748197  584758 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 19:40:17.748251  584758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-468959
	I1027 19:40:17.769681  584758 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/old-k8s-version-468959/id_rsa Username:docker}
	I1027 19:40:14.894878  579549 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1027 19:40:14.894959  579549 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:40:14.894981  579549 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-919237 minikube.k8s.io/updated_at=2025_10_27T19_40_14_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6a09c31caecd3126cc8337ca95e52380a56f5a0f minikube.k8s.io/name=embed-certs-919237 minikube.k8s.io/primary=true
	I1027 19:40:14.907925  579549 ops.go:34] apiserver oom_adj: -16
	I1027 19:40:14.989734  579549 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:40:15.490000  579549 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:40:15.990360  579549 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:40:16.490293  579549 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:40:16.990364  579549 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:40:17.489933  579549 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:40:17.989863  579549 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:40:18.492279  579549 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:40:18.568028  579549 kubeadm.go:1113] duration metric: took 3.673128622s to wait for elevateKubeSystemPrivileges
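The run of `get sa default` calls every ~500ms above is a readiness poll: the "default" ServiceAccount only appears once the controller-manager's service-account controllers are up, so its existence is the signal that the RBAC bootstrap can finish. A sketch of the same loop in shell (binary and kubeconfig paths as in the log):

	until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done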
	I1027 19:40:18.568070  579549 kubeadm.go:402] duration metric: took 15.819116747s to StartCluster
	I1027 19:40:18.568095  579549 settings.go:142] acquiring lock: {Name:mk8304c2106bf310642e0949fc0266ccb50f2f95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:40:18.568275  579549 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21801-352833/kubeconfig
	I1027 19:40:18.570002  579549 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-352833/kubeconfig: {Name:mk24cbe512a6907c874f3fb7a85390a8f9fd2b31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:40:18.570379  579549 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1027 19:40:18.570392  579549 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 19:40:18.570476  579549 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1027 19:40:18.570573  579549 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-919237"
	I1027 19:40:18.570593  579549 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-919237"
	I1027 19:40:18.570594  579549 config.go:182] Loaded profile config "embed-certs-919237": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:40:18.570612  579549 addons.go:69] Setting default-storageclass=true in profile "embed-certs-919237"
	I1027 19:40:18.570628  579549 host.go:66] Checking if "embed-certs-919237" exists ...
	I1027 19:40:18.570635  579549 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-919237"
	I1027 19:40:18.571013  579549 cli_runner.go:164] Run: docker container inspect embed-certs-919237 --format={{.State.Status}}
	I1027 19:40:18.571202  579549 cli_runner.go:164] Run: docker container inspect embed-certs-919237 --format={{.State.Status}}
	I1027 19:40:18.572707  579549 out.go:179] * Verifying Kubernetes components...
	I1027 19:40:18.574365  579549 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 19:40:18.607586  579549 addons.go:238] Setting addon default-storageclass=true in "embed-certs-919237"
	I1027 19:40:18.607639  579549 host.go:66] Checking if "embed-certs-919237" exists ...
	I1027 19:40:18.608281  579549 cli_runner.go:164] Run: docker container inspect embed-certs-919237 --format={{.State.Status}}
	I1027 19:40:18.610322  579549 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 19:40:14.215752  585556 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1027 19:40:14.215976  585556 start.go:159] libmachine.API.Create for "no-preload-095885" (driver="docker")
	I1027 19:40:14.216008  585556 client.go:168] LocalClient.Create starting
	I1027 19:40:14.216090  585556 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca.pem
	I1027 19:40:14.216157  585556 main.go:141] libmachine: Decoding PEM data...
	I1027 19:40:14.216180  585556 main.go:141] libmachine: Parsing certificate...
	I1027 19:40:14.216252  585556 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21801-352833/.minikube/certs/cert.pem
	I1027 19:40:14.216283  585556 main.go:141] libmachine: Decoding PEM data...
	I1027 19:40:14.216298  585556 main.go:141] libmachine: Parsing certificate...
	I1027 19:40:14.216733  585556 cli_runner.go:164] Run: docker network inspect no-preload-095885 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1027 19:40:14.238005  585556 cli_runner.go:211] docker network inspect no-preload-095885 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1027 19:40:14.238083  585556 network_create.go:284] running [docker network inspect no-preload-095885] to gather additional debugging logs...
	I1027 19:40:14.238108  585556 cli_runner.go:164] Run: docker network inspect no-preload-095885
	W1027 19:40:14.258358  585556 cli_runner.go:211] docker network inspect no-preload-095885 returned with exit code 1
	I1027 19:40:14.258395  585556 network_create.go:287] error running [docker network inspect no-preload-095885]: docker network inspect no-preload-095885: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-095885 not found
	I1027 19:40:14.258417  585556 network_create.go:289] output of [docker network inspect no-preload-095885]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-095885 not found
	
	** /stderr **
	I1027 19:40:14.258504  585556 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 19:40:14.279714  585556 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-04e197bde7e8 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6a:8c:cb:7c:68:31} reservation:<nil>}
	I1027 19:40:14.280623  585556 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-e37fd2b092bc IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:26:98:e3:c0:d9:8a} reservation:<nil>}
	I1027 19:40:14.281080  585556 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-bbd9ae70d20d IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ea:7f:4f:eb:e4:a1} reservation:<nil>}
	I1027 19:40:14.281720  585556 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000361600}
	I1027 19:40:14.281752  585556 network_create.go:124] attempt to create docker network no-preload-095885 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1027 19:40:14.281806  585556 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-095885 no-preload-095885
	I1027 19:40:14.348718  585556 network_create.go:108] docker network no-preload-095885 192.168.76.0/24 created
	I1027 19:40:14.348759  585556 kic.go:121] calculated static IP "192.168.76.2" for the "no-preload-095885" container
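The three "skipping subnet" lines show the allocation strategy: walk 192.168.x.0/24 candidates in steps of 9 (49, 58, 67, 76, ...) and take the first block no existing bridge owns, then pin .1 as the gateway and .2 as the node address. A rough shell equivalent of the probe (illustrative only, not minikube's code):

	# subnets already claimed by docker networks (blank lines for non-IPAM networks)
	used=$(docker network ls -q | xargs docker network inspect -f '{{range .IPAM.Config}}{{.Subnet}}{{end}}')
	for third in 49 58 67 76 85 94 103; do
	  echo "$used" | grep -q "192.168.$third.0/24" || { echo "192.168.$third.0/24 is free"; break; }
	done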
	I1027 19:40:14.348819  585556 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1027 19:40:14.358431  585556 cache.go:162] opening:  /home/jenkins/minikube-integration/21801-352833/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1
	I1027 19:40:14.367608  585556 cache.go:162] opening:  /home/jenkins/minikube-integration/21801-352833/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0
	I1027 19:40:14.369055  585556 cli_runner.go:164] Run: docker volume create no-preload-095885 --label name.minikube.sigs.k8s.io=no-preload-095885 --label created_by.minikube.sigs.k8s.io=true
	I1027 19:40:14.370018  585556 cache.go:162] opening:  /home/jenkins/minikube-integration/21801-352833/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1
	I1027 19:40:14.373294  585556 cache.go:162] opening:  /home/jenkins/minikube-integration/21801-352833/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1
	I1027 19:40:14.377389  585556 cache.go:162] opening:  /home/jenkins/minikube-integration/21801-352833/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1027 19:40:14.378421  585556 cache.go:162] opening:  /home/jenkins/minikube-integration/21801-352833/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1027 19:40:14.378967  585556 cache.go:162] opening:  /home/jenkins/minikube-integration/21801-352833/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1
	I1027 19:40:14.390465  585556 oci.go:103] Successfully created a docker volume no-preload-095885
	I1027 19:40:14.390551  585556 cli_runner.go:164] Run: docker run --rm --name no-preload-095885-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-095885 --entrypoint /usr/bin/test -v no-preload-095885:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1027 19:40:14.453770  585556 cache.go:157] /home/jenkins/minikube-integration/21801-352833/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1027 19:40:14.453802  585556 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21801-352833/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 267.183179ms
	I1027 19:40:14.453819  585556 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21801-352833/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1027 19:40:14.713101  585556 cache.go:157] /home/jenkins/minikube-integration/21801-352833/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1027 19:40:14.713161  585556 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21801-352833/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1" took 526.68251ms
	I1027 19:40:14.713181  585556 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21801-352833/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1027 19:40:14.898760  585556 oci.go:107] Successfully prepared a docker volume no-preload-095885
	I1027 19:40:14.898803  585556 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	W1027 19:40:14.898908  585556 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1027 19:40:14.898951  585556 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1027 19:40:14.899001  585556 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1027 19:40:14.974626  585556 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-095885 --name no-preload-095885 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-095885 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-095885 --network no-preload-095885 --ip 192.168.76.2 --volume no-preload-095885:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1027 19:40:15.280380  585556 cli_runner.go:164] Run: docker container inspect no-preload-095885 --format={{.State.Running}}
	I1027 19:40:15.299941  585556 cli_runner.go:164] Run: docker container inspect no-preload-095885 --format={{.State.Status}}
	I1027 19:40:15.320009  585556 cli_runner.go:164] Run: docker exec no-preload-095885 stat /var/lib/dpkg/alternatives/iptables
	I1027 19:40:15.367276  585556 oci.go:144] the created container "no-preload-095885" has a running status.
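Trimmed to its load-bearing flags, the long `docker run` above reads as follows (values copied verbatim from the command; labels, the extra port publishes, and the image digest dropped for brevity). The privileged/unconfined settings plus tmpfs on /tmp and /run are what let systemd boot inside the container, and 22 and 8443 each land on a random loopback port (`127.0.0.1::`), which the SSH steps below dial:

	docker run -d -t --privileged \
	  --security-opt seccomp=unconfined --security-opt apparmor=unconfined \
	  --tmpfs /tmp --tmpfs /run \
	  -v /lib/modules:/lib/modules:ro \
	  --network no-preload-095885 --ip 192.168.76.2 \
	  --volume no-preload-095885:/var \
	  --memory=3072mb --expose 8443 \
	  --publish=127.0.0.1::8443 --publish=127.0.0.1::22 \
	  gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773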
	I1027 19:40:15.367313  585556 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21801-352833/.minikube/machines/no-preload-095885/id_rsa...
	I1027 19:40:15.742096  585556 cache.go:157] /home/jenkins/minikube-integration/21801-352833/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1027 19:40:15.742154  585556 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21801-352833/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 1.555488877s
	I1027 19:40:15.742173  585556 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21801-352833/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1027 19:40:15.804104  585556 cache.go:157] /home/jenkins/minikube-integration/21801-352833/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1027 19:40:15.804153  585556 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21801-352833/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1" took 1.61721512s
	I1027 19:40:15.804174  585556 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21801-352833/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1027 19:40:15.820532  585556 cache.go:157] /home/jenkins/minikube-integration/21801-352833/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1027 19:40:15.820572  585556 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21801-352833/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1" took 1.634028443s
	I1027 19:40:15.820591  585556 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21801-352833/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1027 19:40:15.865040  585556 cache.go:157] /home/jenkins/minikube-integration/21801-352833/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1027 19:40:15.865067  585556 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21801-352833/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1" took 1.678591911s
	I1027 19:40:15.865078  585556 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21801-352833/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1027 19:40:15.929051  585556 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21801-352833/.minikube/machines/no-preload-095885/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1027 19:40:15.964455  585556 cli_runner.go:164] Run: docker container inspect no-preload-095885 --format={{.State.Status}}
	I1027 19:40:15.986408  585556 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1027 19:40:15.986436  585556 kic_runner.go:114] Args: [docker exec --privileged no-preload-095885 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1027 19:40:16.043707  585556 cli_runner.go:164] Run: docker container inspect no-preload-095885 --format={{.State.Status}}
	I1027 19:40:16.065655  585556 machine.go:93] provisionDockerMachine start ...
	I1027 19:40:16.065757  585556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-095885
	I1027 19:40:16.086426  585556 main.go:141] libmachine: Using SSH client type: native
	I1027 19:40:16.086667  585556 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33440 <nil> <nil>}
	I1027 19:40:16.086681  585556 main.go:141] libmachine: About to run SSH command:
	hostname
	I1027 19:40:16.211207  585556 cache.go:157] /home/jenkins/minikube-integration/21801-352833/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I1027 19:40:16.211237  585556 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21801-352833/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 2.024685945s
	I1027 19:40:16.211253  585556 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21801-352833/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1027 19:40:16.211274  585556 cache.go:87] Successfully saved all images to host disk.
	I1027 19:40:16.232693  585556 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-095885
	
	I1027 19:40:16.232726  585556 ubuntu.go:182] provisioning hostname "no-preload-095885"
	I1027 19:40:16.232791  585556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-095885
	I1027 19:40:16.254080  585556 main.go:141] libmachine: Using SSH client type: native
	I1027 19:40:16.254385  585556 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33440 <nil> <nil>}
	I1027 19:40:16.254410  585556 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-095885 && echo "no-preload-095885" | sudo tee /etc/hostname
	I1027 19:40:16.408723  585556 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-095885
	
	I1027 19:40:16.408815  585556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-095885
	I1027 19:40:16.428843  585556 main.go:141] libmachine: Using SSH client type: native
	I1027 19:40:16.429085  585556 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33440 <nil> <nil>}
	I1027 19:40:16.429107  585556 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-095885' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-095885/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-095885' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1027 19:40:16.575178  585556 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1027 19:40:16.575215  585556 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21801-352833/.minikube CaCertPath:/home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21801-352833/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21801-352833/.minikube}
	I1027 19:40:16.575242  585556 ubuntu.go:190] setting up certificates
	I1027 19:40:16.575258  585556 provision.go:84] configureAuth start
	I1027 19:40:16.575321  585556 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-095885
	I1027 19:40:16.594445  585556 provision.go:143] copyHostCerts
	I1027 19:40:16.594528  585556 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-352833/.minikube/ca.pem, removing ...
	I1027 19:40:16.594545  585556 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-352833/.minikube/ca.pem
	I1027 19:40:16.594633  585556 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21801-352833/.minikube/ca.pem (1078 bytes)
	I1027 19:40:16.594750  585556 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-352833/.minikube/cert.pem, removing ...
	I1027 19:40:16.594764  585556 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-352833/.minikube/cert.pem
	I1027 19:40:16.594805  585556 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21801-352833/.minikube/cert.pem (1123 bytes)
	I1027 19:40:16.594884  585556 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-352833/.minikube/key.pem, removing ...
	I1027 19:40:16.594897  585556 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-352833/.minikube/key.pem
	I1027 19:40:16.594932  585556 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21801-352833/.minikube/key.pem (1679 bytes)
	I1027 19:40:16.595002  585556 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21801-352833/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca-key.pem org=jenkins.no-preload-095885 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-095885]
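configureAuth regenerates the docker-machine style server certificate with exactly the SAN list printed above. minikube does this in Go, but the result can be checked by hand with openssl 1.1.1+ (path from the provision line above):

	openssl x509 -noout -subject -ext subjectAltName \
	  -in /home/jenkins/minikube-integration/21801-352833/.minikube/machines/server.pem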
	I1027 19:40:16.823099  585556 provision.go:177] copyRemoteCerts
	I1027 19:40:16.823166  585556 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1027 19:40:16.823206  585556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-095885
	I1027 19:40:16.843639  585556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/no-preload-095885/id_rsa Username:docker}
	I1027 19:40:16.945191  585556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1027 19:40:16.965996  585556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1027 19:40:16.985655  585556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1027 19:40:17.005512  585556 provision.go:87] duration metric: took 430.232528ms to configureAuth
	I1027 19:40:17.005545  585556 ubuntu.go:206] setting minikube options for container-runtime
	I1027 19:40:17.005787  585556 config.go:182] Loaded profile config "no-preload-095885": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:40:17.005906  585556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-095885
	I1027 19:40:17.026801  585556 main.go:141] libmachine: Using SSH client type: native
	I1027 19:40:17.027034  585556 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33440 <nil> <nil>}
	I1027 19:40:17.027050  585556 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1027 19:40:17.304403  585556 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1027 19:40:17.304432  585556 machine.go:96] duration metric: took 1.238754224s to provisionDockerMachine
	I1027 19:40:17.304445  585556 client.go:171] duration metric: took 3.08842976s to LocalClient.Create
	I1027 19:40:17.304471  585556 start.go:167] duration metric: took 3.088497976s to libmachine.API.Create "no-preload-095885"
	I1027 19:40:17.304484  585556 start.go:293] postStartSetup for "no-preload-095885" (driver="docker")
	I1027 19:40:17.304517  585556 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1027 19:40:17.304603  585556 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1027 19:40:17.304652  585556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-095885
	I1027 19:40:17.325993  585556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/no-preload-095885/id_rsa Username:docker}
	I1027 19:40:17.434871  585556 ssh_runner.go:195] Run: cat /etc/os-release
	I1027 19:40:17.439429  585556 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1027 19:40:17.439472  585556 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1027 19:40:17.439486  585556 filesync.go:126] Scanning /home/jenkins/minikube-integration/21801-352833/.minikube/addons for local assets ...
	I1027 19:40:17.439549  585556 filesync.go:126] Scanning /home/jenkins/minikube-integration/21801-352833/.minikube/files for local assets ...
	I1027 19:40:17.439653  585556 filesync.go:149] local asset: /home/jenkins/minikube-integration/21801-352833/.minikube/files/etc/ssl/certs/3564152.pem -> 3564152.pem in /etc/ssl/certs
	I1027 19:40:17.439774  585556 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1027 19:40:17.448871  585556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/files/etc/ssl/certs/3564152.pem --> /etc/ssl/certs/3564152.pem (1708 bytes)
	I1027 19:40:17.473830  585556 start.go:296] duration metric: took 169.326801ms for postStartSetup
	I1027 19:40:17.474272  585556 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-095885
	I1027 19:40:17.496611  585556 profile.go:143] Saving config to /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/no-preload-095885/config.json ...
	I1027 19:40:17.496961  585556 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 19:40:17.497020  585556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-095885
	I1027 19:40:17.520166  585556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/no-preload-095885/id_rsa Username:docker}
	I1027 19:40:17.627799  585556 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1027 19:40:17.633934  585556 start.go:128] duration metric: took 3.421138975s to createHost
	I1027 19:40:17.633966  585556 start.go:83] releasing machines lock for "no-preload-095885", held for 3.421299172s
	I1027 19:40:17.634047  585556 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-095885
	I1027 19:40:17.654043  585556 ssh_runner.go:195] Run: cat /version.json
	I1027 19:40:17.654105  585556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-095885
	I1027 19:40:17.654176  585556 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1027 19:40:17.654278  585556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-095885
	I1027 19:40:17.674598  585556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/no-preload-095885/id_rsa Username:docker}
	I1027 19:40:17.676747  585556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33440 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/no-preload-095885/id_rsa Username:docker}
	I1027 19:40:17.851543  585556 ssh_runner.go:195] Run: systemctl --version
	I1027 19:40:17.858416  585556 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1027 19:40:17.901218  585556 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1027 19:40:17.907418  585556 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1027 19:40:17.907484  585556 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1027 19:40:17.939419  585556 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1027 19:40:17.939447  585556 start.go:495] detecting cgroup driver to use...
	I1027 19:40:17.939487  585556 detect.go:190] detected "systemd" cgroup driver on host os
	I1027 19:40:17.939546  585556 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1027 19:40:17.956670  585556 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1027 19:40:17.969903  585556 docker.go:218] disabling cri-docker service (if available) ...
	I1027 19:40:17.969981  585556 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1027 19:40:17.987961  585556 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1027 19:40:18.007593  585556 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1027 19:40:18.118330  585556 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1027 19:40:18.215527  585556 docker.go:234] disabling docker service ...
	I1027 19:40:18.215612  585556 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1027 19:40:18.236606  585556 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1027 19:40:18.250411  585556 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1027 19:40:18.354314  585556 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1027 19:40:18.449154  585556 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1027 19:40:18.462286  585556 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1027 19:40:18.478159  585556 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1027 19:40:18.478220  585556 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:40:18.493052  585556 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1027 19:40:18.493112  585556 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:40:18.507409  585556 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:40:18.517585  585556 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:40:18.528077  585556 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1027 19:40:18.536942  585556 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:40:18.547367  585556 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:40:18.563962  585556 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:40:18.576027  585556 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1027 19:40:18.591165  585556 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1027 19:40:18.606726  585556 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 19:40:18.742390  585556 ssh_runner.go:195] Run: sudo systemctl restart crio
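After that run of sed edits, /etc/crio/crio.conf.d/02-crio.conf should carry these overrides (reconstructed from the commands above, not captured from the node):

	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]

and the `systemctl restart crio` above is what the next lines wait on via the socket path.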
	I1027 19:40:18.890780  585556 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1027 19:40:18.890859  585556 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1027 19:40:18.895899  585556 start.go:563] Will wait 60s for crictl version
	I1027 19:40:18.896073  585556 ssh_runner.go:195] Run: which crictl
	I1027 19:40:18.901219  585556 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1027 19:40:18.934666  585556 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1027 19:40:18.934766  585556 ssh_runner.go:195] Run: crio --version
	I1027 19:40:18.972994  585556 ssh_runner.go:195] Run: crio --version
	I1027 19:40:19.006118  585556 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1027 19:40:18.611678  579549 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 19:40:18.611696  579549 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1027 19:40:18.611745  579549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-919237
	I1027 19:40:18.640349  579549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33430 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/embed-certs-919237/id_rsa Username:docker}
	I1027 19:40:18.644197  579549 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1027 19:40:18.644222  579549 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1027 19:40:18.644292  579549 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-919237
	I1027 19:40:18.676752  579549 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33430 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/embed-certs-919237/id_rsa Username:docker}
	I1027 19:40:18.697711  579549 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1027 19:40:18.763340  579549 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 19:40:18.775653  579549 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 19:40:18.799101  579549 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1027 19:40:18.914238  579549 start.go:976] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
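The sed pipeline above rewrites the coredns Corefile in place; reconstructed from the sed expressions, the injected fragment is a hosts plugin immediately before the forward stanza (plus a `log` directive ahead of `errors`):

	hosts {
	   192.168.94.1 host.minikube.internal
	   fallthrough
	}

so host.minikube.internal resolves to the network gateway while everything else falls through to /etc/resolv.conf.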
	I1027 19:40:18.916841  579549 node_ready.go:35] waiting up to 6m0s for node "embed-certs-919237" to be "Ready" ...
	I1027 19:40:19.132801  579549 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
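The node_ready wait above has a plain kubectl equivalent, useful when reproducing outside the harness (kubeconfig path from the settings lines earlier):

	kubectl --kubeconfig=/home/jenkins/minikube-integration/21801-352833/kubeconfig \
	  wait --for=condition=Ready node/embed-certs-919237 --timeout=6m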
	I1027 19:40:17.872984  584758 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1027 19:40:17.878930  584758 fix.go:56] duration metric: took 4.859605788s for fixHost
	I1027 19:40:17.878975  584758 start.go:83] releasing machines lock for "old-k8s-version-468959", held for 4.859679885s
	I1027 19:40:17.879074  584758 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-468959
	I1027 19:40:17.901800  584758 ssh_runner.go:195] Run: cat /version.json
	I1027 19:40:17.901849  584758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-468959
	I1027 19:40:17.901890  584758 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1027 19:40:17.901981  584758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-468959
	I1027 19:40:17.921851  584758 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/old-k8s-version-468959/id_rsa Username:docker}
	I1027 19:40:17.922798  584758 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/old-k8s-version-468959/id_rsa Username:docker}
	I1027 19:40:18.110837  584758 ssh_runner.go:195] Run: systemctl --version
	I1027 19:40:18.118568  584758 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1027 19:40:18.162863  584758 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1027 19:40:18.169595  584758 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1027 19:40:18.169679  584758 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1027 19:40:18.178328  584758 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1027 19:40:18.178351  584758 start.go:495] detecting cgroup driver to use...
	I1027 19:40:18.178382  584758 detect.go:190] detected "systemd" cgroup driver on host os
	I1027 19:40:18.178424  584758 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1027 19:40:18.193698  584758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1027 19:40:18.206752  584758 docker.go:218] disabling cri-docker service (if available) ...
	I1027 19:40:18.206813  584758 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1027 19:40:18.223347  584758 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1027 19:40:18.237500  584758 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1027 19:40:18.337819  584758 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1027 19:40:18.428815  584758 docker.go:234] disabling docker service ...
	I1027 19:40:18.428910  584758 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1027 19:40:18.444998  584758 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1027 19:40:18.458510  584758 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1027 19:40:18.551188  584758 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1027 19:40:18.680625  584758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1027 19:40:18.696458  584758 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1027 19:40:18.715768  584758 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1027 19:40:18.715837  584758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:40:18.729776  584758 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1027 19:40:18.729851  584758 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:40:18.742792  584758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:40:18.754608  584758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:40:18.767107  584758 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1027 19:40:18.781597  584758 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:40:18.795778  584758 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:40:18.808744  584758 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:40:18.822298  584758 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1027 19:40:18.832677  584758 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1027 19:40:18.841731  584758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 19:40:18.960712  584758 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1027 19:40:19.096508  584758 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1027 19:40:19.096591  584758 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1027 19:40:19.101269  584758 start.go:563] Will wait 60s for crictl version
	I1027 19:40:19.101335  584758 ssh_runner.go:195] Run: which crictl
	I1027 19:40:19.106055  584758 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1027 19:40:19.139066  584758 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1027 19:40:19.139164  584758 ssh_runner.go:195] Run: crio --version
	I1027 19:40:19.171094  584758 ssh_runner.go:195] Run: crio --version
	I1027 19:40:19.221100  584758 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	I1027 19:40:19.134258  579549 addons.go:514] duration metric: took 563.773619ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1027 19:40:19.420875  579549 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-919237" context rescaled to 1 replicas
	I1027 19:40:19.222652  584758 cli_runner.go:164] Run: docker network inspect old-k8s-version-468959 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 19:40:19.248528  584758 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1027 19:40:19.253819  584758 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 19:40:19.269178  584758 kubeadm.go:883] updating cluster {Name:old-k8s-version-468959 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-468959 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1027 19:40:19.269315  584758 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1027 19:40:19.269370  584758 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 19:40:19.314234  584758 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 19:40:19.314265  584758 crio.go:433] Images already preloaded, skipping extraction
	I1027 19:40:19.314326  584758 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 19:40:19.358854  584758 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 19:40:19.358895  584758 cache_images.go:85] Images are preloaded, skipping loading
	I1027 19:40:19.358906  584758 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.28.0 crio true true} ...
	I1027 19:40:19.359047  584758 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-468959 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-468959 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1027 19:40:19.359347  584758 ssh_runner.go:195] Run: crio config
	I1027 19:40:19.441719  584758 cni.go:84] Creating CNI manager for ""
	I1027 19:40:19.441747  584758 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 19:40:19.441772  584758 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1027 19:40:19.441813  584758 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-468959 NodeName:old-k8s-version-468959 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1027 19:40:19.441994  584758 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-468959"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
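The rendered config above is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. Assuming kubeadm sits alongside the other pinned binaries (an assumption; only kubectl and kubelet appear in this excerpt), one way to sanity-check such a file by hand is a dry run:

	sudo /var/lib/minikube/binaries/v1.28.0/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml.new --dry-run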
	I1027 19:40:19.442075  584758 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1027 19:40:19.453695  584758 binaries.go:44] Found k8s binaries, skipping transfer
	I1027 19:40:19.453781  584758 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1027 19:40:19.465523  584758 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1027 19:40:19.483441  584758 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1027 19:40:19.505754  584758 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I1027 19:40:19.529318  584758 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1027 19:40:19.534743  584758 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 19:40:19.548022  584758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 19:40:19.666505  584758 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 19:40:19.691414  584758 certs.go:69] Setting up /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/old-k8s-version-468959 for IP: 192.168.85.2
	I1027 19:40:19.691444  584758 certs.go:195] generating shared ca certs ...
	I1027 19:40:19.691476  584758 certs.go:227] acquiring lock for ca certs: {Name:mk4bdbca32068f6f817fc35fdc496e961dc3e0d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:40:19.691669  584758 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21801-352833/.minikube/ca.key
	I1027 19:40:19.691755  584758 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21801-352833/.minikube/proxy-client-ca.key
	I1027 19:40:19.691777  584758 certs.go:257] generating profile certs ...
	I1027 19:40:19.691952  584758 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/old-k8s-version-468959/client.key
	I1027 19:40:19.692044  584758 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/old-k8s-version-468959/apiserver.key.1a853fdc
	I1027 19:40:19.692100  584758 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/old-k8s-version-468959/proxy-client.key
	I1027 19:40:19.692257  584758 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/356415.pem (1338 bytes)
	W1027 19:40:19.692312  584758 certs.go:480] ignoring /home/jenkins/minikube-integration/21801-352833/.minikube/certs/356415_empty.pem, impossibly tiny 0 bytes
	I1027 19:40:19.692326  584758 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca-key.pem (1675 bytes)
	I1027 19:40:19.692361  584758 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca.pem (1078 bytes)
	I1027 19:40:19.692399  584758 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/cert.pem (1123 bytes)
	I1027 19:40:19.692441  584758 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/key.pem (1679 bytes)
	I1027 19:40:19.692511  584758 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-352833/.minikube/files/etc/ssl/certs/3564152.pem (1708 bytes)
	I1027 19:40:19.693325  584758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1027 19:40:19.724016  584758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1027 19:40:19.754944  584758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1027 19:40:19.782986  584758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1027 19:40:19.815898  584758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/old-k8s-version-468959/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1027 19:40:19.852667  584758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/old-k8s-version-468959/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1027 19:40:19.891348  584758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/old-k8s-version-468959/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1027 19:40:19.923757  584758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/old-k8s-version-468959/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1027 19:40:19.952271  584758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/files/etc/ssl/certs/3564152.pem --> /usr/share/ca-certificates/3564152.pem (1708 bytes)
	I1027 19:40:19.976986  584758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1027 19:40:20.049891  584758 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/certs/356415.pem --> /usr/share/ca-certificates/356415.pem (1338 bytes)
	I1027 19:40:20.073473  584758 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1027 19:40:20.090180  584758 ssh_runner.go:195] Run: openssl version
	I1027 19:40:20.098113  584758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1027 19:40:20.108735  584758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1027 19:40:20.113885  584758 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 27 18:56 /usr/share/ca-certificates/minikubeCA.pem
	I1027 19:40:20.113965  584758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1027 19:40:20.154322  584758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1027 19:40:20.165038  584758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/356415.pem && ln -fs /usr/share/ca-certificates/356415.pem /etc/ssl/certs/356415.pem"
	I1027 19:40:20.179545  584758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/356415.pem
	I1027 19:40:20.185457  584758 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 27 19:02 /usr/share/ca-certificates/356415.pem
	I1027 19:40:20.185543  584758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/356415.pem
	I1027 19:40:20.239865  584758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/356415.pem /etc/ssl/certs/51391683.0"
	I1027 19:40:20.252173  584758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3564152.pem && ln -fs /usr/share/ca-certificates/3564152.pem /etc/ssl/certs/3564152.pem"
	I1027 19:40:20.270426  584758 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3564152.pem
	I1027 19:40:20.277328  584758 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 27 19:02 /usr/share/ca-certificates/3564152.pem
	I1027 19:40:20.277405  584758 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3564152.pem
	I1027 19:40:20.331716  584758 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3564152.pem /etc/ssl/certs/3ec20f2e.0"
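The hash-then-symlink pairs above implement OpenSSL's hashed-directory lookup: a CA is resolved by the hash of its subject name plus a ".0" suffix (b5213941, 51391683 and 3ec20f2e here are exactly those hashes). A sketch that derives the link name instead of hard-coding it:

	PEM=/usr/share/ca-certificates/minikubeCA.pem
	# -hash prints the subject-name hash OpenSSL uses for directory lookup
	HASH=$(openssl x509 -hash -noout -in "$PEM")
	sudo ln -fs "$PEM" "/etc/ssl/certs/${HASH}.0"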
	I1027 19:40:20.343575  584758 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1027 19:40:20.349796  584758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1027 19:40:20.410905  584758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1027 19:40:20.475180  584758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1027 19:40:20.534759  584758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1027 19:40:20.591724  584758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1027 19:40:20.641820  584758 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
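Each -checkend 86400 call above asks whether the certificate expires within the next 86400 seconds (24 hours); openssl exits non-zero if so, and that exit status is what the caller keys off. In isolation:

	crt=/var/lib/minikube/certs/apiserver-kubelet-client.crt
	# non-zero exit means the cert lapses within the window
	openssl x509 -noout -in "$crt" -checkend 86400 \
	  || echo "$crt expires within 24h; regenerate it"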
	I1027 19:40:20.700819  584758 kubeadm.go:400] StartCluster: {Name:old-k8s-version-468959 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-468959 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 19:40:20.700929  584758 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 19:40:20.700994  584758 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 19:40:20.743631  584758 cri.go:89] found id: "bbf4fe7bcb1eef6c19d02157f5f9d45ada6d926195550b86406cb27a478cb520"
	I1027 19:40:20.743659  584758 cri.go:89] found id: "07e72855c00ee996d65390930e95dec1dbf22e238c37a44a46a98ed17c3b0651"
	I1027 19:40:20.743664  584758 cri.go:89] found id: "ef7e54548205b2d8355417aebc97fb016764235b2b1f28d56a8dd8368f3a58d8"
	I1027 19:40:20.743669  584758 cri.go:89] found id: "1415820809db89899722d08ef65bea69fc0e930dddf7cc3246da3d0cf8f8ca35"
	I1027 19:40:20.743673  584758 cri.go:89] found id: ""
	I1027 19:40:20.743725  584758 ssh_runner.go:195] Run: sudo runc list -f json
	W1027 19:40:20.759667  584758 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T19:40:20Z" level=error msg="open /run/runc: no such file or directory"
	I1027 19:40:20.759773  584758 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1027 19:40:20.771046  584758 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1027 19:40:20.771071  584758 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1027 19:40:20.771158  584758 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1027 19:40:20.781846  584758 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1027 19:40:20.783067  584758 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-468959" does not appear in /home/jenkins/minikube-integration/21801-352833/kubeconfig
	I1027 19:40:20.784217  584758 kubeconfig.go:62] /home/jenkins/minikube-integration/21801-352833/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-468959" cluster setting kubeconfig missing "old-k8s-version-468959" context setting]
	I1027 19:40:20.785748  584758 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-352833/kubeconfig: {Name:mk24cbe512a6907c874f3fb7a85390a8f9fd2b31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
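The repair above re-adds only the missing cluster and context entries rather than regenerating the kubeconfig. A hypothetical kubectl equivalent (cluster name and server address are from this run; the CA and kubeconfig paths are illustrative):

	kubectl config set-cluster old-k8s-version-468959 \
	  --server=https://192.168.85.2:8443 \
	  --certificate-authority="$HOME/.minikube/ca.crt" --embed-certs \
	  --kubeconfig="$HOME/.kube/config"
	kubectl config set-context old-k8s-version-468959 \
	  --cluster=old-k8s-version-468959 --user=old-k8s-version-468959 \
	  --kubeconfig="$HOME/.kube/config"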
	I1027 19:40:20.788399  584758 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1027 19:40:20.799667  584758 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1027 19:40:20.799710  584758 kubeadm.go:601] duration metric: took 28.632586ms to restartPrimaryControlPlane
	I1027 19:40:20.799727  584758 kubeadm.go:402] duration metric: took 98.913067ms to StartCluster
	I1027 19:40:20.799749  584758 settings.go:142] acquiring lock: {Name:mk8304c2106bf310642e0949fc0266ccb50f2f95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:40:20.799831  584758 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21801-352833/kubeconfig
	I1027 19:40:20.802081  584758 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-352833/kubeconfig: {Name:mk24cbe512a6907c874f3fb7a85390a8f9fd2b31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:40:20.802462  584758 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 19:40:20.802735  584758 config.go:182] Loaded profile config "old-k8s-version-468959": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1027 19:40:20.802787  584758 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1027 19:40:20.802882  584758 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-468959"
	I1027 19:40:20.802902  584758 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-468959"
	W1027 19:40:20.802912  584758 addons.go:247] addon storage-provisioner should already be in state true
	I1027 19:40:20.802940  584758 host.go:66] Checking if "old-k8s-version-468959" exists ...
	I1027 19:40:20.803446  584758 addons.go:69] Setting dashboard=true in profile "old-k8s-version-468959"
	I1027 19:40:20.803468  584758 addons.go:238] Setting addon dashboard=true in "old-k8s-version-468959"
	W1027 19:40:20.803476  584758 addons.go:247] addon dashboard should already be in state true
	I1027 19:40:20.803478  584758 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-468959"
	I1027 19:40:20.803506  584758 host.go:66] Checking if "old-k8s-version-468959" exists ...
	I1027 19:40:20.803506  584758 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-468959"
	I1027 19:40:20.803859  584758 cli_runner.go:164] Run: docker container inspect old-k8s-version-468959 --format={{.State.Status}}
	I1027 19:40:20.804202  584758 cli_runner.go:164] Run: docker container inspect old-k8s-version-468959 --format={{.State.Status}}
	I1027 19:40:20.805204  584758 cli_runner.go:164] Run: docker container inspect old-k8s-version-468959 --format={{.State.Status}}
	I1027 19:40:20.806092  584758 out.go:179] * Verifying Kubernetes components...
	I1027 19:40:20.807816  584758 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 19:40:20.836918  584758 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-468959"
	W1027 19:40:20.836947  584758 addons.go:247] addon default-storageclass should already be in state true
	I1027 19:40:20.836978  584758 host.go:66] Checking if "old-k8s-version-468959" exists ...
	I1027 19:40:20.838927  584758 cli_runner.go:164] Run: docker container inspect old-k8s-version-468959 --format={{.State.Status}}
	I1027 19:40:20.842657  584758 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 19:40:20.842657  584758 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1027 19:40:20.844254  584758 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 19:40:20.844311  584758 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1027 19:40:20.844427  584758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-468959
	I1027 19:40:20.844332  584758 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
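Every addon in this block follows the same two-step pattern: render the manifest in memory, scp it to /etc/kubernetes/addons/ on the node, then apply it with the pinned kubectl binary against the in-node kubeconfig. Reduced to a shell sketch:

	# stage the manifest where minikube's addon manager expects it
	sudo cp storage-provisioner.yaml /etc/kubernetes/addons/storage-provisioner.yaml
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.28.0/kubectl apply \
	  -f /etc/kubernetes/addons/storage-provisioner.yaml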
	I1027 19:40:17.245598  565798 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1027 19:40:17.246083  565798 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1027 19:40:17.246159  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 19:40:17.246211  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 19:40:17.277096  565798 cri.go:89] found id: "047d3a4a3e1e5638984a05fc7ebff787c5c5c7f381d978e93c663acb37994b72"
	I1027 19:40:17.277120  565798 cri.go:89] found id: ""
	I1027 19:40:17.277131  565798 logs.go:282] 1 containers: [047d3a4a3e1e5638984a05fc7ebff787c5c5c7f381d978e93c663acb37994b72]
	I1027 19:40:17.277242  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:40:17.281796  565798 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 19:40:17.281871  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 19:40:17.314636  565798 cri.go:89] found id: ""
	I1027 19:40:17.314668  565798 logs.go:282] 0 containers: []
	W1027 19:40:17.314680  565798 logs.go:284] No container was found matching "etcd"
	I1027 19:40:17.314688  565798 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 19:40:17.314769  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 19:40:17.346318  565798 cri.go:89] found id: ""
	I1027 19:40:17.346350  565798 logs.go:282] 0 containers: []
	W1027 19:40:17.346359  565798 logs.go:284] No container was found matching "coredns"
	I1027 19:40:17.346364  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 19:40:17.346409  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 19:40:17.375789  565798 cri.go:89] found id: "15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8"
	I1027 19:40:17.375812  565798 cri.go:89] found id: ""
	I1027 19:40:17.375824  565798 logs.go:282] 1 containers: [15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8]
	I1027 19:40:17.375894  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:40:17.380118  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 19:40:17.380219  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 19:40:17.409481  565798 cri.go:89] found id: ""
	I1027 19:40:17.409511  565798 logs.go:282] 0 containers: []
	W1027 19:40:17.409521  565798 logs.go:284] No container was found matching "kube-proxy"
	I1027 19:40:17.409529  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 19:40:17.409594  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 19:40:17.439440  565798 cri.go:89] found id: "df060ac929bc7a5dac337c7e85e10b2f4a51413be70b8202c8307826c4a72947"
	I1027 19:40:17.439459  565798 cri.go:89] found id: ""
	I1027 19:40:17.439468  565798 logs.go:282] 1 containers: [df060ac929bc7a5dac337c7e85e10b2f4a51413be70b8202c8307826c4a72947]
	I1027 19:40:17.439524  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:40:17.443728  565798 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 19:40:17.443792  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 19:40:17.479886  565798 cri.go:89] found id: ""
	I1027 19:40:17.479912  565798 logs.go:282] 0 containers: []
	W1027 19:40:17.479941  565798 logs.go:284] No container was found matching "kindnet"
	I1027 19:40:17.479950  565798 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 19:40:17.480007  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 19:40:17.516640  565798 cri.go:89] found id: ""
	I1027 19:40:17.516668  565798 logs.go:282] 0 containers: []
	W1027 19:40:17.516679  565798 logs.go:284] No container was found matching "storage-provisioner"
	I1027 19:40:17.516692  565798 logs.go:123] Gathering logs for CRI-O ...
	I1027 19:40:17.516710  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 19:40:17.572996  565798 logs.go:123] Gathering logs for container status ...
	I1027 19:40:17.573103  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 19:40:17.610014  565798 logs.go:123] Gathering logs for kubelet ...
	I1027 19:40:17.610047  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 19:40:17.691973  565798 logs.go:123] Gathering logs for dmesg ...
	I1027 19:40:17.692005  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 19:40:17.714313  565798 logs.go:123] Gathering logs for describe nodes ...
	I1027 19:40:17.714344  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 19:40:17.775095  565798 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 19:40:17.775121  565798 logs.go:123] Gathering logs for kube-apiserver [047d3a4a3e1e5638984a05fc7ebff787c5c5c7f381d978e93c663acb37994b72] ...
	I1027 19:40:17.775152  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 047d3a4a3e1e5638984a05fc7ebff787c5c5c7f381d978e93c663acb37994b72"
	I1027 19:40:17.812854  565798 logs.go:123] Gathering logs for kube-scheduler [15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8] ...
	I1027 19:40:17.812902  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8"
	I1027 19:40:17.863031  565798 logs.go:123] Gathering logs for kube-controller-manager [df060ac929bc7a5dac337c7e85e10b2f4a51413be70b8202c8307826c4a72947] ...
	I1027 19:40:17.863086  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 df060ac929bc7a5dac337c7e85e10b2f4a51413be70b8202c8307826c4a72947"
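Process 565798 is in a retry loop: probe /healthz, and on connection-refused gather component logs before the next attempt (the 19:40:17, 19:40:20 and 19:40:23 cycles in this section are three such iterations). The probe itself reduces to (a sketch; -k because the apiserver certificate is not in the local trust store):

	until curl -fsk --max-time 2 https://192.168.103.2:8443/healthz; do
	  sleep 3   # each failed attempt triggers another round of log gathering
	done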
	I1027 19:40:20.393243  565798 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1027 19:40:20.393769  565798 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1027 19:40:20.393831  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 19:40:20.393888  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 19:40:20.445756  565798 cri.go:89] found id: "047d3a4a3e1e5638984a05fc7ebff787c5c5c7f381d978e93c663acb37994b72"
	I1027 19:40:20.445789  565798 cri.go:89] found id: ""
	I1027 19:40:20.445800  565798 logs.go:282] 1 containers: [047d3a4a3e1e5638984a05fc7ebff787c5c5c7f381d978e93c663acb37994b72]
	I1027 19:40:20.445868  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:40:20.451788  565798 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 19:40:20.451977  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 19:40:20.492149  565798 cri.go:89] found id: ""
	I1027 19:40:20.492185  565798 logs.go:282] 0 containers: []
	W1027 19:40:20.492197  565798 logs.go:284] No container was found matching "etcd"
	I1027 19:40:20.492209  565798 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 19:40:20.492283  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 19:40:20.528990  565798 cri.go:89] found id: ""
	I1027 19:40:20.529021  565798 logs.go:282] 0 containers: []
	W1027 19:40:20.529033  565798 logs.go:284] No container was found matching "coredns"
	I1027 19:40:20.529041  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 19:40:20.529104  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 19:40:20.570625  565798 cri.go:89] found id: "15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8"
	I1027 19:40:20.570653  565798 cri.go:89] found id: ""
	I1027 19:40:20.570665  565798 logs.go:282] 1 containers: [15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8]
	I1027 19:40:20.570802  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:40:20.577167  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 19:40:20.577334  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 19:40:20.625279  565798 cri.go:89] found id: ""
	I1027 19:40:20.625307  565798 logs.go:282] 0 containers: []
	W1027 19:40:20.625319  565798 logs.go:284] No container was found matching "kube-proxy"
	I1027 19:40:20.625327  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 19:40:20.625387  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 19:40:20.665556  565798 cri.go:89] found id: "df060ac929bc7a5dac337c7e85e10b2f4a51413be70b8202c8307826c4a72947"
	I1027 19:40:20.665590  565798 cri.go:89] found id: ""
	I1027 19:40:20.665602  565798 logs.go:282] 1 containers: [df060ac929bc7a5dac337c7e85e10b2f4a51413be70b8202c8307826c4a72947]
	I1027 19:40:20.665671  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:40:20.670564  565798 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 19:40:20.670656  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 19:40:20.708379  565798 cri.go:89] found id: ""
	I1027 19:40:20.708411  565798 logs.go:282] 0 containers: []
	W1027 19:40:20.708422  565798 logs.go:284] No container was found matching "kindnet"
	I1027 19:40:20.708429  565798 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 19:40:20.708489  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 19:40:20.750649  565798 cri.go:89] found id: ""
	I1027 19:40:20.750681  565798 logs.go:282] 0 containers: []
	W1027 19:40:20.750692  565798 logs.go:284] No container was found matching "storage-provisioner"
	I1027 19:40:20.750705  565798 logs.go:123] Gathering logs for describe nodes ...
	I1027 19:40:20.750722  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 19:40:20.860559  565798 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 19:40:20.860664  565798 logs.go:123] Gathering logs for kube-apiserver [047d3a4a3e1e5638984a05fc7ebff787c5c5c7f381d978e93c663acb37994b72] ...
	I1027 19:40:20.860692  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 047d3a4a3e1e5638984a05fc7ebff787c5c5c7f381d978e93c663acb37994b72"
	I1027 19:40:20.932020  565798 logs.go:123] Gathering logs for kube-scheduler [15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8] ...
	I1027 19:40:20.932103  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8"
	I1027 19:40:21.013243  565798 logs.go:123] Gathering logs for kube-controller-manager [df060ac929bc7a5dac337c7e85e10b2f4a51413be70b8202c8307826c4a72947] ...
	I1027 19:40:21.013290  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 df060ac929bc7a5dac337c7e85e10b2f4a51413be70b8202c8307826c4a72947"
	I1027 19:40:21.065285  565798 logs.go:123] Gathering logs for CRI-O ...
	I1027 19:40:21.065334  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 19:40:21.150919  565798 logs.go:123] Gathering logs for container status ...
	I1027 19:40:21.151031  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 19:40:21.212662  565798 logs.go:123] Gathering logs for kubelet ...
	I1027 19:40:21.212699  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 19:40:21.335956  565798 logs.go:123] Gathering logs for dmesg ...
	I1027 19:40:21.336058  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 19:40:20.846615  584758 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1027 19:40:20.846683  584758 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1027 19:40:20.846778  584758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-468959
	I1027 19:40:20.894351  584758 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1027 19:40:20.897180  584758 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1027 19:40:20.897221  584758 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/old-k8s-version-468959/id_rsa Username:docker}
	I1027 19:40:20.897277  584758 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-468959
	I1027 19:40:20.899751  584758 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/old-k8s-version-468959/id_rsa Username:docker}
	I1027 19:40:20.935404  584758 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/old-k8s-version-468959/id_rsa Username:docker}
	I1027 19:40:21.021539  584758 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 19:40:21.049526  584758 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-468959" to be "Ready" ...
	I1027 19:40:21.052449  584758 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 19:40:21.087589  584758 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1027 19:40:21.087661  584758 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1027 19:40:21.090857  584758 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1027 19:40:21.120226  584758 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1027 19:40:21.120406  584758 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1027 19:40:21.172448  584758 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1027 19:40:21.172479  584758 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1027 19:40:21.211515  584758 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1027 19:40:21.211547  584758 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1027 19:40:21.258450  584758 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1027 19:40:21.258483  584758 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1027 19:40:21.286779  584758 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1027 19:40:21.286811  584758 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1027 19:40:21.309100  584758 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1027 19:40:21.309129  584758 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1027 19:40:21.330763  584758 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1027 19:40:21.330946  584758 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1027 19:40:21.349536  584758 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1027 19:40:21.349567  584758 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1027 19:40:21.369569  584758 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
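The single apply above passes all ten dashboard manifests in one invocation, so ordering and retries are handled in one place. Pointing apply at the directory behaves the same (a sketch; it also re-applies the storage manifests already staged in that directory, which is harmless because apply is idempotent):

	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/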
	I1027 19:40:22.782171  584758 node_ready.go:49] node "old-k8s-version-468959" is "Ready"
	I1027 19:40:22.782214  584758 node_ready.go:38] duration metric: took 1.732622832s for node "old-k8s-version-468959" to be "Ready" ...
	I1027 19:40:22.782234  584758 api_server.go:52] waiting for apiserver process to appear ...
	I1027 19:40:22.782297  584758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 19:40:19.008801  585556 cli_runner.go:164] Run: docker network inspect no-preload-095885 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
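The --format argument above is a Go template that hand-assembles a JSON document, iterating over .IPAM.Config and .Containers. When reading by eye, letting docker serialize the fields is simpler (a sketch):

	docker network inspect no-preload-095885 --format '{{json .IPAM.Config}}'
	docker network inspect no-preload-095885 --format '{{json .Containers}}'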
	I1027 19:40:19.029414  585556 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1027 19:40:19.034658  585556 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 19:40:19.047262  585556 kubeadm.go:883] updating cluster {Name:no-preload-095885 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-095885 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1027 19:40:19.047412  585556 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 19:40:19.047449  585556 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 19:40:19.079455  585556 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1027 19:40:19.079486  585556 cache_images.go:89] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1027 19:40:19.079595  585556 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 19:40:19.079617  585556 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1027 19:40:19.079617  585556 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1027 19:40:19.079652  585556 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1027 19:40:19.079604  585556 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1027 19:40:19.079755  585556 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1027 19:40:19.079592  585556 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1027 19:40:19.079966  585556 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1027 19:40:19.081211  585556 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1027 19:40:19.081229  585556 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1027 19:40:19.081316  585556 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1027 19:40:19.081329  585556 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1027 19:40:19.081418  585556 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1027 19:40:19.081530  585556 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1027 19:40:19.081597  585556 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 19:40:19.081600  585556 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1027 19:40:19.205326  585556 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.34.1
	I1027 19:40:19.218795  585556 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.34.1
	I1027 19:40:19.221256  585556 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.4-0
	I1027 19:40:19.226307  585556 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.12.1
	I1027 19:40:19.244514  585556 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.34.1
	I1027 19:40:19.254282  585556 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.34.1
	I1027 19:40:19.256322  585556 cache_images.go:117] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813" in container runtime
	I1027 19:40:19.256378  585556 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1027 19:40:19.256426  585556 ssh_runner.go:195] Run: which crictl
	I1027 19:40:19.271023  585556 cache_images.go:117] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f" in container runtime
	I1027 19:40:19.271078  585556 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1027 19:40:19.271334  585556 ssh_runner.go:195] Run: which crictl
	I1027 19:40:19.276362  585556 cache_images.go:117] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115" in container runtime
	I1027 19:40:19.276422  585556 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1027 19:40:19.276477  585556 ssh_runner.go:195] Run: which crictl
	I1027 19:40:19.277060  585556 cache_images.go:117] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969" in container runtime
	I1027 19:40:19.277100  585556 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1027 19:40:19.277159  585556 ssh_runner.go:195] Run: which crictl
	I1027 19:40:19.293292  585556 cache_images.go:117] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7" in container runtime
	I1027 19:40:19.293350  585556 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1027 19:40:19.293425  585556 ssh_runner.go:195] Run: which crictl
	I1027 19:40:19.300167  585556 cache_images.go:117] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97" in container runtime
	I1027 19:40:19.300222  585556 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1027 19:40:19.300237  585556 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1027 19:40:19.300283  585556 ssh_runner.go:195] Run: which crictl
	I1027 19:40:19.300334  585556 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1027 19:40:19.300389  585556 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1027 19:40:19.300436  585556 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1027 19:40:19.300458  585556 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1027 19:40:19.340000  585556 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1027 19:40:19.340158  585556 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1027 19:40:19.343709  585556 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1027 19:40:19.344421  585556 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1027 19:40:19.374123  585556 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1027 19:40:19.387580  585556 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1027 19:40:19.387632  585556 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1027 19:40:19.387639  585556 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1027 19:40:19.387685  585556 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1027 19:40:19.387695  585556 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1027 19:40:19.393413  585556 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1027 19:40:19.437021  585556 cache_images.go:117] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1027 19:40:19.437196  585556 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1027 19:40:19.437264  585556 ssh_runner.go:195] Run: which crictl
	I1027 19:40:19.438531  585556 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1027 19:40:19.447014  585556 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1027 19:40:19.447041  585556 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1027 19:40:19.447051  585556 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21801-352833/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1027 19:40:19.447041  585556 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1027 19:40:19.447073  585556 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21801-352833/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0
	I1027 19:40:19.447099  585556 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21801-352833/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1
	I1027 19:40:19.447163  585556 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1027 19:40:19.447179  585556 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1027 19:40:19.447282  585556 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1027 19:40:19.473707  585556 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21801-352833/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1
	I1027 19:40:19.473813  585556 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1027 19:40:19.481707  585556 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21801-352833/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1
	I1027 19:40:19.481736  585556 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21801-352833/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1
	I1027 19:40:19.481815  585556 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1027 19:40:19.481822  585556 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1027 19:40:19.482505  585556 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1027 19:40:19.482533  585556 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1027 19:40:19.482556  585556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (74320896 bytes)
	I1027 19:40:19.482581  585556 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1027 19:40:19.482602  585556 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1027 19:40:19.482619  585556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (17396736 bytes)
	I1027 19:40:19.482619  585556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (25966080 bytes)
	I1027 19:40:19.482658  585556 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1027 19:40:19.482674  585556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (22831104 bytes)
	I1027 19:40:19.491813  585556 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1027 19:40:19.491862  585556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (22394368 bytes)
	I1027 19:40:19.495272  585556 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1027 19:40:19.495310  585556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (27073024 bytes)
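Each transfer above is gated on an existence check: stat exits with status 1 when the image archive is absent from the node, and only then is the cached copy pushed up. The pattern in isolation (the scp target "node:" is a hypothetical stand-in for minikube's own SSH session):

	img=/var/lib/minikube/images/etcd_3.6.4-0
	if ! stat -c "%s %y" "$img" >/dev/null 2>&1; then
	  # only copy when the archive is missing on the node
	  scp "$HOME/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" node:"$img"
	fi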
	I1027 19:40:19.548187  585556 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1027 19:40:19.559284  585556 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 19:40:19.662547  585556 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21801-352833/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1027 19:40:19.662714  585556 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1027 19:40:19.680319  585556 cache_images.go:117] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1027 19:40:19.680389  585556 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 19:40:19.680472  585556 ssh_runner.go:195] Run: which crictl
	I1027 19:40:19.713963  585556 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1027 19:40:19.714013  585556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1027 19:40:19.723340  585556 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 19:40:19.788120  585556 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1027 19:40:19.788213  585556 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1027 19:40:19.800206  585556 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 19:40:20.328333  585556 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21801-352833/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 from cache
	I1027 19:40:20.328433  585556 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 19:40:20.328476  585556 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1027 19:40:20.328526  585556 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1027 19:40:20.371670  585556 cache_images.go:290] Loading image from: /home/jenkins/minikube-integration/21801-352833/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1027 19:40:20.372202  585556 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1027 19:40:21.862160  585556 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1: (1.533581061s)
	I1027 19:40:21.862194  585556 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21801-352833/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1027 19:40:21.862227  585556 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1027 19:40:21.862292  585556 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1027 19:40:21.862230  585556 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.490000104s)
	I1027 19:40:21.862338  585556 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1027 19:40:21.862382  585556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1027 19:40:23.230416  585556 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.368086557s)
	I1027 19:40:23.230457  585556 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21801-352833/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1027 19:40:23.230489  585556 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1027 19:40:23.230543  585556 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1
	I1027 19:40:23.563410  584758 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.510915849s)
	I1027 19:40:23.563483  584758 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.472605118s)
	I1027 19:40:24.000248  584758 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.217914633s)
	I1027 19:40:24.000286  584758 api_server.go:72] duration metric: took 3.197783677s to wait for apiserver process to appear ...
	I1027 19:40:24.000293  584758 api_server.go:88] waiting for apiserver healthz status ...
	I1027 19:40:24.000323  584758 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1027 19:40:24.000700  584758 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.630486133s)
	I1027 19:40:24.002308  584758 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-468959 addons enable metrics-server
	
	I1027 19:40:24.003721  584758 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
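The addon lines above shell out to the bundled kubectl with KUBECONFIG pointed at the node and one -f flag per manifest (the dashboard alone applies ten files in a single invocation). A sketch of composing such a call, with an illustrative two-manifest list rather than minikube's actual addon registry:

// Sketch of how an addon's manifests are applied in one kubectl call.
// The manifest list is illustrative; paths mirror the log above.
package main

import (
	"os"
	"os/exec"
)

func applyAddon(kubectl, kubeconfig string, manifests []string) error {
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m) // one -f per manifest, as in the log
	}
	cmd := exec.Command(kubectl, args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	_ = applyAddon(
		"/var/lib/minikube/binaries/v1.28.0/kubectl",
		"/var/lib/minikube/kubeconfig",
		[]string{
			"/etc/kubernetes/addons/storage-provisioner.yaml",
			"/etc/kubernetes/addons/storageclass.yaml",
		},
	)
}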
	W1027 19:40:20.924730  579549 node_ready.go:57] node "embed-certs-919237" has "Ready":"False" status (will retry)
	W1027 19:40:23.420690  579549 node_ready.go:57] node "embed-certs-919237" has "Ready":"False" status (will retry)
	I1027 19:40:23.870553  565798 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1027 19:40:23.871028  565798 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1027 19:40:23.871097  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 19:40:23.871196  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 19:40:23.927358  565798 cri.go:89] found id: "047d3a4a3e1e5638984a05fc7ebff787c5c5c7f381d978e93c663acb37994b72"
	I1027 19:40:23.927411  565798 cri.go:89] found id: ""
	I1027 19:40:23.927423  565798 logs.go:282] 1 containers: [047d3a4a3e1e5638984a05fc7ebff787c5c5c7f381d978e93c663acb37994b72]
	I1027 19:40:23.927582  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:40:23.933025  565798 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 19:40:23.933114  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 19:40:23.982190  565798 cri.go:89] found id: ""
	I1027 19:40:23.982222  565798 logs.go:282] 0 containers: []
	W1027 19:40:23.982233  565798 logs.go:284] No container was found matching "etcd"
	I1027 19:40:23.982241  565798 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 19:40:23.982301  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 19:40:24.026244  565798 cri.go:89] found id: ""
	I1027 19:40:24.026293  565798 logs.go:282] 0 containers: []
	W1027 19:40:24.026304  565798 logs.go:284] No container was found matching "coredns"
	I1027 19:40:24.026313  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 19:40:24.026387  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 19:40:24.065328  565798 cri.go:89] found id: "15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8"
	I1027 19:40:24.065362  565798 cri.go:89] found id: ""
	I1027 19:40:24.065373  565798 logs.go:282] 1 containers: [15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8]
	I1027 19:40:24.065440  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:40:24.071072  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 19:40:24.071179  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 19:40:24.105670  565798 cri.go:89] found id: ""
	I1027 19:40:24.105699  565798 logs.go:282] 0 containers: []
	W1027 19:40:24.105710  565798 logs.go:284] No container was found matching "kube-proxy"
	I1027 19:40:24.105719  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 19:40:24.105784  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 19:40:24.141551  565798 cri.go:89] found id: "df060ac929bc7a5dac337c7e85e10b2f4a51413be70b8202c8307826c4a72947"
	I1027 19:40:24.141594  565798 cri.go:89] found id: ""
	I1027 19:40:24.141607  565798 logs.go:282] 1 containers: [df060ac929bc7a5dac337c7e85e10b2f4a51413be70b8202c8307826c4a72947]
	I1027 19:40:24.141671  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:40:24.146700  565798 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 19:40:24.146787  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 19:40:24.185161  565798 cri.go:89] found id: ""
	I1027 19:40:24.185194  565798 logs.go:282] 0 containers: []
	W1027 19:40:24.185206  565798 logs.go:284] No container was found matching "kindnet"
	I1027 19:40:24.185215  565798 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 19:40:24.185286  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 19:40:24.223075  565798 cri.go:89] found id: ""
	I1027 19:40:24.223110  565798 logs.go:282] 0 containers: []
	W1027 19:40:24.223121  565798 logs.go:284] No container was found matching "storage-provisioner"
	I1027 19:40:24.223151  565798 logs.go:123] Gathering logs for container status ...
	I1027 19:40:24.223169  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 19:40:24.264981  565798 logs.go:123] Gathering logs for kubelet ...
	I1027 19:40:24.265019  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 19:40:24.367130  565798 logs.go:123] Gathering logs for dmesg ...
	I1027 19:40:24.367201  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 19:40:24.392513  565798 logs.go:123] Gathering logs for describe nodes ...
	I1027 19:40:24.392557  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 19:40:24.467069  565798 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 19:40:24.467093  565798 logs.go:123] Gathering logs for kube-apiserver [047d3a4a3e1e5638984a05fc7ebff787c5c5c7f381d978e93c663acb37994b72] ...
	I1027 19:40:24.467110  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 047d3a4a3e1e5638984a05fc7ebff787c5c5c7f381d978e93c663acb37994b72"
	I1027 19:40:24.504702  565798 logs.go:123] Gathering logs for kube-scheduler [15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8] ...
	I1027 19:40:24.504735  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8"
	I1027 19:40:24.561360  565798 logs.go:123] Gathering logs for kube-controller-manager [df060ac929bc7a5dac337c7e85e10b2f4a51413be70b8202c8307826c4a72947] ...
	I1027 19:40:24.561399  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 df060ac929bc7a5dac337c7e85e10b2f4a51413be70b8202c8307826c4a72947"
	I1027 19:40:24.596566  565798 logs.go:123] Gathering logs for CRI-O ...
	I1027 19:40:24.596604  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
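The diagnostic sweep above repeats one recipe per control-plane component: crictl ps -a --quiet --name=<component> to collect container IDs, then crictl logs --tail 400 <id> for each hit. A compact sketch of that loop, assuming crictl is on PATH and sudo is available:

// Sketch of the per-component log sweep above: find container IDs by
// name filter, then tail each one, as logs.go does in the output above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager"}
	for _, c := range components {
		out, err := exec.Command("sudo", "crictl", "ps", "-a",
			"--quiet", "--name="+c).Output()
		if err != nil {
			continue
		}
		for _, id := range strings.Fields(string(out)) {
			// same shape as: crictl logs --tail 400 <id>
			logs, _ := exec.Command("sudo", "crictl", "logs",
				"--tail", "400", id).CombinedOutput()
			fmt.Printf("== %s [%s] ==\n%s", c, id, logs)
		}
	}
}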
	I1027 19:40:24.004981  584758 addons.go:514] duration metric: took 3.202188222s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1027 19:40:24.009954  584758 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1027 19:40:24.009989  584758 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1027 19:40:24.501303  584758 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1027 19:40:24.506191  584758 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1027 19:40:24.507545  584758 api_server.go:141] control plane version: v1.28.0
	I1027 19:40:24.507596  584758 api_server.go:131] duration metric: took 507.294268ms to wait for apiserver health ...
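The 500 responses above enumerate every apiserver poststarthook ([-] marks the one still failing); the wait loop simply re-polls until /healthz returns a bare 200 "ok". A minimal sketch of that wait, where skipping TLS verification stands in for the real client's cluster-CA handling:

// Minimal sketch of the healthz wait loop: poll until the endpoint
// returns 200 with body "ok". InsecureSkipVerify is an assumption of
// this sketch, not how minikube trusts the apiserver.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
		Timeout: 5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			// a 500 here carries the per-poststarthook report seen above
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy")
}

func main() {
	fmt.Println(waitHealthz("https://192.168.85.2:8443/healthz", time.Minute))
}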
	I1027 19:40:24.507606  584758 system_pods.go:43] waiting for kube-system pods to appear ...
	I1027 19:40:24.513256  584758 system_pods.go:59] 8 kube-system pods found
	I1027 19:40:24.513302  584758 system_pods.go:61] "coredns-5dd5756b68-xwmdt" [788993ae-aeb4-4fff-aaef-b7337405ca99] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 19:40:24.513313  584758 system_pods.go:61] "etcd-old-k8s-version-468959" [1cef07bb-0a18-477e-b01f-73b3af45812a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1027 19:40:24.513317  584758 system_pods.go:61] "kindnet-td5zb" [c5669cde-bf50-4064-83c2-f5b82b3a2813] Running
	I1027 19:40:24.513324  584758 system_pods.go:61] "kube-apiserver-old-k8s-version-468959" [aa2de03d-dd15-4531-88f9-bd83b90b9144] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1027 19:40:24.513329  584758 system_pods.go:61] "kube-controller-manager-old-k8s-version-468959" [d7a4a573-90e2-4682-b634-5ad5c864f2af] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1027 19:40:24.513340  584758 system_pods.go:61] "kube-proxy-tjbth" [834a476e-f5a7-4d1d-b8c6-43c163997c55] Running
	I1027 19:40:24.513346  584758 system_pods.go:61] "kube-scheduler-old-k8s-version-468959" [30033105-5333-4aaa-839e-b13ff4c159d4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1027 19:40:24.513350  584758 system_pods.go:61] "storage-provisioner" [9fbb3702-fce5-44f8-b8ff-f267f9ca147f] Running
	I1027 19:40:24.513357  584758 system_pods.go:74] duration metric: took 5.743503ms to wait for pod list to return data ...
	I1027 19:40:24.513373  584758 default_sa.go:34] waiting for default service account to be created ...
	I1027 19:40:24.515875  584758 default_sa.go:45] found service account: "default"
	I1027 19:40:24.515907  584758 default_sa.go:55] duration metric: took 2.523683ms for default service account to be created ...
	I1027 19:40:24.515920  584758 system_pods.go:116] waiting for k8s-apps to be running ...
	I1027 19:40:24.520267  584758 system_pods.go:86] 8 kube-system pods found
	I1027 19:40:24.520309  584758 system_pods.go:89] "coredns-5dd5756b68-xwmdt" [788993ae-aeb4-4fff-aaef-b7337405ca99] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 19:40:24.520321  584758 system_pods.go:89] "etcd-old-k8s-version-468959" [1cef07bb-0a18-477e-b01f-73b3af45812a] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1027 19:40:24.520329  584758 system_pods.go:89] "kindnet-td5zb" [c5669cde-bf50-4064-83c2-f5b82b3a2813] Running
	I1027 19:40:24.520345  584758 system_pods.go:89] "kube-apiserver-old-k8s-version-468959" [aa2de03d-dd15-4531-88f9-bd83b90b9144] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1027 19:40:24.520352  584758 system_pods.go:89] "kube-controller-manager-old-k8s-version-468959" [d7a4a573-90e2-4682-b634-5ad5c864f2af] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1027 19:40:24.520362  584758 system_pods.go:89] "kube-proxy-tjbth" [834a476e-f5a7-4d1d-b8c6-43c163997c55] Running
	I1027 19:40:24.520371  584758 system_pods.go:89] "kube-scheduler-old-k8s-version-468959" [30033105-5333-4aaa-839e-b13ff4c159d4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1027 19:40:24.520381  584758 system_pods.go:89] "storage-provisioner" [9fbb3702-fce5-44f8-b8ff-f267f9ca147f] Running
	I1027 19:40:24.520391  584758 system_pods.go:126] duration metric: took 4.463192ms to wait for k8s-apps to be running ...
	I1027 19:40:24.520400  584758 system_svc.go:44] waiting for kubelet service to be running ....
	I1027 19:40:24.520561  584758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 19:40:24.537471  584758 system_svc.go:56] duration metric: took 17.056219ms WaitForService to wait for kubelet
	I1027 19:40:24.537505  584758 kubeadm.go:586] duration metric: took 3.734999842s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 19:40:24.537531  584758 node_conditions.go:102] verifying NodePressure condition ...
	I1027 19:40:24.540498  584758 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1027 19:40:24.540534  584758 node_conditions.go:123] node cpu capacity is 8
	I1027 19:40:24.540550  584758 node_conditions.go:105] duration metric: took 3.013155ms to run NodePressure ...
	I1027 19:40:24.540573  584758 start.go:241] waiting for startup goroutines ...
	I1027 19:40:24.540585  584758 start.go:246] waiting for cluster config update ...
	I1027 19:40:24.540600  584758 start.go:255] writing updated cluster config ...
	I1027 19:40:24.540919  584758 ssh_runner.go:195] Run: rm -f paused
	I1027 19:40:24.545696  584758 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 19:40:24.551410  584758 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-xwmdt" in "kube-system" namespace to be "Ready" or be gone ...
	W1027 19:40:26.557409  584758 pod_ready.go:104] pod "coredns-5dd5756b68-xwmdt" is not "Ready", error: <nil>
	I1027 19:40:24.526469  585556 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1: (1.295894649s)
	I1027 19:40:24.526498  585556 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21801-352833/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1027 19:40:24.526532  585556 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1027 19:40:24.526582  585556 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1
	I1027 19:40:25.980644  585556 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1: (1.454032408s)
	I1027 19:40:25.980675  585556 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21801-352833/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1027 19:40:25.980701  585556 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1027 19:40:25.980741  585556 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1027 19:40:27.326493  585556 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.345720606s)
	I1027 19:40:27.326531  585556 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21801-352833/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1027 19:40:27.326564  585556 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1027 19:40:27.326616  585556 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
	W1027 19:40:25.921044  579549 node_ready.go:57] node "embed-certs-919237" has "Ready":"False" status (will retry)
	W1027 19:40:28.420333  579549 node_ready.go:57] node "embed-certs-919237" has "Ready":"False" status (will retry)
	I1027 19:40:27.147481  565798 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1027 19:40:27.148032  565798 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1027 19:40:27.148096  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 19:40:27.148182  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 19:40:27.187316  565798 cri.go:89] found id: "047d3a4a3e1e5638984a05fc7ebff787c5c5c7f381d978e93c663acb37994b72"
	I1027 19:40:27.187341  565798 cri.go:89] found id: ""
	I1027 19:40:27.187353  565798 logs.go:282] 1 containers: [047d3a4a3e1e5638984a05fc7ebff787c5c5c7f381d978e93c663acb37994b72]
	I1027 19:40:27.187417  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:40:27.192971  565798 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 19:40:27.193053  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 19:40:27.230419  565798 cri.go:89] found id: ""
	I1027 19:40:27.230453  565798 logs.go:282] 0 containers: []
	W1027 19:40:27.230465  565798 logs.go:284] No container was found matching "etcd"
	I1027 19:40:27.230473  565798 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 19:40:27.230545  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 19:40:27.266015  565798 cri.go:89] found id: ""
	I1027 19:40:27.266047  565798 logs.go:282] 0 containers: []
	W1027 19:40:27.266058  565798 logs.go:284] No container was found matching "coredns"
	I1027 19:40:27.266067  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 19:40:27.266145  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 19:40:27.297376  565798 cri.go:89] found id: "15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8"
	I1027 19:40:27.297404  565798 cri.go:89] found id: ""
	I1027 19:40:27.297417  565798 logs.go:282] 1 containers: [15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8]
	I1027 19:40:27.297485  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:40:27.302246  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 19:40:27.302325  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 19:40:27.333303  565798 cri.go:89] found id: ""
	I1027 19:40:27.333331  565798 logs.go:282] 0 containers: []
	W1027 19:40:27.333343  565798 logs.go:284] No container was found matching "kube-proxy"
	I1027 19:40:27.333351  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 19:40:27.333413  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 19:40:27.363293  565798 cri.go:89] found id: "df060ac929bc7a5dac337c7e85e10b2f4a51413be70b8202c8307826c4a72947"
	I1027 19:40:27.363320  565798 cri.go:89] found id: ""
	I1027 19:40:27.363332  565798 logs.go:282] 1 containers: [df060ac929bc7a5dac337c7e85e10b2f4a51413be70b8202c8307826c4a72947]
	I1027 19:40:27.363407  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:40:27.367979  565798 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 19:40:27.368052  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 19:40:27.397351  565798 cri.go:89] found id: ""
	I1027 19:40:27.397380  565798 logs.go:282] 0 containers: []
	W1027 19:40:27.397388  565798 logs.go:284] No container was found matching "kindnet"
	I1027 19:40:27.397395  565798 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 19:40:27.397457  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 19:40:27.429876  565798 cri.go:89] found id: ""
	I1027 19:40:27.429900  565798 logs.go:282] 0 containers: []
	W1027 19:40:27.429908  565798 logs.go:284] No container was found matching "storage-provisioner"
	I1027 19:40:27.429920  565798 logs.go:123] Gathering logs for kubelet ...
	I1027 19:40:27.429932  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 19:40:27.518118  565798 logs.go:123] Gathering logs for dmesg ...
	I1027 19:40:27.518168  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 19:40:27.542761  565798 logs.go:123] Gathering logs for describe nodes ...
	I1027 19:40:27.542802  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 19:40:27.607181  565798 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 19:40:27.607206  565798 logs.go:123] Gathering logs for kube-apiserver [047d3a4a3e1e5638984a05fc7ebff787c5c5c7f381d978e93c663acb37994b72] ...
	I1027 19:40:27.607222  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 047d3a4a3e1e5638984a05fc7ebff787c5c5c7f381d978e93c663acb37994b72"
	I1027 19:40:27.642557  565798 logs.go:123] Gathering logs for kube-scheduler [15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8] ...
	I1027 19:40:27.642595  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8"
	I1027 19:40:27.694553  565798 logs.go:123] Gathering logs for kube-controller-manager [df060ac929bc7a5dac337c7e85e10b2f4a51413be70b8202c8307826c4a72947] ...
	I1027 19:40:27.694596  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 df060ac929bc7a5dac337c7e85e10b2f4a51413be70b8202c8307826c4a72947"
	I1027 19:40:27.723727  565798 logs.go:123] Gathering logs for CRI-O ...
	I1027 19:40:27.723757  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 19:40:27.771212  565798 logs.go:123] Gathering logs for container status ...
	I1027 19:40:27.771259  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 19:40:30.308267  565798 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1027 19:40:30.419839  579549 node_ready.go:49] node "embed-certs-919237" is "Ready"
	I1027 19:40:30.419876  579549 node_ready.go:38] duration metric: took 11.5030023s for node "embed-certs-919237" to be "Ready" ...
	I1027 19:40:30.419898  579549 api_server.go:52] waiting for apiserver process to appear ...
	I1027 19:40:30.419992  579549 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 19:40:30.437659  579549 api_server.go:72] duration metric: took 11.867224602s to wait for apiserver process to appear ...
	I1027 19:40:30.437688  579549 api_server.go:88] waiting for apiserver healthz status ...
	I1027 19:40:30.437709  579549 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1027 19:40:30.444959  579549 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1027 19:40:30.446254  579549 api_server.go:141] control plane version: v1.34.1
	I1027 19:40:30.446291  579549 api_server.go:131] duration metric: took 8.593435ms to wait for apiserver health ...
	I1027 19:40:30.446304  579549 system_pods.go:43] waiting for kube-system pods to appear ...
	I1027 19:40:30.450907  579549 system_pods.go:59] 8 kube-system pods found
	I1027 19:40:30.450958  579549 system_pods.go:61] "coredns-66bc5c9577-9b9tz" [1f7cb1a7-6c91-4e4d-aecc-baaaa8f9bf22] Pending
	I1027 19:40:30.450969  579549 system_pods.go:61] "etcd-embed-certs-919237" [b995a0ef-722f-4183-aefb-e86d11f084b1] Running
	I1027 19:40:30.450974  579549 system_pods.go:61] "kindnet-6jx4q" [f346911c-5e04-4721-b4d8-c330f1629136] Running
	I1027 19:40:30.450979  579549 system_pods.go:61] "kube-apiserver-embed-certs-919237" [3a7050fe-4cb1-4d64-ad98-6cccb2f1581b] Running
	I1027 19:40:30.450992  579549 system_pods.go:61] "kube-controller-manager-embed-certs-919237" [0a466515-69f1-4023-b8ea-dac3554f8746] Running
	I1027 19:40:30.450997  579549 system_pods.go:61] "kube-proxy-rrq2h" [afd63d93-c691-44d9-aa8e-73e522ea9369] Running
	I1027 19:40:30.451004  579549 system_pods.go:61] "kube-scheduler-embed-certs-919237" [c89fed17-fc68-4bc6-8cfd-9a213ca6a68c] Running
	I1027 19:40:30.451017  579549 system_pods.go:61] "storage-provisioner" [a73b7a4c-44bb-443e-af42-78c83e6b6852] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 19:40:30.451031  579549 system_pods.go:74] duration metric: took 4.719808ms to wait for pod list to return data ...
	I1027 19:40:30.451045  579549 default_sa.go:34] waiting for default service account to be created ...
	I1027 19:40:30.453801  579549 default_sa.go:45] found service account: "default"
	I1027 19:40:30.453829  579549 default_sa.go:55] duration metric: took 2.776208ms for default service account to be created ...
	I1027 19:40:30.453842  579549 system_pods.go:116] waiting for k8s-apps to be running ...
	I1027 19:40:30.457126  579549 system_pods.go:86] 8 kube-system pods found
	I1027 19:40:30.457173  579549 system_pods.go:89] "coredns-66bc5c9577-9b9tz" [1f7cb1a7-6c91-4e4d-aecc-baaaa8f9bf22] Pending
	I1027 19:40:30.457180  579549 system_pods.go:89] "etcd-embed-certs-919237" [b995a0ef-722f-4183-aefb-e86d11f084b1] Running
	I1027 19:40:30.457186  579549 system_pods.go:89] "kindnet-6jx4q" [f346911c-5e04-4721-b4d8-c330f1629136] Running
	I1027 19:40:30.457191  579549 system_pods.go:89] "kube-apiserver-embed-certs-919237" [3a7050fe-4cb1-4d64-ad98-6cccb2f1581b] Running
	I1027 19:40:30.457197  579549 system_pods.go:89] "kube-controller-manager-embed-certs-919237" [0a466515-69f1-4023-b8ea-dac3554f8746] Running
	I1027 19:40:30.457201  579549 system_pods.go:89] "kube-proxy-rrq2h" [afd63d93-c691-44d9-aa8e-73e522ea9369] Running
	I1027 19:40:30.457206  579549 system_pods.go:89] "kube-scheduler-embed-certs-919237" [c89fed17-fc68-4bc6-8cfd-9a213ca6a68c] Running
	I1027 19:40:30.457214  579549 system_pods.go:89] "storage-provisioner" [a73b7a4c-44bb-443e-af42-78c83e6b6852] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 19:40:30.457254  579549 retry.go:31] will retry after 208.02205ms: missing components: kube-dns
	I1027 19:40:30.669531  579549 system_pods.go:86] 8 kube-system pods found
	I1027 19:40:30.669581  579549 system_pods.go:89] "coredns-66bc5c9577-9b9tz" [1f7cb1a7-6c91-4e4d-aecc-baaaa8f9bf22] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 19:40:30.669588  579549 system_pods.go:89] "etcd-embed-certs-919237" [b995a0ef-722f-4183-aefb-e86d11f084b1] Running
	I1027 19:40:30.669600  579549 system_pods.go:89] "kindnet-6jx4q" [f346911c-5e04-4721-b4d8-c330f1629136] Running
	I1027 19:40:30.669606  579549 system_pods.go:89] "kube-apiserver-embed-certs-919237" [3a7050fe-4cb1-4d64-ad98-6cccb2f1581b] Running
	I1027 19:40:30.669611  579549 system_pods.go:89] "kube-controller-manager-embed-certs-919237" [0a466515-69f1-4023-b8ea-dac3554f8746] Running
	I1027 19:40:30.669616  579549 system_pods.go:89] "kube-proxy-rrq2h" [afd63d93-c691-44d9-aa8e-73e522ea9369] Running
	I1027 19:40:30.669620  579549 system_pods.go:89] "kube-scheduler-embed-certs-919237" [c89fed17-fc68-4bc6-8cfd-9a213ca6a68c] Running
	I1027 19:40:30.669627  579549 system_pods.go:89] "storage-provisioner" [a73b7a4c-44bb-443e-af42-78c83e6b6852] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 19:40:30.669650  579549 retry.go:31] will retry after 310.960964ms: missing components: kube-dns
	I1027 19:40:30.985820  579549 system_pods.go:86] 8 kube-system pods found
	I1027 19:40:30.985859  579549 system_pods.go:89] "coredns-66bc5c9577-9b9tz" [1f7cb1a7-6c91-4e4d-aecc-baaaa8f9bf22] Running
	I1027 19:40:30.985866  579549 system_pods.go:89] "etcd-embed-certs-919237" [b995a0ef-722f-4183-aefb-e86d11f084b1] Running
	I1027 19:40:30.985871  579549 system_pods.go:89] "kindnet-6jx4q" [f346911c-5e04-4721-b4d8-c330f1629136] Running
	I1027 19:40:30.985876  579549 system_pods.go:89] "kube-apiserver-embed-certs-919237" [3a7050fe-4cb1-4d64-ad98-6cccb2f1581b] Running
	I1027 19:40:30.985881  579549 system_pods.go:89] "kube-controller-manager-embed-certs-919237" [0a466515-69f1-4023-b8ea-dac3554f8746] Running
	I1027 19:40:30.985885  579549 system_pods.go:89] "kube-proxy-rrq2h" [afd63d93-c691-44d9-aa8e-73e522ea9369] Running
	I1027 19:40:30.985890  579549 system_pods.go:89] "kube-scheduler-embed-certs-919237" [c89fed17-fc68-4bc6-8cfd-9a213ca6a68c] Running
	I1027 19:40:30.985895  579549 system_pods.go:89] "storage-provisioner" [a73b7a4c-44bb-443e-af42-78c83e6b6852] Running
	I1027 19:40:30.985907  579549 system_pods.go:126] duration metric: took 532.056734ms to wait for k8s-apps to be running ...
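The "will retry after 208.02205ms: missing components: kube-dns" lines above come from a jittered retry loop: re-evaluate the pod list, sleep a short randomized interval, repeat until nothing is missing. A sketch of that shape, with a stub predicate standing in for the real system-pods check:

// Sketch of the retry.go-style wait above. The 200-500ms jitter window
// is an assumption chosen to resemble the logged delays.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func retryUntil(budget time.Duration, check func() (bool, string)) error {
	deadline := time.Now().Add(budget)
	for {
		ok, missing := check()
		if ok {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out; missing components: %s", missing)
		}
		d := time.Duration(200+rand.Intn(300)) * time.Millisecond
		fmt.Printf("will retry after %v: missing components: %s\n", d, missing)
		time.Sleep(d)
	}
}

func main() {
	tries := 0
	_ = retryUntil(5*time.Second, func() (bool, string) {
		tries++
		return tries >= 3, "kube-dns" // stub: succeeds on the third attempt
	})
}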
	I1027 19:40:30.985918  579549 system_svc.go:44] waiting for kubelet service to be running ....
	I1027 19:40:30.985983  579549 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 19:40:31.000039  579549 system_svc.go:56] duration metric: took 14.099925ms WaitForService to wait for kubelet
	I1027 19:40:31.000088  579549 kubeadm.go:586] duration metric: took 12.429656685s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 19:40:31.000115  579549 node_conditions.go:102] verifying NodePressure condition ...
	I1027 19:40:31.003320  579549 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1027 19:40:31.003355  579549 node_conditions.go:123] node cpu capacity is 8
	I1027 19:40:31.003371  579549 node_conditions.go:105] duration metric: took 3.250417ms to run NodePressure ...
	I1027 19:40:31.003385  579549 start.go:241] waiting for startup goroutines ...
	I1027 19:40:31.003394  579549 start.go:246] waiting for cluster config update ...
	I1027 19:40:31.003409  579549 start.go:255] writing updated cluster config ...
	I1027 19:40:31.003733  579549 ssh_runner.go:195] Run: rm -f paused
	I1027 19:40:31.008396  579549 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 19:40:31.013384  579549 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-9b9tz" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:40:31.020409  579549 pod_ready.go:94] pod "coredns-66bc5c9577-9b9tz" is "Ready"
	I1027 19:40:31.020435  579549 pod_ready.go:86] duration metric: took 7.02408ms for pod "coredns-66bc5c9577-9b9tz" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:40:31.022659  579549 pod_ready.go:83] waiting for pod "etcd-embed-certs-919237" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:40:31.026940  579549 pod_ready.go:94] pod "etcd-embed-certs-919237" is "Ready"
	I1027 19:40:31.026979  579549 pod_ready.go:86] duration metric: took 4.299298ms for pod "etcd-embed-certs-919237" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:40:31.029101  579549 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-919237" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:40:31.032993  579549 pod_ready.go:94] pod "kube-apiserver-embed-certs-919237" is "Ready"
	I1027 19:40:31.033017  579549 pod_ready.go:86] duration metric: took 3.896894ms for pod "kube-apiserver-embed-certs-919237" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:40:31.034992  579549 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-919237" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:40:31.413695  579549 pod_ready.go:94] pod "kube-controller-manager-embed-certs-919237" is "Ready"
	I1027 19:40:31.413730  579549 pod_ready.go:86] duration metric: took 378.711455ms for pod "kube-controller-manager-embed-certs-919237" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:40:31.613940  579549 pod_ready.go:83] waiting for pod "kube-proxy-rrq2h" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:40:32.014075  579549 pod_ready.go:94] pod "kube-proxy-rrq2h" is "Ready"
	I1027 19:40:32.014108  579549 pod_ready.go:86] duration metric: took 400.139288ms for pod "kube-proxy-rrq2h" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:40:32.214258  579549 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-919237" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:40:32.613558  579549 pod_ready.go:94] pod "kube-scheduler-embed-certs-919237" is "Ready"
	I1027 19:40:32.613600  579549 pod_ready.go:86] duration metric: took 399.315156ms for pod "kube-scheduler-embed-certs-919237" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:40:32.613613  579549 pod_ready.go:40] duration metric: took 1.605184242s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 19:40:32.663060  579549 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1027 19:40:32.746503  579549 out.go:179] * Done! kubectl is now configured to use "embed-certs-919237" cluster and "default" namespace by default
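The pod_ready checks that close out this startup test each kube-system pod's Ready condition. A small client-go sketch of the same test; the kubeconfig path and pod name are taken from the log above but would differ per run:

// isPodReady reports whether the pod's Ready condition is True, the
// same test pod_ready.go applies above. A sketch, not minikube's code.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	pod, err := cs.CoreV1().Pods("kube-system").Get(
		context.TODO(), "coredns-66bc5c9577-9b9tz", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("ready:", isPodReady(pod))
}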
	W1027 19:40:28.558528  584758 pod_ready.go:104] pod "coredns-5dd5756b68-xwmdt" is not "Ready", error: <nil>
	W1027 19:40:30.558638  584758 pod_ready.go:104] pod "coredns-5dd5756b68-xwmdt" is not "Ready", error: <nil>
	I1027 19:40:30.679548  585556 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0: (3.352899563s)
	I1027 19:40:30.679585  585556 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21801-352833/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1027 19:40:30.679616  585556 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1027 19:40:30.679668  585556 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1027 19:40:31.296226  585556 cache_images.go:322] Transferred and loaded /home/jenkins/minikube-integration/21801-352833/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1027 19:40:31.296272  585556 cache_images.go:124] Successfully loaded all cached images
	I1027 19:40:31.296278  585556 cache_images.go:93] duration metric: took 12.216774492s to LoadCachedImages
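LoadCachedImages above serializes one `podman load -i` per cached archive; CRI-O can read images loaded through podman because both use the shared containers/storage backend. A local sketch of that loop (the real flow runs each command over SSH):

// Sketch of the serialized image-load loop above. Running locally
// instead of over SSH is an assumption of this sketch.
package main

import (
	"fmt"
	"os/exec"
	"path/filepath"
)

func main() {
	archives, err := filepath.Glob("/var/lib/minikube/images/*")
	if err != nil {
		panic(err)
	}
	for _, a := range archives {
		fmt.Println("Loading image:", a)
		if out, err := exec.Command("sudo", "podman", "load", "-i", a).CombinedOutput(); err != nil {
			fmt.Printf("load %s failed: %v\n%s", a, err, out)
			continue
		}
		fmt.Println("Transferred and loaded", a, "from cache")
	}
}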
	I1027 19:40:31.296291  585556 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1027 19:40:31.296404  585556 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-095885 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-095885 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
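The kubelet unit dump above is a systemd drop-in: the empty `ExecStart=` clears the packaged command line before the override sets node-specific flags. A sketch of rendering such a drop-in with text/template; the field names and printing to stdout (rather than installing /etc/systemd/system/kubelet.service.d/10-kubeadm.conf over SSH) are simplifications:

// Sketch of rendering the kubelet drop-in shown above. The flag list is
// abbreviated relative to the real ExecStart.
package main

import (
	"os"
	"text/template"
)

const unit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart={{.KubeletPath}} --hostname-override={{.NodeName}} --node-ip={{.NodeIP}} --kubeconfig=/etc/kubernetes/kubelet.conf

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	_ = t.Execute(os.Stdout, map[string]string{
		"KubeletPath": "/var/lib/minikube/binaries/v1.34.1/kubelet",
		"NodeName":    "no-preload-095885",
		"NodeIP":      "192.168.76.2",
	})
}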
	I1027 19:40:31.296472  585556 ssh_runner.go:195] Run: crio config
	I1027 19:40:31.343968  585556 cni.go:84] Creating CNI manager for ""
	I1027 19:40:31.343996  585556 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 19:40:31.344018  585556 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1027 19:40:31.344041  585556 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-095885 NodeName:no-preload-095885 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1027 19:40:31.344209  585556 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-095885"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
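	The generated kubeadm config above is a single file holding four `---`-separated YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A sketch that walks such a stream and prints each document's kind, reading from stdin for brevity:

// Sketch of splitting a multi-document kubeadm config. Reading from
// stdin is an assumption; minikube writes the file to the node instead.
package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	dec := yaml.NewDecoder(os.Stdin)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		fmt.Printf("%v / %v\n", doc["apiVersion"], doc["kind"])
	}
}

Piping the /var/tmp/minikube/kubeadm.yaml.new shown above through this would list the four kinds in order.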
	
	I1027 19:40:31.344326  585556 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1027 19:40:31.354077  585556 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1027 19:40:31.354177  585556 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1027 19:40:31.363475  585556 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
	I1027 19:40:31.363530  585556 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21801-352833/.minikube/cache/linux/amd64/v1.34.1/kubeadm
	I1027 19:40:31.363581  585556 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1027 19:40:31.363654  585556 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/21801-352833/.minikube/cache/linux/amd64/v1.34.1/kubelet
	I1027 19:40:31.369921  585556 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1027 19:40:31.369967  585556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/cache/linux/amd64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (60559544 bytes)
	I1027 19:40:32.089280  585556 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 19:40:32.103601  585556 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1027 19:40:32.108409  585556 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1027 19:40:32.108457  585556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/cache/linux/amd64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (59195684 bytes)
	I1027 19:40:32.195128  585556 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1027 19:40:32.201428  585556 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1027 19:40:32.201470  585556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/cache/linux/amd64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (74027192 bytes)
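Because this is the no-preload profile, the kubectl/kubelet/kubeadm binaries are fetched from dl.k8s.io with a `?checksum=file:...sha256` URL, i.e. each download is verified against the digest published next to it. A sketch of that verify-while-downloading step; the destination path is illustrative:

// Sketch of the checksum=file:...sha256 scheme above: fetch the binary,
// fetch the published digest, compare. Error handling is trimmed.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

func download(url, sumURL, dest string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	f, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer f.Close()
	h := sha256.New()
	// hash the stream while writing it to disk
	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
		return err
	}
	sumResp, err := http.Get(sumURL)
	if err != nil {
		return err
	}
	defer sumResp.Body.Close()
	want, _ := io.ReadAll(sumResp.Body)
	if hex.EncodeToString(h.Sum(nil)) != strings.TrimSpace(string(want)) {
		return fmt.Errorf("checksum mismatch for %s", dest)
	}
	return nil
}

func main() {
	base := "https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm"
	fmt.Println(download(base, base+".sha256", "/tmp/kubeadm"))
}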
	I1027 19:40:32.461030  585556 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1027 19:40:32.470347  585556 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1027 19:40:32.485196  585556 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1027 19:40:32.545307  585556 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1027 19:40:32.562207  585556 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1027 19:40:32.566911  585556 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
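The bash one-liner above makes the /etc/hosts edit idempotent: strip any prior control-plane.minikube.internal line, append a fresh one, and install the result with sudo cp. The same filter-and-append in Go, writing to a temp path rather than /etc/hosts:

// Sketch of the idempotent hosts rewrite above. Writing /tmp/hosts.new
// instead of /etc/hosts is an assumption (the real flow uses sudo cp).
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const entry = "192.168.76.2\tcontrol-plane.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		// drop any existing control-plane entry, as grep -v does above
		if !strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			kept = append(kept, line)
		}
	}
	out := strings.TrimRight(strings.Join(kept, "\n"), "\n") + "\n" + entry + "\n"
	if err := os.WriteFile("/tmp/hosts.new", []byte(out), 0644); err != nil {
		panic(err)
	}
	fmt.Println("wrote /tmp/hosts.new; install with: sudo cp /tmp/hosts.new /etc/hosts")
}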
	I1027 19:40:32.609249  585556 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 19:40:32.741976  585556 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 19:40:32.767173  585556 certs.go:69] Setting up /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/no-preload-095885 for IP: 192.168.76.2
	I1027 19:40:32.767202  585556 certs.go:195] generating shared ca certs ...
	I1027 19:40:32.767221  585556 certs.go:227] acquiring lock for ca certs: {Name:mk4bdbca32068f6f817fc35fdc496e961dc3e0d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:40:32.767376  585556 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21801-352833/.minikube/ca.key
	I1027 19:40:32.767415  585556 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21801-352833/.minikube/proxy-client-ca.key
	I1027 19:40:32.767426  585556 certs.go:257] generating profile certs ...
	I1027 19:40:32.767483  585556 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/no-preload-095885/client.key
	I1027 19:40:32.767498  585556 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/no-preload-095885/client.crt with IP's: []
	I1027 19:40:32.924733  585556 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/no-preload-095885/client.crt ...
	I1027 19:40:32.924764  585556 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/no-preload-095885/client.crt: {Name:mkd1757aae88a59ee6bfc1ce7123260b7a6aa15e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:40:32.924983  585556 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/no-preload-095885/client.key ...
	I1027 19:40:32.924995  585556 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/no-preload-095885/client.key: {Name:mk91eb5bf0dbfea4fbc838be2ba15ccdda47cec1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:40:32.925130  585556 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/no-preload-095885/apiserver.key.e3f5f1b4
	I1027 19:40:32.925165  585556 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/no-preload-095885/apiserver.crt.e3f5f1b4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1027 19:40:33.217371  585556 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/no-preload-095885/apiserver.crt.e3f5f1b4 ...
	I1027 19:40:33.217406  585556 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/no-preload-095885/apiserver.crt.e3f5f1b4: {Name:mk920b015190212ac77ed4f4e813621ebcb641c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:40:33.217599  585556 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/no-preload-095885/apiserver.key.e3f5f1b4 ...
	I1027 19:40:33.217613  585556 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/no-preload-095885/apiserver.key.e3f5f1b4: {Name:mkbd8eca880b66853961c47b28a6ea1b8c86f9b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:40:33.217688  585556 certs.go:382] copying /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/no-preload-095885/apiserver.crt.e3f5f1b4 -> /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/no-preload-095885/apiserver.crt
	I1027 19:40:33.217772  585556 certs.go:386] copying /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/no-preload-095885/apiserver.key.e3f5f1b4 -> /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/no-preload-095885/apiserver.key
	I1027 19:40:33.217829  585556 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/no-preload-095885/proxy-client.key
	I1027 19:40:33.217845  585556 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/no-preload-095885/proxy-client.crt with IP's: []
	I1027 19:40:33.323636  585556 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/no-preload-095885/proxy-client.crt ...
	I1027 19:40:33.323670  585556 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/no-preload-095885/proxy-client.crt: {Name:mk34ebcfa6ac5e9fd562bf49402486b94ffd9736 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:40:33.323862  585556 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/no-preload-095885/proxy-client.key ...
	I1027 19:40:33.323882  585556 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/no-preload-095885/proxy-client.key: {Name:mk23512600f7eeeb13fce9c30d420ff19354ebc5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
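The profile certs generated above carry IP SANs covering the service VIP, loopback, and the node address ([10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]). A sketch of producing a cert with that SAN list via crypto/x509; self-signing here is a simplification, since minikube signs these with its minikubeCA key:

// Sketch of generating an apiserver-style cert with the IP SANs logged
// above. Self-signed for brevity; not minikube's signing path.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.76.2"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}))
}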
	I1027 19:40:33.324082  585556 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/356415.pem (1338 bytes)
	W1027 19:40:33.324146  585556 certs.go:480] ignoring /home/jenkins/minikube-integration/21801-352833/.minikube/certs/356415_empty.pem, impossibly tiny 0 bytes
	I1027 19:40:33.324163  585556 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca-key.pem (1675 bytes)
	I1027 19:40:33.324200  585556 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca.pem (1078 bytes)
	I1027 19:40:33.324229  585556 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/cert.pem (1123 bytes)
	I1027 19:40:33.324266  585556 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/key.pem (1679 bytes)
	I1027 19:40:33.324314  585556 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-352833/.minikube/files/etc/ssl/certs/3564152.pem (1708 bytes)
	I1027 19:40:33.324933  585556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1027 19:40:33.347862  585556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1027 19:40:33.369457  585556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1027 19:40:33.389439  585556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1027 19:40:33.409064  585556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/no-preload-095885/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1027 19:40:33.429347  585556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/no-preload-095885/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1027 19:40:33.449108  585556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/no-preload-095885/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1027 19:40:33.470870  585556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/no-preload-095885/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1027 19:40:33.491573  585556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/certs/356415.pem --> /usr/share/ca-certificates/356415.pem (1338 bytes)
	I1027 19:40:33.513507  585556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/files/etc/ssl/certs/3564152.pem --> /usr/share/ca-certificates/3564152.pem (1708 bytes)
	I1027 19:40:33.533324  585556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1027 19:40:33.553888  585556 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1027 19:40:33.569712  585556 ssh_runner.go:195] Run: openssl version
	I1027 19:40:33.577201  585556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/356415.pem && ln -fs /usr/share/ca-certificates/356415.pem /etc/ssl/certs/356415.pem"
	I1027 19:40:33.587556  585556 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/356415.pem
	I1027 19:40:33.592426  585556 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 27 19:02 /usr/share/ca-certificates/356415.pem
	I1027 19:40:33.592504  585556 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/356415.pem
	I1027 19:40:33.629622  585556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/356415.pem /etc/ssl/certs/51391683.0"
	I1027 19:40:33.640780  585556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3564152.pem && ln -fs /usr/share/ca-certificates/3564152.pem /etc/ssl/certs/3564152.pem"
	I1027 19:40:33.651254  585556 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3564152.pem
	I1027 19:40:33.656144  585556 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 27 19:02 /usr/share/ca-certificates/3564152.pem
	I1027 19:40:33.656219  585556 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3564152.pem
	I1027 19:40:33.692614  585556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3564152.pem /etc/ssl/certs/3ec20f2e.0"
	I1027 19:40:33.702830  585556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1027 19:40:33.714057  585556 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1027 19:40:33.719116  585556 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 27 18:56 /usr/share/ca-certificates/minikubeCA.pem
	I1027 19:40:33.719198  585556 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1027 19:40:33.756645  585556 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
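	(For reference: the three hash-and-link passes above follow OpenSSL's subject-hash convention, under which each CA in /etc/ssl/certs is reachable via a symlink named <subject-hash>.0. A minimal shell sketch of the same steps the log just ran, with the minikubeCA path taken from the lines above:
	
	  # compute the subject hash OpenSSL uses when looking up a CA at verify time
	  hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  # expose the cert under that hash so libssl consumers can find it
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
	)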
	I1027 19:40:33.767158  585556 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1027 19:40:33.771933  585556 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1027 19:40:33.771996  585556 kubeadm.go:400] StartCluster: {Name:no-preload-095885 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-095885 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 19:40:33.772091  585556 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 19:40:33.772159  585556 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 19:40:33.803379  585556 cri.go:89] found id: ""
	I1027 19:40:33.803441  585556 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1027 19:40:33.813654  585556 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1027 19:40:33.823557  585556 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1027 19:40:33.823638  585556 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1027 19:40:33.832786  585556 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1027 19:40:33.832807  585556 kubeadm.go:157] found existing configuration files:
	
	I1027 19:40:33.832848  585556 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1027 19:40:33.842894  585556 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1027 19:40:33.842967  585556 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1027 19:40:33.852896  585556 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1027 19:40:33.862249  585556 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1027 19:40:33.862312  585556 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1027 19:40:33.871462  585556 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1027 19:40:33.880118  585556 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1027 19:40:33.880191  585556 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1027 19:40:33.888776  585556 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1027 19:40:33.897118  585556 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1027 19:40:33.897178  585556 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
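	(The four grep/rm pairs above are the stale-kubeconfig cleanup: any /etc/kubernetes/*.conf that does not already point at the expected control-plane endpoint is removed before kubeadm init runs. A hedged shell sketch of the same loop, with the endpoint and file names taken verbatim from the log:
	
	  endpoint="https://control-plane.minikube.internal:8443"
	  for f in admin kubelet controller-manager scheduler; do
	    # keep the file only if it already targets the expected endpoint
	    sudo grep -q "$endpoint" "/etc/kubernetes/${f}.conf" || sudo rm -f "/etc/kubernetes/${f}.conf"
	  done
	)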
	I1027 19:40:33.905570  585556 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1027 19:40:33.949477  585556 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1027 19:40:33.949573  585556 kubeadm.go:318] [preflight] Running pre-flight checks
	I1027 19:40:33.972681  585556 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1027 19:40:33.972761  585556 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1042-gcp
	I1027 19:40:33.972836  585556 kubeadm.go:318] OS: Linux
	I1027 19:40:33.972923  585556 kubeadm.go:318] CGROUPS_CPU: enabled
	I1027 19:40:33.972971  585556 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1027 19:40:33.973045  585556 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1027 19:40:33.973107  585556 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1027 19:40:33.973178  585556 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1027 19:40:33.973221  585556 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1027 19:40:33.973263  585556 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1027 19:40:33.973302  585556 kubeadm.go:318] CGROUPS_IO: enabled
	I1027 19:40:34.036474  585556 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1027 19:40:34.036670  585556 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1027 19:40:34.036843  585556 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1027 19:40:34.052641  585556 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1027 19:40:35.309960  565798 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1027 19:40:35.310025  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 19:40:35.310109  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 19:40:35.339880  565798 cri.go:89] found id: "f34fc0e2944cfceb17dc29d80d9459a84377ffb9e427b9efe6e57ccc03a358c8"
	I1027 19:40:35.339908  565798 cri.go:89] found id: "047d3a4a3e1e5638984a05fc7ebff787c5c5c7f381d978e93c663acb37994b72"
	I1027 19:40:35.339913  565798 cri.go:89] found id: ""
	I1027 19:40:35.339923  565798 logs.go:282] 2 containers: [f34fc0e2944cfceb17dc29d80d9459a84377ffb9e427b9efe6e57ccc03a358c8 047d3a4a3e1e5638984a05fc7ebff787c5c5c7f381d978e93c663acb37994b72]
	I1027 19:40:35.339990  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:40:35.344173  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:40:35.348142  565798 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 19:40:35.348213  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 19:40:35.379352  565798 cri.go:89] found id: ""
	I1027 19:40:35.379383  565798 logs.go:282] 0 containers: []
	W1027 19:40:35.379394  565798 logs.go:284] No container was found matching "etcd"
	I1027 19:40:35.379403  565798 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 19:40:35.379467  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 19:40:35.408518  565798 cri.go:89] found id: ""
	I1027 19:40:35.408550  565798 logs.go:282] 0 containers: []
	W1027 19:40:35.408562  565798 logs.go:284] No container was found matching "coredns"
	I1027 19:40:35.408570  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 19:40:35.408635  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 19:40:35.439365  565798 cri.go:89] found id: "15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8"
	I1027 19:40:35.439395  565798 cri.go:89] found id: ""
	I1027 19:40:35.439406  565798 logs.go:282] 1 containers: [15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8]
	I1027 19:40:35.439468  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:40:35.444166  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 19:40:35.444245  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 19:40:35.477567  565798 cri.go:89] found id: ""
	I1027 19:40:35.477598  565798 logs.go:282] 0 containers: []
	W1027 19:40:35.477618  565798 logs.go:284] No container was found matching "kube-proxy"
	I1027 19:40:35.477626  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 19:40:35.477695  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 19:40:35.518175  565798 cri.go:89] found id: "df060ac929bc7a5dac337c7e85e10b2f4a51413be70b8202c8307826c4a72947"
	I1027 19:40:35.518203  565798 cri.go:89] found id: ""
	I1027 19:40:35.518212  565798 logs.go:282] 1 containers: [df060ac929bc7a5dac337c7e85e10b2f4a51413be70b8202c8307826c4a72947]
	I1027 19:40:35.518275  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:40:35.523708  565798 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 19:40:35.523783  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 19:40:35.551235  565798 cri.go:89] found id: ""
	I1027 19:40:35.551264  565798 logs.go:282] 0 containers: []
	W1027 19:40:35.551275  565798 logs.go:284] No container was found matching "kindnet"
	I1027 19:40:35.551283  565798 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 19:40:35.551344  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 19:40:35.586087  565798 cri.go:89] found id: ""
	I1027 19:40:35.586113  565798 logs.go:282] 0 containers: []
	W1027 19:40:35.586121  565798 logs.go:284] No container was found matching "storage-provisioner"
	I1027 19:40:35.586173  565798 logs.go:123] Gathering logs for describe nodes ...
	I1027 19:40:35.586188  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
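	(Each cri.go listing above is a single crictl query filtered by container name, with empty output reported as "No container was found matching ...". A hand-run equivalent, using the same container names and the same crictl invocation shown in the log:
	
	  # list all (running or exited) control-plane containers by name
	  for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager; do
	    ids=$(sudo crictl ps -a --quiet --name="$name")
	    echo "${name}: ${ids:-<none>}"
	  done
	)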
	I1027 19:40:34.054737  585556 out.go:252]   - Generating certificates and keys ...
	I1027 19:40:34.054855  585556 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1027 19:40:34.054947  585556 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1027 19:40:34.296517  585556 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1027 19:40:34.789709  585556 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1027 19:40:35.096562  585556 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1027 19:40:35.206871  585556 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1027 19:40:35.299444  585556 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1027 19:40:35.299641  585556 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-095885] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1027 19:40:35.464838  585556 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1027 19:40:35.465041  585556 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-095885] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1027 19:40:35.612642  585556 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1027 19:40:35.700437  585556 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1027 19:40:36.048539  585556 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1027 19:40:36.048753  585556 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1027 19:40:36.119570  585556 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1027 19:40:36.277344  585556 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1027 19:40:36.522924  585556 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1027 19:40:36.928340  585556 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1027 19:40:37.477115  585556 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1027 19:40:37.477871  585556 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1027 19:40:37.484224  585556 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1027 19:40:33.058247  584758 pod_ready.go:104] pod "coredns-5dd5756b68-xwmdt" is not "Ready", error: <nil>
	W1027 19:40:35.558489  584758 pod_ready.go:104] pod "coredns-5dd5756b68-xwmdt" is not "Ready", error: <nil>
	W1027 19:40:37.559164  584758 pod_ready.go:104] pod "coredns-5dd5756b68-xwmdt" is not "Ready", error: <nil>
	I1027 19:40:37.485638  585556 out.go:252]   - Booting up control plane ...
	I1027 19:40:37.485768  585556 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1027 19:40:37.485905  585556 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1027 19:40:37.486748  585556 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1027 19:40:37.502677  585556 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1027 19:40:37.502827  585556 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1027 19:40:37.510494  585556 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1027 19:40:37.510768  585556 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1027 19:40:37.510878  585556 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1027 19:40:37.648215  585556 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1027 19:40:37.648381  585556 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
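	(The kubelet-check in the last line polls the healthz URL it names; the same probe can be run by hand on the node, assuming curl is available:
	
	  # returns "ok" once the kubelet's health server is up
	  curl -sf http://127.0.0.1:10248/healthz && echo
	)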
	
	
	==> CRI-O <==
	Oct 27 19:40:30 embed-certs-919237 crio[783]: time="2025-10-27T19:40:30.806738663Z" level=info msg="Starting container: 6dd6ffc28ec4ac677b1c4567e0f4b6f64ec681289672b92c31a1db9bfc0edaa1" id=55ba492e-fd0f-4068-a691-4e1c9e492762 name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 19:40:30 embed-certs-919237 crio[783]: time="2025-10-27T19:40:30.80912259Z" level=info msg="Started container" PID=1818 containerID=6dd6ffc28ec4ac677b1c4567e0f4b6f64ec681289672b92c31a1db9bfc0edaa1 description=kube-system/coredns-66bc5c9577-9b9tz/coredns id=55ba492e-fd0f-4068-a691-4e1c9e492762 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d2f7274dde0a5ea5d3b64029141de7cf34cb1dedbb0dceea4d76943b8978dddc
	Oct 27 19:40:33 embed-certs-919237 crio[783]: time="2025-10-27T19:40:33.313944099Z" level=info msg="Running pod sandbox: default/busybox/POD" id=0cfb56ee-0778-410d-a4c2-2f17775e7360 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 27 19:40:33 embed-certs-919237 crio[783]: time="2025-10-27T19:40:33.314055866Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:40:33 embed-certs-919237 crio[783]: time="2025-10-27T19:40:33.319162218Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:5ba838b46098fdeeb61f7378b4309f292d1bd3f6b361fe488107b05a2f8a8d3b UID:ec9e6b8d-f937-4aee-b9b9-0131d28f83a9 NetNS:/var/run/netns/ea4ba347-1cfc-487e-85ad-8db78566aadf Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008aff0}] Aliases:map[]}"
	Oct 27 19:40:33 embed-certs-919237 crio[783]: time="2025-10-27T19:40:33.319202842Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 27 19:40:33 embed-certs-919237 crio[783]: time="2025-10-27T19:40:33.329944619Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:5ba838b46098fdeeb61f7378b4309f292d1bd3f6b361fe488107b05a2f8a8d3b UID:ec9e6b8d-f937-4aee-b9b9-0131d28f83a9 NetNS:/var/run/netns/ea4ba347-1cfc-487e-85ad-8db78566aadf Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008aff0}] Aliases:map[]}"
	Oct 27 19:40:33 embed-certs-919237 crio[783]: time="2025-10-27T19:40:33.330078249Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 27 19:40:33 embed-certs-919237 crio[783]: time="2025-10-27T19:40:33.330890964Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 27 19:40:33 embed-certs-919237 crio[783]: time="2025-10-27T19:40:33.331818479Z" level=info msg="Ran pod sandbox 5ba838b46098fdeeb61f7378b4309f292d1bd3f6b361fe488107b05a2f8a8d3b with infra container: default/busybox/POD" id=0cfb56ee-0778-410d-a4c2-2f17775e7360 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 27 19:40:33 embed-certs-919237 crio[783]: time="2025-10-27T19:40:33.333218481Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=80348aea-b919-4e53-aa0d-22bf95002df5 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 19:40:33 embed-certs-919237 crio[783]: time="2025-10-27T19:40:33.333403903Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=80348aea-b919-4e53-aa0d-22bf95002df5 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 19:40:33 embed-certs-919237 crio[783]: time="2025-10-27T19:40:33.333461158Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=80348aea-b919-4e53-aa0d-22bf95002df5 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 19:40:33 embed-certs-919237 crio[783]: time="2025-10-27T19:40:33.334292111Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=70b5a0cd-79b9-4fd9-b43b-68751a614c92 name=/runtime.v1.ImageService/PullImage
	Oct 27 19:40:33 embed-certs-919237 crio[783]: time="2025-10-27T19:40:33.336293936Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 27 19:40:34 embed-certs-919237 crio[783]: time="2025-10-27T19:40:34.356593973Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=70b5a0cd-79b9-4fd9-b43b-68751a614c92 name=/runtime.v1.ImageService/PullImage
	Oct 27 19:40:34 embed-certs-919237 crio[783]: time="2025-10-27T19:40:34.357476071Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=968506e8-b649-4e62-a78f-e9b341d470f0 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 19:40:34 embed-certs-919237 crio[783]: time="2025-10-27T19:40:34.358864091Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=f9b1f369-fe3c-4374-948a-5a4fc70e5959 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 19:40:34 embed-certs-919237 crio[783]: time="2025-10-27T19:40:34.362495212Z" level=info msg="Creating container: default/busybox/busybox" id=0fb9b6d2-5547-4673-bddd-1fe01c08fe86 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 19:40:34 embed-certs-919237 crio[783]: time="2025-10-27T19:40:34.362689411Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:40:34 embed-certs-919237 crio[783]: time="2025-10-27T19:40:34.366485618Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:40:34 embed-certs-919237 crio[783]: time="2025-10-27T19:40:34.366998256Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:40:34 embed-certs-919237 crio[783]: time="2025-10-27T19:40:34.395759741Z" level=info msg="Created container b1f733c0848faef97b729e95bb76c281dae47715e87ce308cedd0f1734b42743: default/busybox/busybox" id=0fb9b6d2-5547-4673-bddd-1fe01c08fe86 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 19:40:34 embed-certs-919237 crio[783]: time="2025-10-27T19:40:34.396551713Z" level=info msg="Starting container: b1f733c0848faef97b729e95bb76c281dae47715e87ce308cedd0f1734b42743" id=c34031a5-f1bc-4052-861a-73a7e7366395 name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 19:40:34 embed-certs-919237 crio[783]: time="2025-10-27T19:40:34.398627915Z" level=info msg="Started container" PID=1896 containerID=b1f733c0848faef97b729e95bb76c281dae47715e87ce308cedd0f1734b42743 description=default/busybox/busybox id=c34031a5-f1bc-4052-861a-73a7e7366395 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5ba838b46098fdeeb61f7378b4309f292d1bd3f6b361fe488107b05a2f8a8d3b
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	b1f733c0848fa       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   5ba838b46098f       busybox                                      default
	6dd6ffc28ec4a       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      12 seconds ago      Running             coredns                   0                   d2f7274dde0a5       coredns-66bc5c9577-9b9tz                     kube-system
	c1b4602471585       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago      Running             storage-provisioner       0                   ddd29152b0993       storage-provisioner                          kube-system
	01664ea124334       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      23 seconds ago      Running             kindnet-cni               0                   e3e1d12726d10       kindnet-6jx4q                                kube-system
	0543933a4a2e1       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      23 seconds ago      Running             kube-proxy                0                   897268160e3f3       kube-proxy-rrq2h                             kube-system
	d149a3ca482c5       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      33 seconds ago      Running             kube-scheduler            0                   76ae58fb71711       kube-scheduler-embed-certs-919237            kube-system
	59232dd1cf29f       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      33 seconds ago      Running             kube-apiserver            0                   b21fd8619e657       kube-apiserver-embed-certs-919237            kube-system
	8a155ec47c356       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      33 seconds ago      Running             etcd                      0                   f505f1932d6d3       etcd-embed-certs-919237                      kube-system
	6bb23da600afd       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      33 seconds ago      Running             kube-controller-manager   0                   e3917083ea46e       kube-controller-manager-embed-certs-919237   kube-system
	
	
	==> coredns [6dd6ffc28ec4ac677b1c4567e0f4b6f64ec681289672b92c31a1db9bfc0edaa1] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:42840 - 1545 "HINFO IN 2257768398936304894.2458004723145398620. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.466420197s
	
	
	==> describe nodes <==
	Name:               embed-certs-919237
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-919237
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a09c31caecd3126cc8337ca95e52380a56f5a0f
	                    minikube.k8s.io/name=embed-certs-919237
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_27T19_40_14_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Oct 2025 19:40:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-919237
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Oct 2025 19:40:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Oct 2025 19:40:30 +0000   Mon, 27 Oct 2025 19:40:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Oct 2025 19:40:30 +0000   Mon, 27 Oct 2025 19:40:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Oct 2025 19:40:30 +0000   Mon, 27 Oct 2025 19:40:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Oct 2025 19:40:30 +0000   Mon, 27 Oct 2025 19:40:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    embed-certs-919237
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                2eeadcca-8dc6-4ff3-aae9-45c8a87361ee
	  Boot ID:                    811bd29c-e64e-4acc-9427-bab1f7caed93
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-9b9tz                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     24s
	  kube-system                 etcd-embed-certs-919237                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         30s
	  kube-system                 kindnet-6jx4q                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      24s
	  kube-system                 kube-apiserver-embed-certs-919237             250m (3%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-controller-manager-embed-certs-919237    200m (2%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-proxy-rrq2h                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                 kube-scheduler-embed-certs-919237             100m (1%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 23s   kube-proxy       
	  Normal  Starting                 30s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  30s   kubelet          Node embed-certs-919237 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    30s   kubelet          Node embed-certs-919237 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     30s   kubelet          Node embed-certs-919237 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           25s   node-controller  Node embed-certs-919237 event: Registered Node embed-certs-919237 in Controller
	  Normal  NodeReady                13s   kubelet          Node embed-certs-919237 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5a 23 52 43 9a ba 08 06
	[  +0.000398] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 12 50 95 0e df 53 08 06
	[Oct27 18:59] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +1.017295] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +1.023893] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +1.023882] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +1.023891] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +1.023851] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +2.047849] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +4.031592] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +8.319143] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[ +16.382183] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[Oct27 19:00] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	
	
	==> etcd [8a155ec47c35650bfbf07f2143c3b363a647a73f79da4bef6601b158c0582f5c] <==
	{"level":"warn","ts":"2025-10-27T19:40:10.442121Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33158","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:40:10.449782Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:40:10.457154Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:40:10.464152Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:40:10.472280Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33218","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:40:10.481129Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:40:10.489188Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:40:10.496778Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33270","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:40:10.506298Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33286","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:40:10.514332Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:40:10.522258Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:40:10.528843Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:40:10.536465Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:40:10.543459Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33374","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:40:10.551391Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:40:10.558489Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:40:10.565931Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:40:10.572661Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:40:10.579283Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:40:10.586331Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:40:10.600994Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:40:10.607182Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33500","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:40:10.613188Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:40:10.662820Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33564","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-27T19:40:32.851406Z","caller":"traceutil/trace.go:172","msg":"trace[1195073017] transaction","detail":"{read_only:false; response_revision:415; number_of_response:1; }","duration":"104.682328ms","start":"2025-10-27T19:40:32.746700Z","end":"2025-10-27T19:40:32.851382Z","steps":["trace[1195073017] 'process raft request'  (duration: 104.399834ms)"],"step_count":1}
	
	
	==> kernel <==
	 19:40:43 up  2:23,  0 user,  load average: 3.50, 3.23, 2.03
	Linux embed-certs-919237 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [01664ea1243342a5fd10f77aafcce770072329e29cbbe88900aa5b85d1a7c7ff] <==
	I1027 19:40:19.862928       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1027 19:40:19.865054       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1027 19:40:19.866086       1 main.go:148] setting mtu 1500 for CNI 
	I1027 19:40:19.866115       1 main.go:178] kindnetd IP family: "ipv4"
	I1027 19:40:19.866169       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-27T19:40:20Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1027 19:40:20.067675       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1027 19:40:20.067704       1 controller.go:381] "Waiting for informer caches to sync"
	I1027 19:40:20.067718       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1027 19:40:20.077276       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1027 19:40:20.376985       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1027 19:40:20.377337       1 metrics.go:72] Registering metrics
	I1027 19:40:20.378691       1 controller.go:711] "Syncing nftables rules"
	I1027 19:40:30.067239       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1027 19:40:30.067289       1 main.go:301] handling current node
	I1027 19:40:40.071222       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1027 19:40:40.071261       1 main.go:301] handling current node
	
	
	==> kube-apiserver [59232dd1cf29f004ffc8e8ad02d3f79951d79afa989108eb3414497e14924ce8] <==
	I1027 19:40:11.225211       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1027 19:40:11.227301       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1027 19:40:11.227338       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1027 19:40:11.231235       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1027 19:40:11.231684       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 19:40:11.235278       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 19:40:11.235383       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1027 19:40:11.406093       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1027 19:40:12.120570       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1027 19:40:12.124435       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1027 19:40:12.124454       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1027 19:40:12.617874       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1027 19:40:12.662805       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1027 19:40:12.715015       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1027 19:40:12.721896       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1027 19:40:12.722985       1 controller.go:667] quota admission added evaluator for: endpoints
	I1027 19:40:12.728492       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1027 19:40:13.121953       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1027 19:40:13.951740       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1027 19:40:13.969167       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1027 19:40:13.983736       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1027 19:40:18.930050       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1027 19:40:19.173474       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1027 19:40:19.230006       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 19:40:19.235156       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [6bb23da600afdfeaaddad6c9fcfefe251ee1cee9ac976ea585a017baa0cc6859] <==
	I1027 19:40:18.120622       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1027 19:40:18.120712       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1027 19:40:18.121501       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1027 19:40:18.121509       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1027 19:40:18.121558       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1027 19:40:18.121712       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1027 19:40:18.122039       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1027 19:40:18.122103       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1027 19:40:18.122627       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1027 19:40:18.123796       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1027 19:40:18.125009       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1027 19:40:18.125126       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1027 19:40:18.126198       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1027 19:40:18.129459       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1027 19:40:18.131748       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1027 19:40:18.131753       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 19:40:18.131830       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1027 19:40:18.131889       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1027 19:40:18.131900       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1027 19:40:18.131907       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1027 19:40:18.133934       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1027 19:40:18.139492       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-919237" podCIDRs=["10.244.0.0/24"]
	I1027 19:40:18.139506       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1027 19:40:18.160616       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 19:40:33.088532       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [0543933a4a2e1637dab60ccd2e6482777784d9688221fcd4323f0090a23d1643] <==
	I1027 19:40:19.647730       1 server_linux.go:53] "Using iptables proxy"
	I1027 19:40:19.714636       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1027 19:40:19.816010       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1027 19:40:19.816052       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1027 19:40:19.816172       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1027 19:40:19.856906       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1027 19:40:19.856982       1 server_linux.go:132] "Using iptables Proxier"
	I1027 19:40:19.865717       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1027 19:40:19.866094       1 server.go:527] "Version info" version="v1.34.1"
	I1027 19:40:19.866119       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 19:40:19.868254       1 config.go:200] "Starting service config controller"
	I1027 19:40:19.868268       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1027 19:40:19.868292       1 config.go:106] "Starting endpoint slice config controller"
	I1027 19:40:19.868297       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1027 19:40:19.868312       1 config.go:403] "Starting serviceCIDR config controller"
	I1027 19:40:19.868318       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1027 19:40:19.870341       1 config.go:309] "Starting node config controller"
	I1027 19:40:19.870428       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1027 19:40:19.968416       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1027 19:40:19.968510       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1027 19:40:19.968558       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1027 19:40:19.970903       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [d149a3ca482c54a44505422078d322fae2172392605b35816bf6a65310b3dfde] <==
	E1027 19:40:11.164753       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1027 19:40:11.164861       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1027 19:40:11.164869       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1027 19:40:11.165036       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1027 19:40:11.165069       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1027 19:40:11.165181       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1027 19:40:11.165583       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1027 19:40:11.165599       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1027 19:40:11.166044       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1027 19:40:11.166061       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1027 19:40:11.166174       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1027 19:40:11.166327       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1027 19:40:11.166319       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1027 19:40:12.035626       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1027 19:40:12.108033       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1027 19:40:12.118262       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1027 19:40:12.128517       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1027 19:40:12.135809       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1027 19:40:12.164841       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1027 19:40:12.231109       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1027 19:40:12.294671       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1027 19:40:12.299760       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1027 19:40:12.306888       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1027 19:40:12.407008       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	I1027 19:40:14.660498       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 27 19:40:14 embed-certs-919237 kubelet[1313]: I1027 19:40:14.890417    1313 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-embed-certs-919237" podStartSLOduration=1.8903978380000002 podStartE2EDuration="1.890397838s" podCreationTimestamp="2025-10-27 19:40:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 19:40:14.880354381 +0000 UTC m=+1.152861767" watchObservedRunningTime="2025-10-27 19:40:14.890397838 +0000 UTC m=+1.162905298"
	Oct 27 19:40:14 embed-certs-919237 kubelet[1313]: I1027 19:40:14.902520    1313 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-embed-certs-919237" podStartSLOduration=2.9024859640000003 podStartE2EDuration="2.902485964s" podCreationTimestamp="2025-10-27 19:40:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 19:40:14.890322479 +0000 UTC m=+1.162829860" watchObservedRunningTime="2025-10-27 19:40:14.902485964 +0000 UTC m=+1.174993351"
	Oct 27 19:40:14 embed-certs-919237 kubelet[1313]: I1027 19:40:14.915646    1313 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-embed-certs-919237" podStartSLOduration=1.915629244 podStartE2EDuration="1.915629244s" podCreationTimestamp="2025-10-27 19:40:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 19:40:14.91559764 +0000 UTC m=+1.188105025" watchObservedRunningTime="2025-10-27 19:40:14.915629244 +0000 UTC m=+1.188136609"
	Oct 27 19:40:14 embed-certs-919237 kubelet[1313]: I1027 19:40:14.915822    1313 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-embed-certs-919237" podStartSLOduration=1.91581235 podStartE2EDuration="1.91581235s" podCreationTimestamp="2025-10-27 19:40:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 19:40:14.902647472 +0000 UTC m=+1.175155036" watchObservedRunningTime="2025-10-27 19:40:14.91581235 +0000 UTC m=+1.188319735"
	Oct 27 19:40:18 embed-certs-919237 kubelet[1313]: I1027 19:40:18.187387    1313 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 27 19:40:18 embed-certs-919237 kubelet[1313]: I1027 19:40:18.188148    1313 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 27 19:40:19 embed-certs-919237 kubelet[1313]: I1027 19:40:19.248106    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/f346911c-5e04-4721-b4d8-c330f1629136-cni-cfg\") pod \"kindnet-6jx4q\" (UID: \"f346911c-5e04-4721-b4d8-c330f1629136\") " pod="kube-system/kindnet-6jx4q"
	Oct 27 19:40:19 embed-certs-919237 kubelet[1313]: I1027 19:40:19.248289    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f346911c-5e04-4721-b4d8-c330f1629136-xtables-lock\") pod \"kindnet-6jx4q\" (UID: \"f346911c-5e04-4721-b4d8-c330f1629136\") " pod="kube-system/kindnet-6jx4q"
	Oct 27 19:40:19 embed-certs-919237 kubelet[1313]: I1027 19:40:19.248328    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f346911c-5e04-4721-b4d8-c330f1629136-lib-modules\") pod \"kindnet-6jx4q\" (UID: \"f346911c-5e04-4721-b4d8-c330f1629136\") " pod="kube-system/kindnet-6jx4q"
	Oct 27 19:40:19 embed-certs-919237 kubelet[1313]: I1027 19:40:19.248420    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/afd63d93-c691-44d9-aa8e-73e522ea9369-lib-modules\") pod \"kube-proxy-rrq2h\" (UID: \"afd63d93-c691-44d9-aa8e-73e522ea9369\") " pod="kube-system/kube-proxy-rrq2h"
	Oct 27 19:40:19 embed-certs-919237 kubelet[1313]: I1027 19:40:19.248470    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmlk2\" (UniqueName: \"kubernetes.io/projected/afd63d93-c691-44d9-aa8e-73e522ea9369-kube-api-access-rmlk2\") pod \"kube-proxy-rrq2h\" (UID: \"afd63d93-c691-44d9-aa8e-73e522ea9369\") " pod="kube-system/kube-proxy-rrq2h"
	Oct 27 19:40:19 embed-certs-919237 kubelet[1313]: I1027 19:40:19.248495    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdnwb\" (UniqueName: \"kubernetes.io/projected/f346911c-5e04-4721-b4d8-c330f1629136-kube-api-access-gdnwb\") pod \"kindnet-6jx4q\" (UID: \"f346911c-5e04-4721-b4d8-c330f1629136\") " pod="kube-system/kindnet-6jx4q"
	Oct 27 19:40:19 embed-certs-919237 kubelet[1313]: I1027 19:40:19.248546    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/afd63d93-c691-44d9-aa8e-73e522ea9369-kube-proxy\") pod \"kube-proxy-rrq2h\" (UID: \"afd63d93-c691-44d9-aa8e-73e522ea9369\") " pod="kube-system/kube-proxy-rrq2h"
	Oct 27 19:40:19 embed-certs-919237 kubelet[1313]: I1027 19:40:19.248582    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/afd63d93-c691-44d9-aa8e-73e522ea9369-xtables-lock\") pod \"kube-proxy-rrq2h\" (UID: \"afd63d93-c691-44d9-aa8e-73e522ea9369\") " pod="kube-system/kube-proxy-rrq2h"
	Oct 27 19:40:19 embed-certs-919237 kubelet[1313]: I1027 19:40:19.892010    1313 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-6jx4q" podStartSLOduration=0.891985254 podStartE2EDuration="891.985254ms" podCreationTimestamp="2025-10-27 19:40:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 19:40:19.891058838 +0000 UTC m=+6.163566226" watchObservedRunningTime="2025-10-27 19:40:19.891985254 +0000 UTC m=+6.164492640"
	Oct 27 19:40:19 embed-certs-919237 kubelet[1313]: I1027 19:40:19.908211    1313 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rrq2h" podStartSLOduration=0.90818379 podStartE2EDuration="908.18379ms" podCreationTimestamp="2025-10-27 19:40:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 19:40:19.907910113 +0000 UTC m=+6.180417493" watchObservedRunningTime="2025-10-27 19:40:19.90818379 +0000 UTC m=+6.180691177"
	Oct 27 19:40:30 embed-certs-919237 kubelet[1313]: I1027 19:40:30.402472    1313 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 27 19:40:30 embed-certs-919237 kubelet[1313]: I1027 19:40:30.526992    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1f7cb1a7-6c91-4e4d-aecc-baaaa8f9bf22-config-volume\") pod \"coredns-66bc5c9577-9b9tz\" (UID: \"1f7cb1a7-6c91-4e4d-aecc-baaaa8f9bf22\") " pod="kube-system/coredns-66bc5c9577-9b9tz"
	Oct 27 19:40:30 embed-certs-919237 kubelet[1313]: I1027 19:40:30.527069    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nb6kd\" (UniqueName: \"kubernetes.io/projected/1f7cb1a7-6c91-4e4d-aecc-baaaa8f9bf22-kube-api-access-nb6kd\") pod \"coredns-66bc5c9577-9b9tz\" (UID: \"1f7cb1a7-6c91-4e4d-aecc-baaaa8f9bf22\") " pod="kube-system/coredns-66bc5c9577-9b9tz"
	Oct 27 19:40:30 embed-certs-919237 kubelet[1313]: I1027 19:40:30.527105    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/a73b7a4c-44bb-443e-af42-78c83e6b6852-tmp\") pod \"storage-provisioner\" (UID: \"a73b7a4c-44bb-443e-af42-78c83e6b6852\") " pod="kube-system/storage-provisioner"
	Oct 27 19:40:30 embed-certs-919237 kubelet[1313]: I1027 19:40:30.527172    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgkq8\" (UniqueName: \"kubernetes.io/projected/a73b7a4c-44bb-443e-af42-78c83e6b6852-kube-api-access-lgkq8\") pod \"storage-provisioner\" (UID: \"a73b7a4c-44bb-443e-af42-78c83e6b6852\") " pod="kube-system/storage-provisioner"
	Oct 27 19:40:30 embed-certs-919237 kubelet[1313]: I1027 19:40:30.918375    1313 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-9b9tz" podStartSLOduration=11.918351999 podStartE2EDuration="11.918351999s" podCreationTimestamp="2025-10-27 19:40:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 19:40:30.918307702 +0000 UTC m=+17.190815087" watchObservedRunningTime="2025-10-27 19:40:30.918351999 +0000 UTC m=+17.190859385"
	Oct 27 19:40:30 embed-certs-919237 kubelet[1313]: I1027 19:40:30.930043    1313 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=11.930019581 podStartE2EDuration="11.930019581s" podCreationTimestamp="2025-10-27 19:40:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 19:40:30.929910619 +0000 UTC m=+17.202418004" watchObservedRunningTime="2025-10-27 19:40:30.930019581 +0000 UTC m=+17.202526970"
	Oct 27 19:40:33 embed-certs-919237 kubelet[1313]: I1027 19:40:33.044328    1313 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbqlb\" (UniqueName: \"kubernetes.io/projected/ec9e6b8d-f937-4aee-b9b9-0131d28f83a9-kube-api-access-dbqlb\") pod \"busybox\" (UID: \"ec9e6b8d-f937-4aee-b9b9-0131d28f83a9\") " pod="default/busybox"
	Oct 27 19:40:34 embed-certs-919237 kubelet[1313]: I1027 19:40:34.932787    1313 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.908300862 podStartE2EDuration="2.932767999s" podCreationTimestamp="2025-10-27 19:40:32 +0000 UTC" firstStartedPulling="2025-10-27 19:40:33.333823441 +0000 UTC m=+19.606330805" lastFinishedPulling="2025-10-27 19:40:34.358290573 +0000 UTC m=+20.630797942" observedRunningTime="2025-10-27 19:40:34.932635543 +0000 UTC m=+21.205142931" watchObservedRunningTime="2025-10-27 19:40:34.932767999 +0000 UTC m=+21.205275387"
	
	
	==> storage-provisioner [c1b46024715852175ea489c45b2ce9796b61f39c17823708ff8e9cd75a554fb4] <==
	I1027 19:40:30.814955       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1027 19:40:30.824455       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1027 19:40:30.824531       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1027 19:40:30.827286       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:40:30.834524       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1027 19:40:30.834756       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1027 19:40:30.834946       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-919237_6e4ecac9-15b3-480b-94bb-290a7fa0fdac!
	I1027 19:40:30.835210       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ea57f8f9-31a7-4033-9918-213289abc41f", APIVersion:"v1", ResourceVersion:"403", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-919237_6e4ecac9-15b3-480b-94bb-290a7fa0fdac became leader
	W1027 19:40:30.839642       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:40:30.843404       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1027 19:40:30.935397       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-919237_6e4ecac9-15b3-480b-94bb-290a7fa0fdac!
	W1027 19:40:32.852890       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:40:32.857586       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:40:34.860942       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:40:34.865360       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:40:36.869279       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:40:36.874244       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:40:38.878447       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:40:38.883358       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:40:40.888445       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:40:40.897309       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:40:42.902345       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:40:42.907469       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-919237 -n embed-certs-919237
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-919237 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.83s)
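Context on the failure above: the kube-scheduler "Failed to watch ... is forbidden" errors at 19:40:11-19:40:12 are the usual transient RBAC errors emitted while the control plane is still bootstrapping; they stop once the informer caches sync at 19:40:14, so they are unlikely to be the root cause here. A manual way to re-check the scheduler's permissions after startup (a diagnostic sketch, assuming admin credentials for the embed-certs-919237 context that permit impersonation):

	kubectl --context embed-certs-919237 auth can-i list pods --all-namespaces --as=system:kube-scheduler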

TestStartStop/group/old-k8s-version/serial/Pause (7.18s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-468959 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p old-k8s-version-468959 --alsologtostderr -v=1: exit status 80 (2.34446221s)

-- stdout --
	* Pausing node old-k8s-version-468959 ... 
	
	

-- /stdout --
** stderr ** 
	I1027 19:41:12.318462  597688 out.go:360] Setting OutFile to fd 1 ...
	I1027 19:41:12.318778  597688 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:41:12.318792  597688 out.go:374] Setting ErrFile to fd 2...
	I1027 19:41:12.318797  597688 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:41:12.319047  597688 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-352833/.minikube/bin
	I1027 19:41:12.319444  597688 out.go:368] Setting JSON to false
	I1027 19:41:12.319517  597688 mustload.go:65] Loading cluster: old-k8s-version-468959
	I1027 19:41:12.319917  597688 config.go:182] Loaded profile config "old-k8s-version-468959": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1027 19:41:12.320591  597688 cli_runner.go:164] Run: docker container inspect old-k8s-version-468959 --format={{.State.Status}}
	I1027 19:41:12.339807  597688 host.go:66] Checking if "old-k8s-version-468959" exists ...
	I1027 19:41:12.340170  597688 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 19:41:12.401881  597688 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:77 OomKillDisable:false NGoroutines:86 SystemTime:2025-10-27 19:41:12.391440919 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 19:41:12.402636  597688 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-468959 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1027 19:41:12.404788  597688 out.go:179] * Pausing node old-k8s-version-468959 ... 
	I1027 19:41:12.406647  597688 host.go:66] Checking if "old-k8s-version-468959" exists ...
	I1027 19:41:12.406970  597688 ssh_runner.go:195] Run: systemctl --version
	I1027 19:41:12.407011  597688 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-468959
	I1027 19:41:12.426874  597688 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33435 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/old-k8s-version-468959/id_rsa Username:docker}
	I1027 19:41:12.529040  597688 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 19:41:12.542350  597688 pause.go:52] kubelet running: true
	I1027 19:41:12.542415  597688 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1027 19:41:12.712313  597688 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1027 19:41:12.712408  597688 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1027 19:41:12.783816  597688 cri.go:89] found id: "c3d66e2dd322da5d8554d09ea3b176065c6fe4ba6f6c1b0ca6612474fc69cd91"
	I1027 19:41:12.783841  597688 cri.go:89] found id: "2e436d82f10c9ab337c97fc80696a734a66eb15691f23ff94fdd4ad91ff89df5"
	I1027 19:41:12.783847  597688 cri.go:89] found id: "32ab77e9658d711ddb17ba898beed6884dc70565b485a14e92a38be93a33d1da"
	I1027 19:41:12.783851  597688 cri.go:89] found id: "2f249517b99aca10f8d7cbf2e67e155472a7f47554aaf0bd3f1fe9dc0c41d3f7"
	I1027 19:41:12.783855  597688 cri.go:89] found id: "b928c935db3996d4e2c0bd1959759b9d8b29154925458393549fc24c4cf387fb"
	I1027 19:41:12.783860  597688 cri.go:89] found id: "bbf4fe7bcb1eef6c19d02157f5f9d45ada6d926195550b86406cb27a478cb520"
	I1027 19:41:12.783863  597688 cri.go:89] found id: "07e72855c00ee996d65390930e95dec1dbf22e238c37a44a46a98ed17c3b0651"
	I1027 19:41:12.783867  597688 cri.go:89] found id: "ef7e54548205b2d8355417aebc97fb016764235b2b1f28d56a8dd8368f3a58d8"
	I1027 19:41:12.783870  597688 cri.go:89] found id: "1415820809db89899722d08ef65bea69fc0e930dddf7cc3246da3d0cf8f8ca35"
	I1027 19:41:12.783879  597688 cri.go:89] found id: "f90740a0e28b478c1a0658aadb18b23d89ba64b844c2ab857f4e83834b57f69b"
	I1027 19:41:12.783883  597688 cri.go:89] found id: "12d4f512371d8f5ce0f213cf3965c8a627febbdcc48831c69b8f3313bbdf87af"
	I1027 19:41:12.783886  597688 cri.go:89] found id: ""
	I1027 19:41:12.783935  597688 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 19:41:12.796948  597688 retry.go:31] will retry after 157.338878ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T19:41:12Z" level=error msg="open /run/runc: no such file or directory"
	I1027 19:41:12.955370  597688 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 19:41:12.969230  597688 pause.go:52] kubelet running: false
	I1027 19:41:12.969295  597688 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1027 19:41:13.114855  597688 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1027 19:41:13.114947  597688 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1027 19:41:13.186761  597688 cri.go:89] found id: "c3d66e2dd322da5d8554d09ea3b176065c6fe4ba6f6c1b0ca6612474fc69cd91"
	I1027 19:41:13.186785  597688 cri.go:89] found id: "2e436d82f10c9ab337c97fc80696a734a66eb15691f23ff94fdd4ad91ff89df5"
	I1027 19:41:13.186789  597688 cri.go:89] found id: "32ab77e9658d711ddb17ba898beed6884dc70565b485a14e92a38be93a33d1da"
	I1027 19:41:13.186792  597688 cri.go:89] found id: "2f249517b99aca10f8d7cbf2e67e155472a7f47554aaf0bd3f1fe9dc0c41d3f7"
	I1027 19:41:13.186795  597688 cri.go:89] found id: "b928c935db3996d4e2c0bd1959759b9d8b29154925458393549fc24c4cf387fb"
	I1027 19:41:13.186798  597688 cri.go:89] found id: "bbf4fe7bcb1eef6c19d02157f5f9d45ada6d926195550b86406cb27a478cb520"
	I1027 19:41:13.186801  597688 cri.go:89] found id: "07e72855c00ee996d65390930e95dec1dbf22e238c37a44a46a98ed17c3b0651"
	I1027 19:41:13.186803  597688 cri.go:89] found id: "ef7e54548205b2d8355417aebc97fb016764235b2b1f28d56a8dd8368f3a58d8"
	I1027 19:41:13.186807  597688 cri.go:89] found id: "1415820809db89899722d08ef65bea69fc0e930dddf7cc3246da3d0cf8f8ca35"
	I1027 19:41:13.186825  597688 cri.go:89] found id: "f90740a0e28b478c1a0658aadb18b23d89ba64b844c2ab857f4e83834b57f69b"
	I1027 19:41:13.186834  597688 cri.go:89] found id: "12d4f512371d8f5ce0f213cf3965c8a627febbdcc48831c69b8f3313bbdf87af"
	I1027 19:41:13.186839  597688 cri.go:89] found id: ""
	I1027 19:41:13.186880  597688 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 19:41:13.198924  597688 retry.go:31] will retry after 343.309148ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T19:41:13Z" level=error msg="open /run/runc: no such file or directory"
	I1027 19:41:13.542453  597688 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 19:41:13.568055  597688 pause.go:52] kubelet running: false
	I1027 19:41:13.568122  597688 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1027 19:41:13.746331  597688 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1027 19:41:13.746418  597688 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1027 19:41:13.831782  597688 cri.go:89] found id: "c3d66e2dd322da5d8554d09ea3b176065c6fe4ba6f6c1b0ca6612474fc69cd91"
	I1027 19:41:13.831823  597688 cri.go:89] found id: "2e436d82f10c9ab337c97fc80696a734a66eb15691f23ff94fdd4ad91ff89df5"
	I1027 19:41:13.831829  597688 cri.go:89] found id: "32ab77e9658d711ddb17ba898beed6884dc70565b485a14e92a38be93a33d1da"
	I1027 19:41:13.831833  597688 cri.go:89] found id: "2f249517b99aca10f8d7cbf2e67e155472a7f47554aaf0bd3f1fe9dc0c41d3f7"
	I1027 19:41:13.831837  597688 cri.go:89] found id: "b928c935db3996d4e2c0bd1959759b9d8b29154925458393549fc24c4cf387fb"
	I1027 19:41:13.831841  597688 cri.go:89] found id: "bbf4fe7bcb1eef6c19d02157f5f9d45ada6d926195550b86406cb27a478cb520"
	I1027 19:41:13.831845  597688 cri.go:89] found id: "07e72855c00ee996d65390930e95dec1dbf22e238c37a44a46a98ed17c3b0651"
	I1027 19:41:13.831848  597688 cri.go:89] found id: "ef7e54548205b2d8355417aebc97fb016764235b2b1f28d56a8dd8368f3a58d8"
	I1027 19:41:13.831852  597688 cri.go:89] found id: "1415820809db89899722d08ef65bea69fc0e930dddf7cc3246da3d0cf8f8ca35"
	I1027 19:41:13.831860  597688 cri.go:89] found id: "f90740a0e28b478c1a0658aadb18b23d89ba64b844c2ab857f4e83834b57f69b"
	I1027 19:41:13.831864  597688 cri.go:89] found id: "12d4f512371d8f5ce0f213cf3965c8a627febbdcc48831c69b8f3313bbdf87af"
	I1027 19:41:13.831869  597688 cri.go:89] found id: ""
	I1027 19:41:13.831922  597688 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 19:41:13.847069  597688 retry.go:31] will retry after 406.158117ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T19:41:13Z" level=error msg="open /run/runc: no such file or directory"
	I1027 19:41:14.254315  597688 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 19:41:14.270726  597688 pause.go:52] kubelet running: false
	I1027 19:41:14.270799  597688 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1027 19:41:14.467714  597688 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1027 19:41:14.467803  597688 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1027 19:41:14.558170  597688 cri.go:89] found id: "c3d66e2dd322da5d8554d09ea3b176065c6fe4ba6f6c1b0ca6612474fc69cd91"
	I1027 19:41:14.558198  597688 cri.go:89] found id: "2e436d82f10c9ab337c97fc80696a734a66eb15691f23ff94fdd4ad91ff89df5"
	I1027 19:41:14.558203  597688 cri.go:89] found id: "32ab77e9658d711ddb17ba898beed6884dc70565b485a14e92a38be93a33d1da"
	I1027 19:41:14.558206  597688 cri.go:89] found id: "2f249517b99aca10f8d7cbf2e67e155472a7f47554aaf0bd3f1fe9dc0c41d3f7"
	I1027 19:41:14.558208  597688 cri.go:89] found id: "b928c935db3996d4e2c0bd1959759b9d8b29154925458393549fc24c4cf387fb"
	I1027 19:41:14.558211  597688 cri.go:89] found id: "bbf4fe7bcb1eef6c19d02157f5f9d45ada6d926195550b86406cb27a478cb520"
	I1027 19:41:14.558214  597688 cri.go:89] found id: "07e72855c00ee996d65390930e95dec1dbf22e238c37a44a46a98ed17c3b0651"
	I1027 19:41:14.558216  597688 cri.go:89] found id: "ef7e54548205b2d8355417aebc97fb016764235b2b1f28d56a8dd8368f3a58d8"
	I1027 19:41:14.558218  597688 cri.go:89] found id: "1415820809db89899722d08ef65bea69fc0e930dddf7cc3246da3d0cf8f8ca35"
	I1027 19:41:14.558237  597688 cri.go:89] found id: "f90740a0e28b478c1a0658aadb18b23d89ba64b844c2ab857f4e83834b57f69b"
	I1027 19:41:14.558241  597688 cri.go:89] found id: "12d4f512371d8f5ce0f213cf3965c8a627febbdcc48831c69b8f3313bbdf87af"
	I1027 19:41:14.558245  597688 cri.go:89] found id: ""
	I1027 19:41:14.558400  597688 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 19:41:14.581188  597688 out.go:203] 
	W1027 19:41:14.582735  597688 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T19:41:14Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T19:41:14Z" level=error msg="open /run/runc: no such file or directory"
	
	W1027 19:41:14.582764  597688 out.go:285] * 
	* 
	W1027 19:41:14.590821  597688 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1027 19:41:14.593202  597688 out.go:203] 

** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p old-k8s-version-468959 --alsologtostderr -v=1 failed: exit status 80
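The root cause is visible in the stderr above: each pause attempt shells out to "sudo runc list -f json", which fails with "open /run/runc: no such file or directory", so minikube never obtains a container list to pause and gives up after its retries. To inspect this by hand one could check whether the runc state directory exists inside the node and compare with what the CRI runtime reports (a diagnostic sketch, assuming the old-k8s-version-468959 profile is still running):

	minikube ssh -p old-k8s-version-468959 -- sudo ls -la /run/runc
	minikube ssh -p old-k8s-version-468959 -- sudo crictl ps -a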
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-468959
helpers_test.go:243: (dbg) docker inspect old-k8s-version-468959:

-- stdout --
	[
	    {
	        "Id": "2e0353db62d926cc83bef0d3fa107c768d6d452b830c383908ae17268301278e",
	        "Created": "2025-10-27T19:38:59.515462878Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 585024,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-27T19:40:13.071868777Z",
	            "FinishedAt": "2025-10-27T19:40:12.058504283Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/2e0353db62d926cc83bef0d3fa107c768d6d452b830c383908ae17268301278e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2e0353db62d926cc83bef0d3fa107c768d6d452b830c383908ae17268301278e/hostname",
	        "HostsPath": "/var/lib/docker/containers/2e0353db62d926cc83bef0d3fa107c768d6d452b830c383908ae17268301278e/hosts",
	        "LogPath": "/var/lib/docker/containers/2e0353db62d926cc83bef0d3fa107c768d6d452b830c383908ae17268301278e/2e0353db62d926cc83bef0d3fa107c768d6d452b830c383908ae17268301278e-json.log",
	        "Name": "/old-k8s-version-468959",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-468959:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-468959",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2e0353db62d926cc83bef0d3fa107c768d6d452b830c383908ae17268301278e",
	                "LowerDir": "/var/lib/docker/overlay2/ce8ba90743d105752eb907923a1422d963b8a7959aac8ff55c461d4eb853b209-init/diff:/var/lib/docker/overlay2/71b61ec94610a35f2d924dec358052d4c154c36b3fe219802f60246ca2dc7f45/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ce8ba90743d105752eb907923a1422d963b8a7959aac8ff55c461d4eb853b209/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ce8ba90743d105752eb907923a1422d963b8a7959aac8ff55c461d4eb853b209/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ce8ba90743d105752eb907923a1422d963b8a7959aac8ff55c461d4eb853b209/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-468959",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-468959/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-468959",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-468959",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-468959",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5d361cca06cc890a42668988ef8b50ed4dbf136e7bb39c84b11dd19440fb41b0",
	            "SandboxKey": "/var/run/docker/netns/5d361cca06cc",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33435"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33436"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33439"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33437"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33438"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-468959": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7a:5e:a2:03:69:13",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0308d3f30614fde66189d573d65372f0d31056c699858ced2c5f17d155a2bb0c",
	                    "EndpointID": "e64542148d7f9afba07e099a8877475585ce3c508de9b014647a749f24271a36",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-468959",
	                        "2e0353db62d9"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
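The inspect output shows "Running": true and "Paused": false, i.e. the container itself was never paused; the failure happened inside the guest rather than at the Docker layer. The same two fields can be pulled directly with a Go template (equivalent to the full inspect above, just narrower):

	docker inspect -f 'status={{.State.Status}} paused={{.State.Paused}}' old-k8s-version-468959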
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-468959 -n old-k8s-version-468959
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-468959 -n old-k8s-version-468959: exit status 2 (461.464701ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-468959 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-468959 logs -n 25: (1.557610339s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                              ARGS                                                                               │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ mount     │ -p functional-051715 --kill=true                                                                                                                                │ functional-051715      │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │                     │
	│ ssh       │ functional-051715 ssh echo hello                                                                                                                                │ functional-051715      │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │ 27 Oct 25 19:04 UTC │
	│ ssh       │ functional-051715 ssh cat /etc/hostname                                                                                                                         │ functional-051715      │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │ 27 Oct 25 19:04 UTC │
	│ tunnel    │ functional-051715 tunnel --alsologtostderr                                                                                                                      │ functional-051715      │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │                     │
	│ tunnel    │ functional-051715 tunnel --alsologtostderr                                                                                                                      │ functional-051715      │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │                     │
	│ stop      │ -p embed-certs-919237 --alsologtostderr -v=3                                                                                                                    │ embed-certs-919237     │ jenkins │ v1.37.0 │ 27 Oct 25 19:40 UTC │ 27 Oct 25 19:41 UTC │
	│ tunnel    │ functional-051715 tunnel --alsologtostderr                                                                                                                      │ functional-051715      │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-051715 --alsologtostderr -v=1                                                                                                  │ functional-051715      │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │ 27 Oct 25 19:04 UTC │
	│ start     │ -p functional-051715 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio                                                       │ functional-051715      │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │                     │
	│ start     │ -p functional-051715 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                 │ functional-051715      │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │                     │
	│ addons    │ functional-051715 addons list                                                                                                                                   │ functional-051715      │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │ 27 Oct 25 19:04 UTC │
	│ addons    │ functional-051715 addons list -o json                                                                                                                           │ functional-051715      │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │ 27 Oct 25 19:04 UTC │
	│ image     │ functional-051715 image load --daemon kicbase/echo-server:functional-051715 --alsologtostderr                                                                   │ functional-051715      │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │ 27 Oct 25 19:04 UTC │
	│ image     │ functional-051715 image ls                                                                                                                                      │ functional-051715      │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │ 27 Oct 25 19:04 UTC │
	│ image     │ functional-051715 image load --daemon kicbase/echo-server:functional-051715 --alsologtostderr                                                                   │ functional-051715      │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │ 27 Oct 25 19:04 UTC │
	│ image     │ functional-051715 image ls                                                                                                                                      │ functional-051715      │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │ 27 Oct 25 19:04 UTC │
	│ image     │ functional-051715 image load --daemon kicbase/echo-server:functional-051715 --alsologtostderr                                                                   │ functional-051715      │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │ 27 Oct 25 19:04 UTC │
	│ image     │ functional-051715 image ls                                                                                                                                      │ functional-051715      │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │ 27 Oct 25 19:04 UTC │
	│ image     │ functional-051715 image save kicbase/echo-server:functional-051715 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr │ functional-051715      │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │ 27 Oct 25 19:04 UTC │
	│ image     │ functional-051715 image rm kicbase/echo-server:functional-051715 --alsologtostderr                                                                              │ functional-051715      │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │ 27 Oct 25 19:04 UTC │
	│ addons    │ enable dashboard -p embed-certs-919237 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                   │ embed-certs-919237     │ jenkins │ v1.37.0 │ 27 Oct 25 19:41 UTC │ 27 Oct 25 19:41 UTC │
	│ start     │ -p embed-certs-919237 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1          │ embed-certs-919237     │ jenkins │ v1.37.0 │ 27 Oct 25 19:41 UTC │                     │
	│ image     │ old-k8s-version-468959 image list --format=json                                                                                                                 │ old-k8s-version-468959 │ jenkins │ v1.37.0 │ 27 Oct 25 19:41 UTC │ 27 Oct 25 19:41 UTC │
	│ pause     │ -p old-k8s-version-468959 --alsologtostderr -v=1                                                                                                                │ old-k8s-version-468959 │ jenkins │ v1.37.0 │ 27 Oct 25 19:41 UTC │                     │
	│ addons    │ enable metrics-server -p no-preload-095885 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                         │ no-preload-095885      │ jenkins │ v1.37.0 │ 27 Oct 25 19:41 UTC │                     │
	└───────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 19:41:00
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 19:41:00.814297  594803 out.go:360] Setting OutFile to fd 1 ...
	I1027 19:41:00.814654  594803 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:41:00.814666  594803 out.go:374] Setting ErrFile to fd 2...
	I1027 19:41:00.814672  594803 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:41:00.815019  594803 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-352833/.minikube/bin
	I1027 19:41:00.815611  594803 out.go:368] Setting JSON to false
	I1027 19:41:00.819938  594803 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":8610,"bootTime":1761585451,"procs":357,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1027 19:41:00.820105  594803 start.go:141] virtualization: kvm guest
	I1027 19:41:00.822276  594803 out.go:179] * [embed-certs-919237] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1027 19:41:00.824552  594803 notify.go:220] Checking for updates...
	I1027 19:41:00.824589  594803 out.go:179]   - MINIKUBE_LOCATION=21801
	I1027 19:41:00.825920  594803 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 19:41:00.827493  594803 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21801-352833/kubeconfig
	I1027 19:41:00.829068  594803 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21801-352833/.minikube
	I1027 19:41:00.830346  594803 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1027 19:41:00.831676  594803 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 19:41:00.833634  594803 config.go:182] Loaded profile config "embed-certs-919237": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:41:00.834328  594803 driver.go:421] Setting default libvirt URI to qemu:///system
	I1027 19:41:00.865817  594803 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1027 19:41:00.865940  594803 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 19:41:00.939681  594803 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-27 19:41:00.928512266 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 19:41:00.939791  594803 docker.go:318] overlay module found
	I1027 19:41:00.942901  594803 out.go:179] * Using the docker driver based on existing profile
	I1027 19:41:00.944254  594803 start.go:305] selected driver: docker
	I1027 19:41:00.944276  594803 start.go:925] validating driver "docker" against &{Name:embed-certs-919237 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-919237 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 19:41:00.944438  594803 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 19:41:00.945045  594803 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 19:41:01.009596  594803 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-27 19:41:00.998454107 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 19:41:01.009899  594803 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 19:41:01.009935  594803 cni.go:84] Creating CNI manager for ""
	I1027 19:41:01.009994  594803 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 19:41:01.010033  594803 start.go:349] cluster config:
	{Name:embed-certs-919237 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-919237 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
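The cluster config above is what minikube persists as JSON under the profile directory (the config.json path saved a few lines below). A trimmed sketch of reading it back; the struct covers only a handful of the fields visible in the dump and is illustrative, not minikube's actual config type:

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// Profile is a deliberately small subset of the fields visible in the
// cluster config dump above.
type Profile struct {
	Name             string
	Driver           string
	Memory           int
	KubernetesConfig struct {
		KubernetesVersion string
		ClusterName       string
		ContainerRuntime  string
	}
}

func main() {
	raw, err := os.ReadFile("config.json") // lives under the profile directory
	if err != nil {
		fmt.Println("read:", err)
		return
	}
	var p Profile
	if err := json.Unmarshal(raw, &p); err != nil {
		fmt.Println("decode:", err)
		return
	}
	fmt.Printf("%s: driver=%s runtime=%s k8s=%s\n",
		p.Name, p.Driver, p.KubernetesConfig.ContainerRuntime,
		p.KubernetesConfig.KubernetesVersion)
}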
	I1027 19:41:01.012102  594803 out.go:179] * Starting "embed-certs-919237" primary control-plane node in "embed-certs-919237" cluster
	I1027 19:41:01.013642  594803 cache.go:123] Beginning downloading kic base image for docker with crio
	I1027 19:41:01.015027  594803 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1027 19:41:01.016245  594803 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 19:41:01.016338  594803 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21801-352833/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1027 19:41:01.016364  594803 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1027 19:41:01.016374  594803 cache.go:58] Caching tarball of preloaded images
	I1027 19:41:01.016491  594803 preload.go:233] Found /home/jenkins/minikube-integration/21801-352833/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1027 19:41:01.016508  594803 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1027 19:41:01.016671  594803 profile.go:143] Saving config to /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/embed-certs-919237/config.json ...
	I1027 19:41:01.043736  594803 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1027 19:41:01.043771  594803 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1027 19:41:01.043794  594803 cache.go:232] Successfully downloaded all kic artifacts
	I1027 19:41:01.043828  594803 start.go:360] acquireMachinesLock for embed-certs-919237: {Name:mka6dd5e9788015cfc40a76e0480af6167e6c17e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 19:41:01.043925  594803 start.go:364] duration metric: took 53.412µs to acquireMachinesLock for "embed-certs-919237"
	I1027 19:41:01.043948  594803 start.go:96] Skipping create...Using existing machine configuration
	I1027 19:41:01.043956  594803 fix.go:54] fixHost starting: 
	I1027 19:41:01.044294  594803 cli_runner.go:164] Run: docker container inspect embed-certs-919237 --format={{.State.Status}}
	I1027 19:41:01.063875  594803 fix.go:112] recreateIfNeeded on embed-certs-919237: state=Stopped err=<nil>
	W1027 19:41:01.063922  594803 fix.go:138] unexpected machine state, will restart: <nil>
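fixHost's logic here is inspect-then-start: query the container state (Stopped above), and because the machine exists but is not running, restart it rather than recreate it (the docker start call appears further down). The same flow reduced to plain os/exec, as an illustrative sketch rather than minikube's kic driver code:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// ensureRunning mirrors the inspect-then-start sequence in the log above.
func ensureRunning(name string) error {
	out, err := exec.Command("docker", "container", "inspect",
		"--format={{.State.Status}}", name).Output()
	if err != nil {
		return fmt.Errorf("inspect: %w", err)
	}
	// Docker reports lowercase states ("running", "exited", "created", ...);
	// anything not running gets a `docker start`.
	if strings.TrimSpace(string(out)) == "running" {
		return nil
	}
	return exec.Command("docker", "start", name).Run()
}

func main() {
	if err := ensureRunning("embed-certs-919237"); err != nil {
		fmt.Println("error:", err)
	}
}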
	I1027 19:40:58.026030  565798 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1027 19:40:58.026613  565798 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1027 19:40:58.026685  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 19:40:58.026737  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 19:40:58.057129  565798 cri.go:89] found id: "f34fc0e2944cfceb17dc29d80d9459a84377ffb9e427b9efe6e57ccc03a358c8"
	I1027 19:40:58.057167  565798 cri.go:89] found id: ""
	I1027 19:40:58.057177  565798 logs.go:282] 1 containers: [f34fc0e2944cfceb17dc29d80d9459a84377ffb9e427b9efe6e57ccc03a358c8]
	I1027 19:40:58.057246  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:40:58.061704  565798 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 19:40:58.061775  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 19:40:58.090405  565798 cri.go:89] found id: ""
	I1027 19:40:58.090438  565798 logs.go:282] 0 containers: []
	W1027 19:40:58.090450  565798 logs.go:284] No container was found matching "etcd"
	I1027 19:40:58.090459  565798 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 19:40:58.090524  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 19:40:58.120023  565798 cri.go:89] found id: ""
	I1027 19:40:58.120053  565798 logs.go:282] 0 containers: []
	W1027 19:40:58.120064  565798 logs.go:284] No container was found matching "coredns"
	I1027 19:40:58.120074  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 19:40:58.120150  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 19:40:58.150017  565798 cri.go:89] found id: "15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8"
	I1027 19:40:58.150043  565798 cri.go:89] found id: ""
	I1027 19:40:58.150052  565798 logs.go:282] 1 containers: [15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8]
	I1027 19:40:58.150108  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:40:58.154647  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 19:40:58.154712  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 19:40:58.183854  565798 cri.go:89] found id: ""
	I1027 19:40:58.183879  565798 logs.go:282] 0 containers: []
	W1027 19:40:58.183888  565798 logs.go:284] No container was found matching "kube-proxy"
	I1027 19:40:58.183894  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 19:40:58.183943  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 19:40:58.212083  565798 cri.go:89] found id: "38a297db55d8c45b09d97e05e26cfe590841ea63fa7ee874cf818f8ae5fcff77"
	I1027 19:40:58.212102  565798 cri.go:89] found id: "df060ac929bc7a5dac337c7e85e10b2f4a51413be70b8202c8307826c4a72947"
	I1027 19:40:58.212106  565798 cri.go:89] found id: ""
	I1027 19:40:58.212114  565798 logs.go:282] 2 containers: [38a297db55d8c45b09d97e05e26cfe590841ea63fa7ee874cf818f8ae5fcff77 df060ac929bc7a5dac337c7e85e10b2f4a51413be70b8202c8307826c4a72947]
	I1027 19:40:58.212185  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:40:58.216480  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:40:58.220450  565798 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 19:40:58.220522  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 19:40:58.249431  565798 cri.go:89] found id: ""
	I1027 19:40:58.249455  565798 logs.go:282] 0 containers: []
	W1027 19:40:58.249463  565798 logs.go:284] No container was found matching "kindnet"
	I1027 19:40:58.249469  565798 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 19:40:58.249515  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 19:40:58.278301  565798 cri.go:89] found id: ""
	I1027 19:40:58.278327  565798 logs.go:282] 0 containers: []
	W1027 19:40:58.278334  565798 logs.go:284] No container was found matching "storage-provisioner"
	I1027 19:40:58.278352  565798 logs.go:123] Gathering logs for kubelet ...
	I1027 19:40:58.278366  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 19:40:58.361232  565798 logs.go:123] Gathering logs for dmesg ...
	I1027 19:40:58.361276  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 19:40:58.384714  565798 logs.go:123] Gathering logs for kube-controller-manager [df060ac929bc7a5dac337c7e85e10b2f4a51413be70b8202c8307826c4a72947] ...
	I1027 19:40:58.384753  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 df060ac929bc7a5dac337c7e85e10b2f4a51413be70b8202c8307826c4a72947"
	I1027 19:40:58.415348  565798 logs.go:123] Gathering logs for CRI-O ...
	I1027 19:40:58.415382  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 19:40:58.463651  565798 logs.go:123] Gathering logs for container status ...
	I1027 19:40:58.463690  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 19:40:58.498078  565798 logs.go:123] Gathering logs for describe nodes ...
	I1027 19:40:58.498125  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 19:40:58.558995  565798 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 19:40:58.559018  565798 logs.go:123] Gathering logs for kube-apiserver [f34fc0e2944cfceb17dc29d80d9459a84377ffb9e427b9efe6e57ccc03a358c8] ...
	I1027 19:40:58.559035  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f34fc0e2944cfceb17dc29d80d9459a84377ffb9e427b9efe6e57ccc03a358c8"
	I1027 19:40:58.594584  565798 logs.go:123] Gathering logs for kube-scheduler [15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8] ...
	I1027 19:40:58.594625  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8"
	I1027 19:40:58.645514  565798 logs.go:123] Gathering logs for kube-controller-manager [38a297db55d8c45b09d97e05e26cfe590841ea63fa7ee874cf818f8ae5fcff77] ...
	I1027 19:40:58.645551  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 38a297db55d8c45b09d97e05e26cfe590841ea63fa7ee874cf818f8ae5fcff77"
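Each "Gathering logs for ..." step above is the same two crictl calls: ps -a --quiet --name=<component> to resolve container IDs, then logs --tail 400 per ID. A compact sketch of that loop, assuming crictl is installed and sudo is available (illustrative):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// tailComponentLogs resolves container IDs for a component name and tails
// each one, mirroring the post-mortem gathering in the log above.
func tailComponentLogs(component string, lines int) error {
	ids, err := exec.Command("sudo", "crictl", "ps", "-a",
		"--quiet", "--name="+component).Output()
	if err != nil {
		return fmt.Errorf("list %s: %w", component, err)
	}
	for _, id := range strings.Fields(string(ids)) {
		logs, err := exec.Command("sudo", "crictl", "logs",
			"--tail", fmt.Sprint(lines), id).CombinedOutput()
		if err != nil {
			return fmt.Errorf("logs %s: %w", id, err)
		}
		fmt.Printf("== %s [%s] ==\n%s", component, id, logs)
	}
	return nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "kube-scheduler", "kube-controller-manager"} {
		if err := tailComponentLogs(c, 400); err != nil {
			fmt.Println(err)
		}
	}
}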
	I1027 19:41:01.178225  565798 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1027 19:41:01.178694  565798 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1027 19:41:01.178745  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 19:41:01.178791  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 19:41:01.210901  565798 cri.go:89] found id: "f34fc0e2944cfceb17dc29d80d9459a84377ffb9e427b9efe6e57ccc03a358c8"
	I1027 19:41:01.210925  565798 cri.go:89] found id: ""
	I1027 19:41:01.210936  565798 logs.go:282] 1 containers: [f34fc0e2944cfceb17dc29d80d9459a84377ffb9e427b9efe6e57ccc03a358c8]
	I1027 19:41:01.211006  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:41:01.215571  565798 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 19:41:01.215658  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 19:41:01.247466  565798 cri.go:89] found id: ""
	I1027 19:41:01.247503  565798 logs.go:282] 0 containers: []
	W1027 19:41:01.247514  565798 logs.go:284] No container was found matching "etcd"
	I1027 19:41:01.247523  565798 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 19:41:01.247591  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 19:41:01.281986  565798 cri.go:89] found id: ""
	I1027 19:41:01.282024  565798 logs.go:282] 0 containers: []
	W1027 19:41:01.282036  565798 logs.go:284] No container was found matching "coredns"
	I1027 19:41:01.282044  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 19:41:01.282106  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 19:41:01.312897  565798 cri.go:89] found id: "15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8"
	I1027 19:41:01.312929  565798 cri.go:89] found id: ""
	I1027 19:41:01.312940  565798 logs.go:282] 1 containers: [15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8]
	I1027 19:41:01.313010  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:41:01.317732  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 19:41:01.317823  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 19:41:01.349672  565798 cri.go:89] found id: ""
	I1027 19:41:01.349702  565798 logs.go:282] 0 containers: []
	W1027 19:41:01.349714  565798 logs.go:284] No container was found matching "kube-proxy"
	I1027 19:41:01.349722  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 19:41:01.349783  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 19:41:01.383805  565798 cri.go:89] found id: "38a297db55d8c45b09d97e05e26cfe590841ea63fa7ee874cf818f8ae5fcff77"
	I1027 19:41:01.383830  565798 cri.go:89] found id: ""
	I1027 19:41:01.383842  565798 logs.go:282] 1 containers: [38a297db55d8c45b09d97e05e26cfe590841ea63fa7ee874cf818f8ae5fcff77]
	I1027 19:41:01.383906  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:41:01.388901  565798 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 19:41:01.388976  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 19:41:01.421041  565798 cri.go:89] found id: ""
	I1027 19:41:01.421066  565798 logs.go:282] 0 containers: []
	W1027 19:41:01.421074  565798 logs.go:284] No container was found matching "kindnet"
	I1027 19:41:01.421082  565798 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 19:41:01.421184  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 19:41:01.451707  565798 cri.go:89] found id: ""
	I1027 19:41:01.451736  565798 logs.go:282] 0 containers: []
	W1027 19:41:01.451744  565798 logs.go:284] No container was found matching "storage-provisioner"
	I1027 19:41:01.451754  565798 logs.go:123] Gathering logs for CRI-O ...
	I1027 19:41:01.451766  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 19:41:01.510573  565798 logs.go:123] Gathering logs for container status ...
	I1027 19:41:01.510618  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1027 19:41:00.819934  585556 node_ready.go:57] node "no-preload-095885" has "Ready":"False" status (will retry)
	I1027 19:41:02.819169  585556 node_ready.go:49] node "no-preload-095885" is "Ready"
	I1027 19:41:02.819209  585556 node_ready.go:38] duration metric: took 13.003808085s for node "no-preload-095885" to be "Ready" ...
	I1027 19:41:02.819229  585556 api_server.go:52] waiting for apiserver process to appear ...
	I1027 19:41:02.819306  585556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 19:41:02.833188  585556 api_server.go:72] duration metric: took 13.35947841s to wait for apiserver process to appear ...
	I1027 19:41:02.833220  585556 api_server.go:88] waiting for apiserver healthz status ...
	I1027 19:41:02.833241  585556 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1027 19:41:02.838750  585556 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
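The healthz wait is a plain HTTPS GET against the apiserver, repeated until connection-refused turns into 200/ok as it just did above. A minimal probe in the same spirit; TLS verification is skipped here only so the sketch can reach the apiserver's self-signed certificate, and a real client would trust the cluster CA instead:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// Skipping verification is an assumption made for this sketch.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for i := 0; i < 30; i++ {
		resp, err := client.Get("https://192.168.76.2:8443/healthz")
		if err != nil {
			fmt.Println("not ready:", err) // e.g. connection refused, as above
			time.Sleep(2 * time.Second)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		fmt.Printf("healthz %d: %s\n", resp.StatusCode, body) // expect 200: ok
		return
	}
}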
	I1027 19:41:02.839890  585556 api_server.go:141] control plane version: v1.34.1
	I1027 19:41:02.839920  585556 api_server.go:131] duration metric: took 6.693245ms to wait for apiserver health ...
	I1027 19:41:02.839930  585556 system_pods.go:43] waiting for kube-system pods to appear ...
	I1027 19:41:02.843755  585556 system_pods.go:59] 8 kube-system pods found
	I1027 19:41:02.843791  585556 system_pods.go:61] "coredns-66bc5c9577-gwqvg" [3bcd75c1-f42f-4252-b1fc-2bdab3c8373e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 19:41:02.843797  585556 system_pods.go:61] "etcd-no-preload-095885" [398272ac-d5cc-44d6-bf2a-3469d316b417] Running
	I1027 19:41:02.843803  585556 system_pods.go:61] "kindnet-8lbz5" [42b05fb3-87d3-412f-ac73-cb73a737aab1] Running
	I1027 19:41:02.843807  585556 system_pods.go:61] "kube-apiserver-no-preload-095885" [d609db88-4097-43b5-b881-a445344edf64] Running
	I1027 19:41:02.843811  585556 system_pods.go:61] "kube-controller-manager-no-preload-095885" [b1bfd486-ed1f-4f8b-a08b-de7739f1dd9c] Running
	I1027 19:41:02.843814  585556 system_pods.go:61] "kube-proxy-wz64m" [339cb07c-5319-4d8b-ab61-a6d377c2bc61] Running
	I1027 19:41:02.843817  585556 system_pods.go:61] "kube-scheduler-no-preload-095885" [7ba1709a-c913-40f3-833b-bee63057ce6e] Running
	I1027 19:41:02.843822  585556 system_pods.go:61] "storage-provisioner" [e8283562-be98-444b-b591-a0239860e729] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 19:41:02.843829  585556 system_pods.go:74] duration metric: took 3.89196ms to wait for pod list to return data ...
	I1027 19:41:02.843841  585556 default_sa.go:34] waiting for default service account to be created ...
	I1027 19:41:02.846583  585556 default_sa.go:45] found service account: "default"
	I1027 19:41:02.846611  585556 default_sa.go:55] duration metric: took 2.763753ms for default service account to be created ...
	I1027 19:41:02.846622  585556 system_pods.go:116] waiting for k8s-apps to be running ...
	I1027 19:41:02.849879  585556 system_pods.go:86] 8 kube-system pods found
	I1027 19:41:02.849914  585556 system_pods.go:89] "coredns-66bc5c9577-gwqvg" [3bcd75c1-f42f-4252-b1fc-2bdab3c8373e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 19:41:02.849920  585556 system_pods.go:89] "etcd-no-preload-095885" [398272ac-d5cc-44d6-bf2a-3469d316b417] Running
	I1027 19:41:02.849926  585556 system_pods.go:89] "kindnet-8lbz5" [42b05fb3-87d3-412f-ac73-cb73a737aab1] Running
	I1027 19:41:02.849930  585556 system_pods.go:89] "kube-apiserver-no-preload-095885" [d609db88-4097-43b5-b881-a445344edf64] Running
	I1027 19:41:02.849935  585556 system_pods.go:89] "kube-controller-manager-no-preload-095885" [b1bfd486-ed1f-4f8b-a08b-de7739f1dd9c] Running
	I1027 19:41:02.849938  585556 system_pods.go:89] "kube-proxy-wz64m" [339cb07c-5319-4d8b-ab61-a6d377c2bc61] Running
	I1027 19:41:02.849942  585556 system_pods.go:89] "kube-scheduler-no-preload-095885" [7ba1709a-c913-40f3-833b-bee63057ce6e] Running
	I1027 19:41:02.849946  585556 system_pods.go:89] "storage-provisioner" [e8283562-be98-444b-b591-a0239860e729] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 19:41:02.849981  585556 retry.go:31] will retry after 208.530125ms: missing components: kube-dns
	I1027 19:41:03.063213  585556 system_pods.go:86] 8 kube-system pods found
	I1027 19:41:03.063246  585556 system_pods.go:89] "coredns-66bc5c9577-gwqvg" [3bcd75c1-f42f-4252-b1fc-2bdab3c8373e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 19:41:03.063252  585556 system_pods.go:89] "etcd-no-preload-095885" [398272ac-d5cc-44d6-bf2a-3469d316b417] Running
	I1027 19:41:03.063259  585556 system_pods.go:89] "kindnet-8lbz5" [42b05fb3-87d3-412f-ac73-cb73a737aab1] Running
	I1027 19:41:03.063269  585556 system_pods.go:89] "kube-apiserver-no-preload-095885" [d609db88-4097-43b5-b881-a445344edf64] Running
	I1027 19:41:03.063273  585556 system_pods.go:89] "kube-controller-manager-no-preload-095885" [b1bfd486-ed1f-4f8b-a08b-de7739f1dd9c] Running
	I1027 19:41:03.063277  585556 system_pods.go:89] "kube-proxy-wz64m" [339cb07c-5319-4d8b-ab61-a6d377c2bc61] Running
	I1027 19:41:03.063283  585556 system_pods.go:89] "kube-scheduler-no-preload-095885" [7ba1709a-c913-40f3-833b-bee63057ce6e] Running
	I1027 19:41:03.063290  585556 system_pods.go:89] "storage-provisioner" [e8283562-be98-444b-b591-a0239860e729] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 19:41:03.063312  585556 retry.go:31] will retry after 387.065987ms: missing components: kube-dns
	I1027 19:41:03.454191  585556 system_pods.go:86] 8 kube-system pods found
	I1027 19:41:03.454223  585556 system_pods.go:89] "coredns-66bc5c9577-gwqvg" [3bcd75c1-f42f-4252-b1fc-2bdab3c8373e] Running
	I1027 19:41:03.454229  585556 system_pods.go:89] "etcd-no-preload-095885" [398272ac-d5cc-44d6-bf2a-3469d316b417] Running
	I1027 19:41:03.454233  585556 system_pods.go:89] "kindnet-8lbz5" [42b05fb3-87d3-412f-ac73-cb73a737aab1] Running
	I1027 19:41:03.454236  585556 system_pods.go:89] "kube-apiserver-no-preload-095885" [d609db88-4097-43b5-b881-a445344edf64] Running
	I1027 19:41:03.454241  585556 system_pods.go:89] "kube-controller-manager-no-preload-095885" [b1bfd486-ed1f-4f8b-a08b-de7739f1dd9c] Running
	I1027 19:41:03.454244  585556 system_pods.go:89] "kube-proxy-wz64m" [339cb07c-5319-4d8b-ab61-a6d377c2bc61] Running
	I1027 19:41:03.454248  585556 system_pods.go:89] "kube-scheduler-no-preload-095885" [7ba1709a-c913-40f3-833b-bee63057ce6e] Running
	I1027 19:41:03.454251  585556 system_pods.go:89] "storage-provisioner" [e8283562-be98-444b-b591-a0239860e729] Running
	I1027 19:41:03.454261  585556 system_pods.go:126] duration metric: took 607.631414ms to wait for k8s-apps to be running ...
	I1027 19:41:03.454271  585556 system_svc.go:44] waiting for kubelet service to be running ....
	I1027 19:41:03.454342  585556 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 19:41:03.469661  585556 system_svc.go:56] duration metric: took 15.375165ms WaitForService to wait for kubelet
	I1027 19:41:03.469692  585556 kubeadm.go:586] duration metric: took 13.995993942s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 19:41:03.469713  585556 node_conditions.go:102] verifying NodePressure condition ...
	I1027 19:41:03.473051  585556 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1027 19:41:03.473084  585556 node_conditions.go:123] node cpu capacity is 8
	I1027 19:41:03.473098  585556 node_conditions.go:105] duration metric: took 3.378892ms to run NodePressure ...
	I1027 19:41:03.473110  585556 start.go:241] waiting for startup goroutines ...
	I1027 19:41:03.473116  585556 start.go:246] waiting for cluster config update ...
	I1027 19:41:03.473127  585556 start.go:255] writing updated cluster config ...
	I1027 19:41:03.473547  585556 ssh_runner.go:195] Run: rm -f paused
	I1027 19:41:03.478479  585556 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 19:41:03.482432  585556 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-gwqvg" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:41:03.487649  585556 pod_ready.go:94] pod "coredns-66bc5c9577-gwqvg" is "Ready"
	I1027 19:41:03.487680  585556 pod_ready.go:86] duration metric: took 5.219183ms for pod "coredns-66bc5c9577-gwqvg" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:41:03.489989  585556 pod_ready.go:83] waiting for pod "etcd-no-preload-095885" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:41:03.494299  585556 pod_ready.go:94] pod "etcd-no-preload-095885" is "Ready"
	I1027 19:41:03.494327  585556 pod_ready.go:86] duration metric: took 4.312641ms for pod "etcd-no-preload-095885" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:41:03.496451  585556 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-095885" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:41:03.500973  585556 pod_ready.go:94] pod "kube-apiserver-no-preload-095885" is "Ready"
	I1027 19:41:03.501001  585556 pod_ready.go:86] duration metric: took 4.521998ms for pod "kube-apiserver-no-preload-095885" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:41:03.503226  585556 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-095885" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:41:03.883037  585556 pod_ready.go:94] pod "kube-controller-manager-no-preload-095885" is "Ready"
	I1027 19:41:03.883068  585556 pod_ready.go:86] duration metric: took 379.813717ms for pod "kube-controller-manager-no-preload-095885" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:41:04.083654  585556 pod_ready.go:83] waiting for pod "kube-proxy-wz64m" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:41:04.482474  585556 pod_ready.go:94] pod "kube-proxy-wz64m" is "Ready"
	I1027 19:41:04.482513  585556 pod_ready.go:86] duration metric: took 398.821516ms for pod "kube-proxy-wz64m" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:41:04.682931  585556 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-095885" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:41:05.082246  585556 pod_ready.go:94] pod "kube-scheduler-no-preload-095885" is "Ready"
	I1027 19:41:05.082304  585556 pod_ready.go:86] duration metric: took 399.325532ms for pod "kube-scheduler-no-preload-095885" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:41:05.082322  585556 pod_ready.go:40] duration metric: took 1.603803236s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 19:41:05.130054  585556 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1027 19:41:05.132095  585556 out.go:179] * Done! kubectl is now configured to use "no-preload-095885" cluster and "default" namespace by default
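The "Done!" above closes a readiness phase that is a bounded poll: list the kube-system pods, and while anything required is still Pending ("missing components: kube-dns"), sleep a short growing interval and retry. The shape of that loop, reduced to a generic helper (illustrative; minikube's retry package differs in detail):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// pollUntil retries check with a small jittered backoff until it reports
// done or the deadline passes, like the kube-dns wait in the log above.
func pollUntil(timeout time.Duration, check func() (bool, string)) error {
	deadline := time.Now().Add(timeout)
	wait := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		done, why := check()
		if done {
			return nil
		}
		fmt.Printf("will retry after %v: %s\n", wait, why)
		time.Sleep(wait)
		// Grow the interval with jitter so repeated polls spread out.
		wait += time.Duration(rand.Int63n(int64(wait)))
	}
	return errors.New("timed out")
}

func main() {
	n := 0
	_ = pollUntil(5*time.Second, func() (bool, string) {
		n++ // stand-in for "list kube-system pods and check readiness"
		return n >= 3, "missing components: kube-dns"
	})
	fmt.Println("all components running after", n, "checks")
}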
	I1027 19:41:01.066520  594803 out.go:252] * Restarting existing docker container for "embed-certs-919237" ...
	I1027 19:41:01.066614  594803 cli_runner.go:164] Run: docker start embed-certs-919237
	I1027 19:41:01.345192  594803 cli_runner.go:164] Run: docker container inspect embed-certs-919237 --format={{.State.Status}}
	I1027 19:41:01.367723  594803 kic.go:430] container "embed-certs-919237" state is running.
	I1027 19:41:01.368113  594803 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-919237
	I1027 19:41:01.390202  594803 profile.go:143] Saving config to /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/embed-certs-919237/config.json ...
	I1027 19:41:01.390514  594803 machine.go:93] provisionDockerMachine start ...
	I1027 19:41:01.390591  594803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-919237
	I1027 19:41:01.413027  594803 main.go:141] libmachine: Using SSH client type: native
	I1027 19:41:01.413398  594803 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33445 <nil> <nil>}
	I1027 19:41:01.413418  594803 main.go:141] libmachine: About to run SSH command:
	hostname
	I1027 19:41:01.414196  594803 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47452->127.0.0.1:33445: read: connection reset by peer
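The handshake failure above is the usual race right after docker start: the forwarded port (127.0.0.1:33445) accepts connections before sshd inside the container is ready, so the first attempt is reset and provisioning retries, succeeding about three seconds later below. A wait-for-port sketch in the same spirit; a full check would attempt the SSH handshake itself, as libmachine does:

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForPort dials until the endpoint accepts a connection or the deadline
// passes; refusals and resets are treated as "not up yet".
func waitForPort(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not reachable within %v", addr, timeout)
}

func main() {
	// 127.0.0.1:33445 is the forwarded SSH port from the log above.
	if err := waitForPort("127.0.0.1:33445", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}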
	I1027 19:41:04.563874  594803 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-919237
	
	I1027 19:41:04.563910  594803 ubuntu.go:182] provisioning hostname "embed-certs-919237"
	I1027 19:41:04.563984  594803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-919237
	I1027 19:41:04.585857  594803 main.go:141] libmachine: Using SSH client type: native
	I1027 19:41:04.586108  594803 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33445 <nil> <nil>}
	I1027 19:41:04.586127  594803 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-919237 && echo "embed-certs-919237" | sudo tee /etc/hostname
	I1027 19:41:04.745340  594803 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-919237
	
	I1027 19:41:04.745465  594803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-919237
	I1027 19:41:04.769321  594803 main.go:141] libmachine: Using SSH client type: native
	I1027 19:41:04.769548  594803 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33445 <nil> <nil>}
	I1027 19:41:04.769566  594803 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-919237' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-919237/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-919237' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1027 19:41:04.920012  594803 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1027 19:41:04.920046  594803 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21801-352833/.minikube CaCertPath:/home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21801-352833/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21801-352833/.minikube}
	I1027 19:41:04.920074  594803 ubuntu.go:190] setting up certificates
	I1027 19:41:04.920094  594803 provision.go:84] configureAuth start
	I1027 19:41:04.920183  594803 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-919237
	I1027 19:41:04.943841  594803 provision.go:143] copyHostCerts
	I1027 19:41:04.943927  594803 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-352833/.minikube/cert.pem, removing ...
	I1027 19:41:04.943948  594803 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-352833/.minikube/cert.pem
	I1027 19:41:04.944028  594803 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21801-352833/.minikube/cert.pem (1123 bytes)
	I1027 19:41:04.944239  594803 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-352833/.minikube/key.pem, removing ...
	I1027 19:41:04.944257  594803 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-352833/.minikube/key.pem
	I1027 19:41:04.944296  594803 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21801-352833/.minikube/key.pem (1679 bytes)
	I1027 19:41:04.944383  594803 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-352833/.minikube/ca.pem, removing ...
	I1027 19:41:04.944395  594803 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-352833/.minikube/ca.pem
	I1027 19:41:04.944423  594803 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21801-352833/.minikube/ca.pem (1078 bytes)
	I1027 19:41:04.944475  594803 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21801-352833/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca-key.pem org=jenkins.embed-certs-919237 san=[127.0.0.1 192.168.94.2 embed-certs-919237 localhost minikube]
	I1027 19:41:05.155892  594803 provision.go:177] copyRemoteCerts
	I1027 19:41:05.155953  594803 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1027 19:41:05.156001  594803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-919237
	I1027 19:41:05.177871  594803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33445 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/embed-certs-919237/id_rsa Username:docker}
	I1027 19:41:05.283397  594803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1027 19:41:05.303860  594803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1027 19:41:05.323928  594803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1027 19:41:05.343816  594803 provision.go:87] duration metric: took 423.704232ms to configureAuth
	I1027 19:41:05.343849  594803 ubuntu.go:206] setting minikube options for container-runtime
	I1027 19:41:05.344062  594803 config.go:182] Loaded profile config "embed-certs-919237": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:41:05.344270  594803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-919237
	I1027 19:41:05.364828  594803 main.go:141] libmachine: Using SSH client type: native
	I1027 19:41:05.365067  594803 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33445 <nil> <nil>}
	I1027 19:41:05.365089  594803 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1027 19:41:05.683089  594803 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1027 19:41:05.683117  594803 machine.go:96] duration metric: took 4.292583564s to provisionDockerMachine
	I1027 19:41:05.683160  594803 start.go:293] postStartSetup for "embed-certs-919237" (driver="docker")
	I1027 19:41:05.683178  594803 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1027 19:41:05.683251  594803 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1027 19:41:05.683341  594803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-919237
	I1027 19:41:05.704409  594803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33445 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/embed-certs-919237/id_rsa Username:docker}
	I1027 19:41:05.808620  594803 ssh_runner.go:195] Run: cat /etc/os-release
	I1027 19:41:05.812844  594803 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1027 19:41:05.812879  594803 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1027 19:41:05.812891  594803 filesync.go:126] Scanning /home/jenkins/minikube-integration/21801-352833/.minikube/addons for local assets ...
	I1027 19:41:05.812957  594803 filesync.go:126] Scanning /home/jenkins/minikube-integration/21801-352833/.minikube/files for local assets ...
	I1027 19:41:05.813078  594803 filesync.go:149] local asset: /home/jenkins/minikube-integration/21801-352833/.minikube/files/etc/ssl/certs/3564152.pem -> 3564152.pem in /etc/ssl/certs
	I1027 19:41:05.813222  594803 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1027 19:41:01.544316  565798 logs.go:123] Gathering logs for kubelet ...
	I1027 19:41:01.544346  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 19:41:01.659317  565798 logs.go:123] Gathering logs for dmesg ...
	I1027 19:41:01.659359  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 19:41:01.686121  565798 logs.go:123] Gathering logs for describe nodes ...
	I1027 19:41:01.686169  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 19:41:01.747842  565798 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 19:41:01.747864  565798 logs.go:123] Gathering logs for kube-apiserver [f34fc0e2944cfceb17dc29d80d9459a84377ffb9e427b9efe6e57ccc03a358c8] ...
	I1027 19:41:01.747878  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f34fc0e2944cfceb17dc29d80d9459a84377ffb9e427b9efe6e57ccc03a358c8"
	I1027 19:41:01.793564  565798 logs.go:123] Gathering logs for kube-scheduler [15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8] ...
	I1027 19:41:01.793605  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8"
	I1027 19:41:01.845488  565798 logs.go:123] Gathering logs for kube-controller-manager [38a297db55d8c45b09d97e05e26cfe590841ea63fa7ee874cf818f8ae5fcff77] ...
	I1027 19:41:01.845527  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 38a297db55d8c45b09d97e05e26cfe590841ea63fa7ee874cf818f8ae5fcff77"
	I1027 19:41:04.376444  565798 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1027 19:41:04.376990  565798 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1027 19:41:04.377046  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 19:41:04.377099  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 19:41:04.406829  565798 cri.go:89] found id: "f34fc0e2944cfceb17dc29d80d9459a84377ffb9e427b9efe6e57ccc03a358c8"
	I1027 19:41:04.406851  565798 cri.go:89] found id: ""
	I1027 19:41:04.406859  565798 logs.go:282] 1 containers: [f34fc0e2944cfceb17dc29d80d9459a84377ffb9e427b9efe6e57ccc03a358c8]
	I1027 19:41:04.406918  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:41:04.411348  565798 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 19:41:04.411426  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 19:41:04.443060  565798 cri.go:89] found id: ""
	I1027 19:41:04.443094  565798 logs.go:282] 0 containers: []
	W1027 19:41:04.443105  565798 logs.go:284] No container was found matching "etcd"
	I1027 19:41:04.443113  565798 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 19:41:04.443223  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 19:41:04.475252  565798 cri.go:89] found id: ""
	I1027 19:41:04.475280  565798 logs.go:282] 0 containers: []
	W1027 19:41:04.475288  565798 logs.go:284] No container was found matching "coredns"
	I1027 19:41:04.475295  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 19:41:04.475358  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 19:41:04.506592  565798 cri.go:89] found id: "15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8"
	I1027 19:41:04.506613  565798 cri.go:89] found id: ""
	I1027 19:41:04.506622  565798 logs.go:282] 1 containers: [15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8]
	I1027 19:41:04.506674  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:41:04.511168  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 19:41:04.511243  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 19:41:04.541392  565798 cri.go:89] found id: ""
	I1027 19:41:04.541418  565798 logs.go:282] 0 containers: []
	W1027 19:41:04.541425  565798 logs.go:284] No container was found matching "kube-proxy"
	I1027 19:41:04.541432  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 19:41:04.541484  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 19:41:04.572329  565798 cri.go:89] found id: "38a297db55d8c45b09d97e05e26cfe590841ea63fa7ee874cf818f8ae5fcff77"
	I1027 19:41:04.572361  565798 cri.go:89] found id: ""
	I1027 19:41:04.572370  565798 logs.go:282] 1 containers: [38a297db55d8c45b09d97e05e26cfe590841ea63fa7ee874cf818f8ae5fcff77]
	I1027 19:41:04.572429  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:41:04.577195  565798 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 19:41:04.577270  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 19:41:04.608128  565798 cri.go:89] found id: ""
	I1027 19:41:04.608182  565798 logs.go:282] 0 containers: []
	W1027 19:41:04.608192  565798 logs.go:284] No container was found matching "kindnet"
	I1027 19:41:04.608199  565798 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 19:41:04.608266  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 19:41:04.638970  565798 cri.go:89] found id: ""
	I1027 19:41:04.639004  565798 logs.go:282] 0 containers: []
	W1027 19:41:04.639017  565798 logs.go:284] No container was found matching "storage-provisioner"
	I1027 19:41:04.639029  565798 logs.go:123] Gathering logs for kube-apiserver [f34fc0e2944cfceb17dc29d80d9459a84377ffb9e427b9efe6e57ccc03a358c8] ...
	I1027 19:41:04.639047  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f34fc0e2944cfceb17dc29d80d9459a84377ffb9e427b9efe6e57ccc03a358c8"
	I1027 19:41:04.676026  565798 logs.go:123] Gathering logs for kube-scheduler [15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8] ...
	I1027 19:41:04.676066  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8"
	I1027 19:41:04.729477  565798 logs.go:123] Gathering logs for kube-controller-manager [38a297db55d8c45b09d97e05e26cfe590841ea63fa7ee874cf818f8ae5fcff77] ...
	I1027 19:41:04.729522  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 38a297db55d8c45b09d97e05e26cfe590841ea63fa7ee874cf818f8ae5fcff77"
	I1027 19:41:04.763334  565798 logs.go:123] Gathering logs for CRI-O ...
	I1027 19:41:04.763366  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 19:41:04.814559  565798 logs.go:123] Gathering logs for container status ...
	I1027 19:41:04.814597  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
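The container-status command above is a two-stage fallback: resolve crictl's full path when it is installed (otherwise fall back to a bare PATH lookup), and if crictl fails outright, try docker instead. Unrolled into plain shell (same logic, just split for readability):

	CRICTL=$(which crictl || echo crictl)
	sudo "$CRICTL" ps -a || sudo docker ps -a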
	I1027 19:41:04.850968  565798 logs.go:123] Gathering logs for kubelet ...
	I1027 19:41:04.851011  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 19:41:04.944394  565798 logs.go:123] Gathering logs for dmesg ...
	I1027 19:41:04.944431  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 19:41:04.966811  565798 logs.go:123] Gathering logs for describe nodes ...
	I1027 19:41:04.966851  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 19:41:05.028358  565798 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 19:41:05.821887  594803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/files/etc/ssl/certs/3564152.pem --> /etc/ssl/certs/3564152.pem (1708 bytes)
	I1027 19:41:05.841205  594803 start.go:296] duration metric: took 158.022167ms for postStartSetup
	I1027 19:41:05.841329  594803 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 19:41:05.841428  594803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-919237
	I1027 19:41:05.862221  594803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33445 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/embed-certs-919237/id_rsa Username:docker}
	I1027 19:41:05.962951  594803 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1027 19:41:05.968053  594803 fix.go:56] duration metric: took 4.924088468s for fixHost
	I1027 19:41:05.968084  594803 start.go:83] releasing machines lock for "embed-certs-919237", held for 4.924145002s
	I1027 19:41:05.968196  594803 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-919237
	I1027 19:41:05.987613  594803 ssh_runner.go:195] Run: cat /version.json
	I1027 19:41:05.987669  594803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-919237
	I1027 19:41:05.987702  594803 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1027 19:41:05.987789  594803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-919237
	I1027 19:41:06.007445  594803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33445 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/embed-certs-919237/id_rsa Username:docker}
	I1027 19:41:06.008274  594803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33445 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/embed-certs-919237/id_rsa Username:docker}
	I1027 19:41:06.171092  594803 ssh_runner.go:195] Run: systemctl --version
	I1027 19:41:06.179869  594803 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1027 19:41:06.219933  594803 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1027 19:41:06.225954  594803 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1027 19:41:06.226044  594803 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1027 19:41:06.236901  594803 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
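The find invocation above is logged with its shell quoting flattened. A runnable equivalent (the exact original quoting is an assumption, since the log does not preserve it) that renames any bridge or podman CNI config out of the way is:

	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	  -printf "%p, " -exec sh -c 'sudo mv "$1" "$1.mk_disabled"' _ {} \;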
	I1027 19:41:06.236933  594803 start.go:495] detecting cgroup driver to use...
	I1027 19:41:06.236974  594803 detect.go:190] detected "systemd" cgroup driver on host os
	I1027 19:41:06.237038  594803 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1027 19:41:06.256171  594803 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1027 19:41:06.272267  594803 docker.go:218] disabling cri-docker service (if available) ...
	I1027 19:41:06.272335  594803 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1027 19:41:06.289493  594803 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1027 19:41:06.303711  594803 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1027 19:41:06.395451  594803 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1027 19:41:06.478021  594803 docker.go:234] disabling docker service ...
	I1027 19:41:06.478097  594803 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1027 19:41:06.493521  594803 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1027 19:41:06.507490  594803 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1027 19:41:06.591513  594803 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1027 19:41:06.682906  594803 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1027 19:41:06.696885  594803 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1027 19:41:06.713250  594803 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1027 19:41:06.713378  594803 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:41:06.723697  594803 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1027 19:41:06.723794  594803 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:41:06.734257  594803 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:41:06.744505  594803 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:41:06.754791  594803 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1027 19:41:06.764454  594803 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:41:06.774849  594803 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:41:06.784515  594803 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:41:06.794832  594803 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1027 19:41:06.803521  594803 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1027 19:41:06.812405  594803 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 19:41:06.901080  594803 ssh_runner.go:195] Run: sudo systemctl restart crio
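Net effect of the crictl.yaml write and the sed edits above, reconstructed from the commands themselves (the resulting files are not captured in this log): crictl is pointed at the CRI-O socket, and the drop-in config gains the pause image, systemd cgroup settings, and the unprivileged-port sysctl. The ip_forward write also ensures packet forwarding is on before crio restarts.

	# /etc/crictl.yaml
	runtime-endpoint: unix:///var/run/crio/crio.sock

	# /etc/crio/crio.conf.d/02-crio.conf (keys touched here)
	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]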
	I1027 19:41:07.023003  594803 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1027 19:41:07.023077  594803 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1027 19:41:07.027729  594803 start.go:563] Will wait 60s for crictl version
	I1027 19:41:07.027821  594803 ssh_runner.go:195] Run: which crictl
	I1027 19:41:07.032087  594803 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1027 19:41:07.060453  594803 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1027 19:41:07.060549  594803 ssh_runner.go:195] Run: crio --version
	I1027 19:41:07.090930  594803 ssh_runner.go:195] Run: crio --version
	I1027 19:41:07.122696  594803 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1027 19:41:07.124057  594803 cli_runner.go:164] Run: docker network inspect embed-certs-919237 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 19:41:07.144121  594803 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1027 19:41:07.148817  594803 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
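The /etc/hosts update above deliberately avoids in-place editing, likely because /etc/hosts is bind-mounted inside the container where rename-based tools like sed -i fail: it filters out any stale host.minikube.internal line, appends the fresh mapping, and copies the temp file back under sudo. The same pattern, with the values from this log pulled into variables:

	HOSTNAME=host.minikube.internal; IP=192.168.94.1
	{ grep -v $'\t'"$HOSTNAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$HOSTNAME"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts

The same idiom reappears below for control-plane.minikube.internal.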
	I1027 19:41:07.160514  594803 kubeadm.go:883] updating cluster {Name:embed-certs-919237 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-919237 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1027 19:41:07.160677  594803 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 19:41:07.160758  594803 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 19:41:07.197268  594803 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 19:41:07.197294  594803 crio.go:433] Images already preloaded, skipping extraction
	I1027 19:41:07.197359  594803 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 19:41:07.224730  594803 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 19:41:07.224756  594803 cache_images.go:85] Images are preloaded, skipping loading
	I1027 19:41:07.224766  594803 kubeadm.go:934] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1027 19:41:07.224884  594803 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-919237 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-919237 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1027 19:41:07.224966  594803 ssh_runner.go:195] Run: crio config
	I1027 19:41:07.273364  594803 cni.go:84] Creating CNI manager for ""
	I1027 19:41:07.273386  594803 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 19:41:07.273406  594803 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1027 19:41:07.273446  594803 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-919237 NodeName:embed-certs-919237 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1027 19:41:07.273615  594803 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-919237"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1027 19:41:07.273713  594803 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1027 19:41:07.283551  594803 binaries.go:44] Found k8s binaries, skipping transfer
	I1027 19:41:07.283671  594803 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1027 19:41:07.292711  594803 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1027 19:41:07.307484  594803 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1027 19:41:07.321800  594803 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
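At this point the rendered kubeadm config is staged as /var/tmp/minikube/kubeadm.yaml.new. If you want to lint such a file by hand, kubeadm ships a validator (available in v1.26 and later); assuming the kubeadm binary sits alongside the kubectl used elsewhere in this log, which is an assumption about this image's layout:

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new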
	I1027 19:41:07.335251  594803 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1027 19:41:07.339362  594803 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 19:41:07.350244  594803 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 19:41:07.434349  594803 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 19:41:07.464970  594803 certs.go:69] Setting up /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/embed-certs-919237 for IP: 192.168.94.2
	I1027 19:41:07.464995  594803 certs.go:195] generating shared ca certs ...
	I1027 19:41:07.465020  594803 certs.go:227] acquiring lock for ca certs: {Name:mk4bdbca32068f6f817fc35fdc496e961dc3e0d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:41:07.465244  594803 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21801-352833/.minikube/ca.key
	I1027 19:41:07.465292  594803 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21801-352833/.minikube/proxy-client-ca.key
	I1027 19:41:07.465304  594803 certs.go:257] generating profile certs ...
	I1027 19:41:07.465403  594803 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/embed-certs-919237/client.key
	I1027 19:41:07.465450  594803 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/embed-certs-919237/apiserver.key.3faa2aa5
	I1027 19:41:07.465488  594803 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/embed-certs-919237/proxy-client.key
	I1027 19:41:07.465591  594803 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/356415.pem (1338 bytes)
	W1027 19:41:07.465626  594803 certs.go:480] ignoring /home/jenkins/minikube-integration/21801-352833/.minikube/certs/356415_empty.pem, impossibly tiny 0 bytes
	I1027 19:41:07.465636  594803 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca-key.pem (1675 bytes)
	I1027 19:41:07.465656  594803 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca.pem (1078 bytes)
	I1027 19:41:07.465680  594803 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/cert.pem (1123 bytes)
	I1027 19:41:07.465706  594803 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/key.pem (1679 bytes)
	I1027 19:41:07.465755  594803 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-352833/.minikube/files/etc/ssl/certs/3564152.pem (1708 bytes)
	I1027 19:41:07.466444  594803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1027 19:41:07.487514  594803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1027 19:41:07.509307  594803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1027 19:41:07.532458  594803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1027 19:41:07.564071  594803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/embed-certs-919237/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1027 19:41:07.586349  594803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/embed-certs-919237/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1027 19:41:07.606465  594803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/embed-certs-919237/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1027 19:41:07.627059  594803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/embed-certs-919237/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1027 19:41:07.648181  594803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1027 19:41:07.672545  594803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/certs/356415.pem --> /usr/share/ca-certificates/356415.pem (1338 bytes)
	I1027 19:41:07.693483  594803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/files/etc/ssl/certs/3564152.pem --> /usr/share/ca-certificates/3564152.pem (1708 bytes)
	I1027 19:41:07.715889  594803 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1027 19:41:07.732429  594803 ssh_runner.go:195] Run: openssl version
	I1027 19:41:07.740863  594803 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/356415.pem && ln -fs /usr/share/ca-certificates/356415.pem /etc/ssl/certs/356415.pem"
	I1027 19:41:07.751652  594803 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/356415.pem
	I1027 19:41:07.756427  594803 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 27 19:02 /usr/share/ca-certificates/356415.pem
	I1027 19:41:07.756508  594803 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/356415.pem
	I1027 19:41:07.796822  594803 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/356415.pem /etc/ssl/certs/51391683.0"
	I1027 19:41:07.807165  594803 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3564152.pem && ln -fs /usr/share/ca-certificates/3564152.pem /etc/ssl/certs/3564152.pem"
	I1027 19:41:07.817111  594803 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3564152.pem
	I1027 19:41:07.821699  594803 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 27 19:02 /usr/share/ca-certificates/3564152.pem
	I1027 19:41:07.821774  594803 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3564152.pem
	I1027 19:41:07.862104  594803 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3564152.pem /etc/ssl/certs/3ec20f2e.0"
	I1027 19:41:07.872082  594803 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1027 19:41:07.882661  594803 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1027 19:41:07.888248  594803 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 27 18:56 /usr/share/ca-certificates/minikubeCA.pem
	I1027 19:41:07.888325  594803 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1027 19:41:07.927092  594803 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1027 19:41:07.936711  594803 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1027 19:41:07.941329  594803 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1027 19:41:07.982744  594803 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1027 19:41:08.036882  594803 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1027 19:41:08.086334  594803 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1027 19:41:08.146052  594803 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1027 19:41:08.191698  594803 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
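The openssl runs above are expiry probes: -checkend 86400 exits 0 when the certificate is still valid 24 hours from now and non-zero otherwise, which is how minikube decides whether a cert needs regenerating. Reproduced by hand for one of them:

	sudo openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400 \
	  && echo "valid for at least 24h" || echo "expires within 24h"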
	I1027 19:41:08.228527  594803 kubeadm.go:400] StartCluster: {Name:embed-certs-919237 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-919237 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 19:41:08.228643  594803 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 19:41:08.228710  594803 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 19:41:08.261293  594803 cri.go:89] found id: "d5a5c65a74b4b0bac782941ddf5cfc5e1c95eb29dbc563a89bc74143a3d75be8"
	I1027 19:41:08.261319  594803 cri.go:89] found id: "f0dcb6f33c4a16c8aabf1c9522c219dfe57ce0438d6eedb8d11b3bbed06bf220"
	I1027 19:41:08.261324  594803 cri.go:89] found id: "d17bd312e4c2b6e68ce5e1c0006ad10d3d74b77c3bc3e8570e4526763c6914a9"
	I1027 19:41:08.261327  594803 cri.go:89] found id: "31682e1eceede1979fd31aa2e96a71541d29f7d036de012b0c0a406025482670"
	I1027 19:41:08.261344  594803 cri.go:89] found id: ""
	I1027 19:41:08.261398  594803 ssh_runner.go:195] Run: sudo runc list -f json
	W1027 19:41:08.275475  594803 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T19:41:08Z" level=error msg="open /run/runc: no such file or directory"
	I1027 19:41:08.275556  594803 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1027 19:41:08.285008  594803 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1027 19:41:08.285028  594803 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1027 19:41:08.285080  594803 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1027 19:41:08.292877  594803 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1027 19:41:08.293734  594803 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-919237" does not appear in /home/jenkins/minikube-integration/21801-352833/kubeconfig
	I1027 19:41:08.294188  594803 kubeconfig.go:62] /home/jenkins/minikube-integration/21801-352833/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-919237" cluster setting kubeconfig missing "embed-certs-919237" context setting]
	I1027 19:41:08.294867  594803 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-352833/kubeconfig: {Name:mk24cbe512a6907c874f3fb7a85390a8f9fd2b31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:41:08.296560  594803 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1027 19:41:08.304858  594803 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.94.2
	I1027 19:41:08.304893  594803 kubeadm.go:601] duration metric: took 19.857495ms to restartPrimaryControlPlane
	I1027 19:41:08.304904  594803 kubeadm.go:402] duration metric: took 76.392154ms to StartCluster
	I1027 19:41:08.304921  594803 settings.go:142] acquiring lock: {Name:mk8304c2106bf310642e0949fc0266ccb50f2f95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:41:08.304992  594803 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21801-352833/kubeconfig
	I1027 19:41:08.306608  594803 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-352833/kubeconfig: {Name:mk24cbe512a6907c874f3fb7a85390a8f9fd2b31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:41:08.306895  594803 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 19:41:08.306966  594803 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1027 19:41:08.307088  594803 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-919237"
	I1027 19:41:08.307112  594803 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-919237"
	W1027 19:41:08.307120  594803 addons.go:247] addon storage-provisioner should already be in state true
	I1027 19:41:08.307121  594803 addons.go:69] Setting dashboard=true in profile "embed-certs-919237"
	I1027 19:41:08.307180  594803 config.go:182] Loaded profile config "embed-certs-919237": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:41:08.307172  594803 addons.go:69] Setting default-storageclass=true in profile "embed-certs-919237"
	I1027 19:41:08.307206  594803 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-919237"
	I1027 19:41:08.307185  594803 host.go:66] Checking if "embed-certs-919237" exists ...
	I1027 19:41:08.307188  594803 addons.go:238] Setting addon dashboard=true in "embed-certs-919237"
	W1027 19:41:08.307376  594803 addons.go:247] addon dashboard should already be in state true
	I1027 19:41:08.307407  594803 host.go:66] Checking if "embed-certs-919237" exists ...
	I1027 19:41:08.307583  594803 cli_runner.go:164] Run: docker container inspect embed-certs-919237 --format={{.State.Status}}
	I1027 19:41:08.307745  594803 cli_runner.go:164] Run: docker container inspect embed-certs-919237 --format={{.State.Status}}
	I1027 19:41:08.307873  594803 cli_runner.go:164] Run: docker container inspect embed-certs-919237 --format={{.State.Status}}
	I1027 19:41:08.309349  594803 out.go:179] * Verifying Kubernetes components...
	I1027 19:41:08.310781  594803 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 19:41:08.336188  594803 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1027 19:41:08.336216  594803 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 19:41:08.336832  594803 addons.go:238] Setting addon default-storageclass=true in "embed-certs-919237"
	W1027 19:41:08.336855  594803 addons.go:247] addon default-storageclass should already be in state true
	I1027 19:41:08.336886  594803 host.go:66] Checking if "embed-certs-919237" exists ...
	I1027 19:41:08.337405  594803 cli_runner.go:164] Run: docker container inspect embed-certs-919237 --format={{.State.Status}}
	I1027 19:41:08.337895  594803 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 19:41:08.337913  594803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1027 19:41:08.337970  594803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-919237
	I1027 19:41:08.339243  594803 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1027 19:41:08.340863  594803 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1027 19:41:08.340892  594803 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1027 19:41:08.340959  594803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-919237
	I1027 19:41:08.371713  594803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33445 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/embed-certs-919237/id_rsa Username:docker}
	I1027 19:41:08.378869  594803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33445 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/embed-certs-919237/id_rsa Username:docker}
	I1027 19:41:08.379420  594803 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1027 19:41:08.379443  594803 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1027 19:41:08.379523  594803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-919237
	I1027 19:41:08.404654  594803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33445 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/embed-certs-919237/id_rsa Username:docker}
	I1027 19:41:08.459858  594803 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 19:41:08.474523  594803 node_ready.go:35] waiting up to 6m0s for node "embed-certs-919237" to be "Ready" ...
	I1027 19:41:08.494692  594803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 19:41:08.501377  594803 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1027 19:41:08.501402  594803 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1027 19:41:08.517164  594803 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1027 19:41:08.517189  594803 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1027 19:41:08.528162  594803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1027 19:41:08.536218  594803 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1027 19:41:08.536248  594803 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1027 19:41:08.555432  594803 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1027 19:41:08.555459  594803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1027 19:41:08.577695  594803 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1027 19:41:08.577726  594803 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1027 19:41:08.596623  594803 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1027 19:41:08.596657  594803 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1027 19:41:08.612731  594803 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1027 19:41:08.612763  594803 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1027 19:41:08.627030  594803 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1027 19:41:08.627060  594803 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1027 19:41:08.641348  594803 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1027 19:41:08.641379  594803 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1027 19:41:08.656654  594803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1027 19:41:09.985803  594803 node_ready.go:49] node "embed-certs-919237" is "Ready"
	I1027 19:41:09.985838  594803 node_ready.go:38] duration metric: took 1.511271197s for node "embed-certs-919237" to be "Ready" ...
	I1027 19:41:09.985856  594803 api_server.go:52] waiting for apiserver process to appear ...
	I1027 19:41:09.985916  594803 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 19:41:10.512525  594803 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.017790889s)
	I1027 19:41:10.512570  594803 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.984382968s)
	I1027 19:41:10.512737  594803 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.856029763s)
	I1027 19:41:10.512758  594803 api_server.go:72] duration metric: took 2.205827226s to wait for apiserver process to appear ...
	I1027 19:41:10.512770  594803 api_server.go:88] waiting for apiserver healthz status ...
	I1027 19:41:10.512790  594803 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1027 19:41:10.514667  594803 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-919237 addons enable metrics-server
	
	I1027 19:41:10.519068  594803 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1027 19:41:10.519098  594803 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
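	The breakdown above is the apiserver's verbose healthz output; the two [-] poststarthook entries mean the RBAC bootstrap roles and default priority classes have not finished seeding yet, which is expected this soon after a restart and clears on its own. The same view can be fetched manually (a sketch; -k skips TLS verification against the cluster CA, and /healthz is readable anonymously under the default RBAC bootstrap):

	curl -sk "https://192.168.94.2:8443/healthz?verbose"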
	I1027 19:41:10.525420  594803 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1027 19:41:10.526779  594803 addons.go:514] duration metric: took 2.219821783s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1027 19:41:07.528527  565798 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1027 19:41:07.529038  565798 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1027 19:41:07.529097  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 19:41:07.529167  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 19:41:07.570906  565798 cri.go:89] found id: "f34fc0e2944cfceb17dc29d80d9459a84377ffb9e427b9efe6e57ccc03a358c8"
	I1027 19:41:07.570937  565798 cri.go:89] found id: ""
	I1027 19:41:07.570949  565798 logs.go:282] 1 containers: [f34fc0e2944cfceb17dc29d80d9459a84377ffb9e427b9efe6e57ccc03a358c8]
	I1027 19:41:07.571019  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:41:07.575599  565798 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 19:41:07.575660  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 19:41:07.605990  565798 cri.go:89] found id: ""
	I1027 19:41:07.606014  565798 logs.go:282] 0 containers: []
	W1027 19:41:07.606023  565798 logs.go:284] No container was found matching "etcd"
	I1027 19:41:07.606028  565798 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 19:41:07.606087  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 19:41:07.638584  565798 cri.go:89] found id: ""
	I1027 19:41:07.638610  565798 logs.go:282] 0 containers: []
	W1027 19:41:07.638619  565798 logs.go:284] No container was found matching "coredns"
	I1027 19:41:07.638626  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 19:41:07.638673  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 19:41:07.670909  565798 cri.go:89] found id: "15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8"
	I1027 19:41:07.670935  565798 cri.go:89] found id: ""
	I1027 19:41:07.670946  565798 logs.go:282] 1 containers: [15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8]
	I1027 19:41:07.671012  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:41:07.676493  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 19:41:07.676572  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 19:41:07.707704  565798 cri.go:89] found id: ""
	I1027 19:41:07.707730  565798 logs.go:282] 0 containers: []
	W1027 19:41:07.707738  565798 logs.go:284] No container was found matching "kube-proxy"
	I1027 19:41:07.707744  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 19:41:07.707804  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 19:41:07.738631  565798 cri.go:89] found id: "38a297db55d8c45b09d97e05e26cfe590841ea63fa7ee874cf818f8ae5fcff77"
	I1027 19:41:07.738651  565798 cri.go:89] found id: ""
	I1027 19:41:07.738663  565798 logs.go:282] 1 containers: [38a297db55d8c45b09d97e05e26cfe590841ea63fa7ee874cf818f8ae5fcff77]
	I1027 19:41:07.738722  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:41:07.743367  565798 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 19:41:07.743451  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 19:41:07.775208  565798 cri.go:89] found id: ""
	I1027 19:41:07.775238  565798 logs.go:282] 0 containers: []
	W1027 19:41:07.775252  565798 logs.go:284] No container was found matching "kindnet"
	I1027 19:41:07.775261  565798 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 19:41:07.775339  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 19:41:07.805721  565798 cri.go:89] found id: ""
	I1027 19:41:07.805749  565798 logs.go:282] 0 containers: []
	W1027 19:41:07.805759  565798 logs.go:284] No container was found matching "storage-provisioner"
	I1027 19:41:07.805773  565798 logs.go:123] Gathering logs for dmesg ...
	I1027 19:41:07.805797  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 19:41:07.829611  565798 logs.go:123] Gathering logs for describe nodes ...
	I1027 19:41:07.829647  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 19:41:07.894281  565798 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 19:41:07.894316  565798 logs.go:123] Gathering logs for kube-apiserver [f34fc0e2944cfceb17dc29d80d9459a84377ffb9e427b9efe6e57ccc03a358c8] ...
	I1027 19:41:07.894338  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f34fc0e2944cfceb17dc29d80d9459a84377ffb9e427b9efe6e57ccc03a358c8"
	I1027 19:41:07.930602  565798 logs.go:123] Gathering logs for kube-scheduler [15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8] ...
	I1027 19:41:07.930636  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8"
	I1027 19:41:07.985189  565798 logs.go:123] Gathering logs for kube-controller-manager [38a297db55d8c45b09d97e05e26cfe590841ea63fa7ee874cf818f8ae5fcff77] ...
	I1027 19:41:07.985226  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 38a297db55d8c45b09d97e05e26cfe590841ea63fa7ee874cf818f8ae5fcff77"
	I1027 19:41:08.023545  565798 logs.go:123] Gathering logs for CRI-O ...
	I1027 19:41:08.023578  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 19:41:08.093343  565798 logs.go:123] Gathering logs for container status ...
	I1027 19:41:08.093385  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 19:41:08.145553  565798 logs.go:123] Gathering logs for kubelet ...
	I1027 19:41:08.145592  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 19:41:10.748218  565798 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1027 19:41:10.748717  565798 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1027 19:41:10.748775  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 19:41:10.748830  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 19:41:10.778542  565798 cri.go:89] found id: "f34fc0e2944cfceb17dc29d80d9459a84377ffb9e427b9efe6e57ccc03a358c8"
	I1027 19:41:10.778563  565798 cri.go:89] found id: ""
	I1027 19:41:10.778572  565798 logs.go:282] 1 containers: [f34fc0e2944cfceb17dc29d80d9459a84377ffb9e427b9efe6e57ccc03a358c8]
	I1027 19:41:10.778626  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:41:10.782948  565798 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 19:41:10.783005  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 19:41:10.810590  565798 cri.go:89] found id: ""
	I1027 19:41:10.810619  565798 logs.go:282] 0 containers: []
	W1027 19:41:10.810631  565798 logs.go:284] No container was found matching "etcd"
	I1027 19:41:10.810642  565798 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 19:41:10.810705  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 19:41:10.841630  565798 cri.go:89] found id: ""
	I1027 19:41:10.841659  565798 logs.go:282] 0 containers: []
	W1027 19:41:10.841670  565798 logs.go:284] No container was found matching "coredns"
	I1027 19:41:10.841678  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 19:41:10.841747  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 19:41:10.881274  565798 cri.go:89] found id: "15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8"
	I1027 19:41:10.881300  565798 cri.go:89] found id: ""
	I1027 19:41:10.881311  565798 logs.go:282] 1 containers: [15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8]
	I1027 19:41:10.881370  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:41:10.886646  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 19:41:10.886736  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 19:41:10.929911  565798 cri.go:89] found id: ""
	I1027 19:41:10.929943  565798 logs.go:282] 0 containers: []
	W1027 19:41:10.929954  565798 logs.go:284] No container was found matching "kube-proxy"
	I1027 19:41:10.929962  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 19:41:10.930024  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 19:41:10.968851  565798 cri.go:89] found id: "38a297db55d8c45b09d97e05e26cfe590841ea63fa7ee874cf818f8ae5fcff77"
	I1027 19:41:10.968878  565798 cri.go:89] found id: ""
	I1027 19:41:10.968888  565798 logs.go:282] 1 containers: [38a297db55d8c45b09d97e05e26cfe590841ea63fa7ee874cf818f8ae5fcff77]
	I1027 19:41:10.968948  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:41:10.974365  565798 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 19:41:10.974432  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 19:41:11.004971  565798 cri.go:89] found id: ""
	I1027 19:41:11.004997  565798 logs.go:282] 0 containers: []
	W1027 19:41:11.005005  565798 logs.go:284] No container was found matching "kindnet"
	I1027 19:41:11.005011  565798 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 19:41:11.005072  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 19:41:11.036769  565798 cri.go:89] found id: ""
	I1027 19:41:11.036802  565798 logs.go:282] 0 containers: []
	W1027 19:41:11.036814  565798 logs.go:284] No container was found matching "storage-provisioner"
	I1027 19:41:11.036827  565798 logs.go:123] Gathering logs for describe nodes ...
	I1027 19:41:11.036845  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 19:41:11.109616  565798 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 19:41:11.109640  565798 logs.go:123] Gathering logs for kube-apiserver [f34fc0e2944cfceb17dc29d80d9459a84377ffb9e427b9efe6e57ccc03a358c8] ...
	I1027 19:41:11.109659  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f34fc0e2944cfceb17dc29d80d9459a84377ffb9e427b9efe6e57ccc03a358c8"
	I1027 19:41:11.149761  565798 logs.go:123] Gathering logs for kube-scheduler [15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8] ...
	I1027 19:41:11.149808  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8"
	I1027 19:41:11.209309  565798 logs.go:123] Gathering logs for kube-controller-manager [38a297db55d8c45b09d97e05e26cfe590841ea63fa7ee874cf818f8ae5fcff77] ...
	I1027 19:41:11.209355  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 38a297db55d8c45b09d97e05e26cfe590841ea63fa7ee874cf818f8ae5fcff77"
	I1027 19:41:11.238293  565798 logs.go:123] Gathering logs for CRI-O ...
	I1027 19:41:11.238330  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 19:41:11.290773  565798 logs.go:123] Gathering logs for container status ...
	I1027 19:41:11.290819  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 19:41:11.324791  565798 logs.go:123] Gathering logs for kubelet ...
	I1027 19:41:11.324821  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 19:41:11.416408  565798 logs.go:123] Gathering logs for dmesg ...
	I1027 19:41:11.416449  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	
	
	==> CRI-O <==
	Oct 27 19:40:44 old-k8s-version-468959 crio[559]: time="2025-10-27T19:40:44.012903179Z" level=info msg="Started container" PID=1711 containerID=b9692750c6802429c9250e188f4cf6dc0f0f123f6df32b84aa4a245a6bd40e60 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-r6m7z/dashboard-metrics-scraper id=6a49e85b-09b5-4b56-8b92-37088888886a name=/runtime.v1.RuntimeService/StartContainer sandboxID=4c06fe9042f844f8cc92426ff042906b7930e5890f0ce1c496f1bef4d7484525
	Oct 27 19:40:44 old-k8s-version-468959 crio[559]: time="2025-10-27T19:40:44.971208085Z" level=info msg="Removing container: 88a7fe8d90dc09d19e5b3221783bb4d018b72eab2e09644a80d5946dc283df4f" id=d896d53b-3c84-4722-a797-5b8faa113adc name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 27 19:40:44 old-k8s-version-468959 crio[559]: time="2025-10-27T19:40:44.981119884Z" level=info msg="Removed container 88a7fe8d90dc09d19e5b3221783bb4d018b72eab2e09644a80d5946dc283df4f: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-r6m7z/dashboard-metrics-scraper" id=d896d53b-3c84-4722-a797-5b8faa113adc name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 27 19:40:53 old-k8s-version-468959 crio[559]: time="2025-10-27T19:40:53.998944519Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=fafbc5a4-3b86-42e7-9fb0-8414a7e3c841 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 19:40:53 old-k8s-version-468959 crio[559]: time="2025-10-27T19:40:53.999951803Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=50ee6f10-5b6d-436f-8d3e-5255248156f2 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 19:40:54 old-k8s-version-468959 crio[559]: time="2025-10-27T19:40:54.001075021Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=768c9b66-4c3e-4313-9356-b9a6c081ab7d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 19:40:54 old-k8s-version-468959 crio[559]: time="2025-10-27T19:40:54.001241918Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:40:54 old-k8s-version-468959 crio[559]: time="2025-10-27T19:40:54.005557701Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:40:54 old-k8s-version-468959 crio[559]: time="2025-10-27T19:40:54.005722045Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/e4e43d260d563103604459ec80968feac7c8fb32183b206786ae33286baf8194/merged/etc/passwd: no such file or directory"
	Oct 27 19:40:54 old-k8s-version-468959 crio[559]: time="2025-10-27T19:40:54.005756969Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/e4e43d260d563103604459ec80968feac7c8fb32183b206786ae33286baf8194/merged/etc/group: no such file or directory"
	Oct 27 19:40:54 old-k8s-version-468959 crio[559]: time="2025-10-27T19:40:54.006024507Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:40:54 old-k8s-version-468959 crio[559]: time="2025-10-27T19:40:54.032018891Z" level=info msg="Created container c3d66e2dd322da5d8554d09ea3b176065c6fe4ba6f6c1b0ca6612474fc69cd91: kube-system/storage-provisioner/storage-provisioner" id=768c9b66-4c3e-4313-9356-b9a6c081ab7d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 19:40:54 old-k8s-version-468959 crio[559]: time="2025-10-27T19:40:54.032760063Z" level=info msg="Starting container: c3d66e2dd322da5d8554d09ea3b176065c6fe4ba6f6c1b0ca6612474fc69cd91" id=40f69b43-afb8-41b0-9211-2b588732b30a name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 19:40:54 old-k8s-version-468959 crio[559]: time="2025-10-27T19:40:54.034751277Z" level=info msg="Started container" PID=1725 containerID=c3d66e2dd322da5d8554d09ea3b176065c6fe4ba6f6c1b0ca6612474fc69cd91 description=kube-system/storage-provisioner/storage-provisioner id=40f69b43-afb8-41b0-9211-2b588732b30a name=/runtime.v1.RuntimeService/StartContainer sandboxID=63843b39a74258d7067907dc8e5efbf510e1bcf9cb69eec1e73c46a76826e306
	Oct 27 19:41:00 old-k8s-version-468959 crio[559]: time="2025-10-27T19:41:00.8481855Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=199f133c-154c-4b1d-8820-eccce23ac539 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 19:41:00 old-k8s-version-468959 crio[559]: time="2025-10-27T19:41:00.849522098Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=dcb67d5d-4471-4ea2-9339-fc408698e879 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 19:41:00 old-k8s-version-468959 crio[559]: time="2025-10-27T19:41:00.850962902Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-r6m7z/dashboard-metrics-scraper" id=5370fdc5-fd91-4381-97e1-bebdcd568dc2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 19:41:00 old-k8s-version-468959 crio[559]: time="2025-10-27T19:41:00.851162264Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:41:00 old-k8s-version-468959 crio[559]: time="2025-10-27T19:41:00.85928152Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:41:00 old-k8s-version-468959 crio[559]: time="2025-10-27T19:41:00.859966326Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:41:00 old-k8s-version-468959 crio[559]: time="2025-10-27T19:41:00.891422673Z" level=info msg="Created container f90740a0e28b478c1a0658aadb18b23d89ba64b844c2ab857f4e83834b57f69b: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-r6m7z/dashboard-metrics-scraper" id=5370fdc5-fd91-4381-97e1-bebdcd568dc2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 19:41:00 old-k8s-version-468959 crio[559]: time="2025-10-27T19:41:00.893076636Z" level=info msg="Starting container: f90740a0e28b478c1a0658aadb18b23d89ba64b844c2ab857f4e83834b57f69b" id=754aad9a-e851-499d-8fb7-4a6554e69ebc name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 19:41:00 old-k8s-version-468959 crio[559]: time="2025-10-27T19:41:00.896190457Z" level=info msg="Started container" PID=1759 containerID=f90740a0e28b478c1a0658aadb18b23d89ba64b844c2ab857f4e83834b57f69b description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-r6m7z/dashboard-metrics-scraper id=754aad9a-e851-499d-8fb7-4a6554e69ebc name=/runtime.v1.RuntimeService/StartContainer sandboxID=4c06fe9042f844f8cc92426ff042906b7930e5890f0ce1c496f1bef4d7484525
	Oct 27 19:41:01 old-k8s-version-468959 crio[559]: time="2025-10-27T19:41:01.0227682Z" level=info msg="Removing container: b9692750c6802429c9250e188f4cf6dc0f0f123f6df32b84aa4a245a6bd40e60" id=3490c600-37c2-4de8-9d01-b33c24c39cc9 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 27 19:41:01 old-k8s-version-468959 crio[559]: time="2025-10-27T19:41:01.036901688Z" level=info msg="Removed container b9692750c6802429c9250e188f4cf6dc0f0f123f6df32b84aa4a245a6bd40e60: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-r6m7z/dashboard-metrics-scraper" id=3490c600-37c2-4de8-9d01-b33c24c39cc9 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	f90740a0e28b4       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           15 seconds ago      Exited              dashboard-metrics-scraper   2                   4c06fe9042f84       dashboard-metrics-scraper-5f989dc9cf-r6m7z       kubernetes-dashboard
	c3d66e2dd322d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           21 seconds ago      Running             storage-provisioner         1                   63843b39a7425       storage-provisioner                              kube-system
	12d4f512371d8       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   35 seconds ago      Running             kubernetes-dashboard        0                   7e18f17c481dd       kubernetes-dashboard-8694d4445c-mb5fm            kubernetes-dashboard
	2e436d82f10c9       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           52 seconds ago      Running             coredns                     0                   10f57f4658683       coredns-5dd5756b68-xwmdt                         kube-system
	32ab77e9658d7       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           52 seconds ago      Running             kindnet-cni                 0                   f9388ad762f5b       kindnet-td5zb                                    kube-system
	b0e7588da17af       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           52 seconds ago      Running             busybox                     1                   868ff34ed020a       busybox                                          default
	2f249517b99ac       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           52 seconds ago      Exited              storage-provisioner         0                   63843b39a7425       storage-provisioner                              kube-system
	b928c935db399       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           52 seconds ago      Running             kube-proxy                  0                   df65001b83cda       kube-proxy-tjbth                                 kube-system
	bbf4fe7bcb1ee       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           55 seconds ago      Running             etcd                        0                   cff067560d1de       etcd-old-k8s-version-468959                      kube-system
	07e72855c00ee       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           55 seconds ago      Running             kube-apiserver              0                   576f1b92ea461       kube-apiserver-old-k8s-version-468959            kube-system
	ef7e54548205b       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           55 seconds ago      Running             kube-scheduler              0                   2b08074662e53       kube-scheduler-old-k8s-version-468959            kube-system
	1415820809db8       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           55 seconds ago      Running             kube-controller-manager     0                   6a67ba3219763       kube-controller-manager-old-k8s-version-468959   kube-system
	
	
	==> coredns [2e436d82f10c9ab337c97fc80696a734a66eb15691f23ff94fdd4ad91ff89df5] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:46993 - 8781 "HINFO IN 7570531223480349424.7523887348228158236. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.087703845s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-468959
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-468959
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a09c31caecd3126cc8337ca95e52380a56f5a0f
	                    minikube.k8s.io/name=old-k8s-version-468959
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_27T19_39_16_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Oct 2025 19:39:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-468959
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Oct 2025 19:41:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Oct 2025 19:40:53 +0000   Mon, 27 Oct 2025 19:39:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Oct 2025 19:40:53 +0000   Mon, 27 Oct 2025 19:39:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Oct 2025 19:40:53 +0000   Mon, 27 Oct 2025 19:39:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Oct 2025 19:40:53 +0000   Mon, 27 Oct 2025 19:39:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-468959
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                2befee2f-4a53-4846-b84d-35620b9685cc
	  Boot ID:                    811bd29c-e64e-4acc-9427-bab1f7caed93
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 coredns-5dd5756b68-xwmdt                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     108s
	  kube-system                 etcd-old-k8s-version-468959                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m1s
	  kube-system                 kindnet-td5zb                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      109s
	  kube-system                 kube-apiserver-old-k8s-version-468959             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 kube-controller-manager-old-k8s-version-468959    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 kube-proxy-tjbth                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-scheduler-old-k8s-version-468959             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-r6m7z        0 (0%)        0 (0%)      0 (0%)           0 (0%)         41s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-mb5fm             0 (0%)        0 (0%)      0 (0%)           0 (0%)         41s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 107s               kube-proxy       
	  Normal  Starting                 52s                kube-proxy       
	  Normal  NodeHasSufficientMemory  2m1s               kubelet          Node old-k8s-version-468959 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m1s               kubelet          Node old-k8s-version-468959 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m1s               kubelet          Node old-k8s-version-468959 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m1s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           109s               node-controller  Node old-k8s-version-468959 event: Registered Node old-k8s-version-468959 in Controller
	  Normal  NodeReady                95s                kubelet          Node old-k8s-version-468959 status is now: NodeReady
	  Normal  Starting                 57s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  57s (x8 over 57s)  kubelet          Node old-k8s-version-468959 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    57s (x8 over 57s)  kubelet          Node old-k8s-version-468959 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     57s (x8 over 57s)  kubelet          Node old-k8s-version-468959 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           41s                node-controller  Node old-k8s-version-468959 event: Registered Node old-k8s-version-468959 in Controller
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5a 23 52 43 9a ba 08 06
	[  +0.000398] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 12 50 95 0e df 53 08 06
	[Oct27 18:59] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +1.017295] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +1.023893] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +1.023882] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +1.023891] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +1.023851] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +2.047849] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +4.031592] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +8.319143] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[ +16.382183] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[Oct27 19:00] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	
	
	==> etcd [bbf4fe7bcb1eef6c19d02157f5f9d45ada6d926195550b86406cb27a478cb520] <==
	{"level":"info","ts":"2025-10-27T19:40:20.462606Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-10-27T19:40:20.462678Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-10-27T19:40:20.462766Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-27T19:40:20.462795Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-27T19:40:20.465695Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-27T19:40:20.465935Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-27T19:40:20.465964Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-27T19:40:20.466022Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-27T19:40:20.466108Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-27T19:40:20.466603Z","caller":"etcdserver/server.go:738","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"9f0758e1c58a86ed","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2025-10-27T19:40:21.25022Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-27T19:40:21.250411Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-27T19:40:21.250448Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-10-27T19:40:21.250471Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-10-27T19:40:21.250481Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-10-27T19:40:21.250492Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-10-27T19:40:21.250502Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-10-27T19:40:21.252271Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-468959 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-27T19:40:21.252371Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-27T19:40:21.252576Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-27T19:40:21.252842Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-27T19:40:21.252616Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-27T19:40:21.256986Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-10-27T19:40:21.258017Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-27T19:40:30.116082Z","caller":"traceutil/trace.go:171","msg":"trace[1590140253] transaction","detail":"{read_only:false; response_revision:472; number_of_response:1; }","duration":"105.616202ms","start":"2025-10-27T19:40:30.01043Z","end":"2025-10-27T19:40:30.116046Z","steps":["trace[1590140253] 'process raft request'  (duration: 49.766453ms)","trace[1590140253] 'compare'  (duration: 55.717221ms)"],"step_count":2}
	
	
	==> kernel <==
	 19:41:16 up  2:23,  0 user,  load average: 2.61, 3.03, 2.01
	Linux old-k8s-version-468959 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [32ab77e9658d711ddb17ba898beed6884dc70565b485a14e92a38be93a33d1da] <==
	I1027 19:40:23.495086       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1027 19:40:23.495435       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1027 19:40:23.495611       1 main.go:148] setting mtu 1500 for CNI 
	I1027 19:40:23.495629       1 main.go:178] kindnetd IP family: "ipv4"
	I1027 19:40:23.495647       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-27T19:40:23Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1027 19:40:23.750348       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1027 19:40:23.750384       1 controller.go:381] "Waiting for informer caches to sync"
	I1027 19:40:23.750396       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1027 19:40:23.750773       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1027 19:40:24.051002       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1027 19:40:24.051035       1 metrics.go:72] Registering metrics
	I1027 19:40:24.051109       1 controller.go:711] "Syncing nftables rules"
	I1027 19:40:33.702439       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1027 19:40:33.702512       1 main.go:301] handling current node
	I1027 19:40:43.701255       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1027 19:40:43.701310       1 main.go:301] handling current node
	I1027 19:40:53.701209       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1027 19:40:53.701280       1 main.go:301] handling current node
	I1027 19:41:03.701244       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1027 19:41:03.701287       1 main.go:301] handling current node
	I1027 19:41:13.702993       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1027 19:41:13.703043       1 main.go:301] handling current node
	
	
	==> kube-apiserver [07e72855c00ee996d65390930e95dec1dbf22e238c37a44a46a98ed17c3b0651] <==
	I1027 19:40:22.735773       1 apf_controller.go:372] Starting API Priority and Fairness config controller
	I1027 19:40:22.795563       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1027 19:40:22.833024       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1027 19:40:22.834222       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1027 19:40:22.834717       1 shared_informer.go:318] Caches are synced for configmaps
	I1027 19:40:22.834725       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1027 19:40:22.834742       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1027 19:40:22.834959       1 aggregator.go:166] initial CRD sync complete...
	I1027 19:40:22.834970       1 autoregister_controller.go:141] Starting autoregister controller
	I1027 19:40:22.834977       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1027 19:40:22.834986       1 cache.go:39] Caches are synced for autoregister controller
	I1027 19:40:22.835979       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1027 19:40:22.836008       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1027 19:40:22.859560       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1027 19:40:23.737706       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1027 19:40:23.761435       1 controller.go:624] quota admission added evaluator for: namespaces
	I1027 19:40:23.798416       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1027 19:40:23.820098       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1027 19:40:23.831196       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1027 19:40:23.845740       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1027 19:40:23.925856       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.110.116.249"}
	I1027 19:40:23.989744       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.192.230"}
	I1027 19:40:35.069098       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1027 19:40:35.518922       1 controller.go:624] quota admission added evaluator for: endpoints
	I1027 19:40:35.570388       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [1415820809db89899722d08ef65bea69fc0e930dddf7cc3246da3d0cf8f8ca35] <==
	I1027 19:40:35.475813       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="82.429µs"
	I1027 19:40:35.476905       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-mb5fm"
	I1027 19:40:35.480011       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-r6m7z"
	I1027 19:40:35.489271       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="415.294345ms"
	I1027 19:40:35.489598       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="416.723355ms"
	I1027 19:40:35.495538       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="5.81027ms"
	I1027 19:40:35.495648       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="61.673µs"
	I1027 19:40:35.498563       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="64.852µs"
	I1027 19:40:35.501498       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="12.151382ms"
	I1027 19:40:35.501599       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="61.573µs"
	I1027 19:40:35.501643       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="27.057µs"
	I1027 19:40:35.513063       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="68.037µs"
	I1027 19:40:35.576337       1 endpointslice_controller.go:310] "Error syncing endpoint slices for service, retrying" key="kubernetes-dashboard/dashboard-metrics-scraper" err="EndpointSlice informer cache is out of date"
	I1027 19:40:35.595441       1 shared_informer.go:318] Caches are synced for garbage collector
	I1027 19:40:35.626947       1 shared_informer.go:318] Caches are synced for garbage collector
	I1027 19:40:35.626980       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1027 19:40:40.996226       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="22.796418ms"
	I1027 19:40:40.996345       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="57.183µs"
	I1027 19:40:43.981035       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="59.837µs"
	I1027 19:40:44.982095       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="67.046µs"
	I1027 19:40:45.988026       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="140.256µs"
	I1027 19:40:58.963202       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="7.104767ms"
	I1027 19:40:58.963340       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="81.542µs"
	I1027 19:41:01.035018       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="70.99µs"
	I1027 19:41:05.799533       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="124.267µs"
	
	
	==> kube-proxy [b928c935db3996d4e2c0bd1959759b9d8b29154925458393549fc24c4cf387fb] <==
	I1027 19:40:23.263631       1 server_others.go:69] "Using iptables proxy"
	I1027 19:40:23.274997       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1027 19:40:23.299722       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1027 19:40:23.302779       1 server_others.go:152] "Using iptables Proxier"
	I1027 19:40:23.302822       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1027 19:40:23.302833       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1027 19:40:23.302893       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1027 19:40:23.303245       1 server.go:846] "Version info" version="v1.28.0"
	I1027 19:40:23.303346       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 19:40:23.304057       1 config.go:188] "Starting service config controller"
	I1027 19:40:23.304497       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1027 19:40:23.304075       1 config.go:97] "Starting endpoint slice config controller"
	I1027 19:40:23.304150       1 config.go:315] "Starting node config controller"
	I1027 19:40:23.307446       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1027 19:40:23.307493       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1027 19:40:23.408199       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1027 19:40:23.408306       1 shared_informer.go:318] Caches are synced for service config
	I1027 19:40:23.408228       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [ef7e54548205b2d8355417aebc97fb016764235b2b1f28d56a8dd8368f3a58d8] <==
	I1027 19:40:20.953035       1 serving.go:348] Generated self-signed cert in-memory
	W1027 19:40:22.778808       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1027 19:40:22.778851       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1027 19:40:22.778868       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1027 19:40:22.778878       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1027 19:40:22.800852       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1027 19:40:22.800890       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 19:40:22.803363       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 19:40:22.803404       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1027 19:40:22.807681       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1027 19:40:22.807962       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1027 19:40:22.906335       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 27 19:40:35 old-k8s-version-468959 kubelet[708]: I1027 19:40:35.485716     708 topology_manager.go:215] "Topology Admit Handler" podUID="1c0e5f44-78ae-4b68-8df4-33d4ff6c4980" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-5f989dc9cf-r6m7z"
	Oct 27 19:40:35 old-k8s-version-468959 kubelet[708]: I1027 19:40:35.568511     708 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/aa553d39-b345-4aaa-badc-a7f124972284-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-mb5fm\" (UID: \"aa553d39-b345-4aaa-badc-a7f124972284\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-mb5fm"
	Oct 27 19:40:35 old-k8s-version-468959 kubelet[708]: I1027 19:40:35.568712     708 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hndz\" (UniqueName: \"kubernetes.io/projected/1c0e5f44-78ae-4b68-8df4-33d4ff6c4980-kube-api-access-5hndz\") pod \"dashboard-metrics-scraper-5f989dc9cf-r6m7z\" (UID: \"1c0e5f44-78ae-4b68-8df4-33d4ff6c4980\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-r6m7z"
	Oct 27 19:40:35 old-k8s-version-468959 kubelet[708]: I1027 19:40:35.568764     708 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/1c0e5f44-78ae-4b68-8df4-33d4ff6c4980-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-r6m7z\" (UID: \"1c0e5f44-78ae-4b68-8df4-33d4ff6c4980\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-r6m7z"
	Oct 27 19:40:35 old-k8s-version-468959 kubelet[708]: I1027 19:40:35.568885     708 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4zcd\" (UniqueName: \"kubernetes.io/projected/aa553d39-b345-4aaa-badc-a7f124972284-kube-api-access-w4zcd\") pod \"kubernetes-dashboard-8694d4445c-mb5fm\" (UID: \"aa553d39-b345-4aaa-badc-a7f124972284\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-mb5fm"
	Oct 27 19:40:43 old-k8s-version-468959 kubelet[708]: I1027 19:40:43.965854     708 scope.go:117] "RemoveContainer" containerID="88a7fe8d90dc09d19e5b3221783bb4d018b72eab2e09644a80d5946dc283df4f"
	Oct 27 19:40:43 old-k8s-version-468959 kubelet[708]: I1027 19:40:43.981030     708 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-mb5fm" podStartSLOduration=4.638243419 podCreationTimestamp="2025-10-27 19:40:35 +0000 UTC" firstStartedPulling="2025-10-27 19:40:35.809035922 +0000 UTC m=+16.112745207" lastFinishedPulling="2025-10-27 19:40:40.151742571 +0000 UTC m=+20.455451865" observedRunningTime="2025-10-27 19:40:40.974701804 +0000 UTC m=+21.278411106" watchObservedRunningTime="2025-10-27 19:40:43.980950077 +0000 UTC m=+24.284659373"
	Oct 27 19:40:44 old-k8s-version-468959 kubelet[708]: I1027 19:40:44.969822     708 scope.go:117] "RemoveContainer" containerID="88a7fe8d90dc09d19e5b3221783bb4d018b72eab2e09644a80d5946dc283df4f"
	Oct 27 19:40:44 old-k8s-version-468959 kubelet[708]: I1027 19:40:44.969997     708 scope.go:117] "RemoveContainer" containerID="b9692750c6802429c9250e188f4cf6dc0f0f123f6df32b84aa4a245a6bd40e60"
	Oct 27 19:40:44 old-k8s-version-468959 kubelet[708]: E1027 19:40:44.970416     708 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-r6m7z_kubernetes-dashboard(1c0e5f44-78ae-4b68-8df4-33d4ff6c4980)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-r6m7z" podUID="1c0e5f44-78ae-4b68-8df4-33d4ff6c4980"
	Oct 27 19:40:45 old-k8s-version-468959 kubelet[708]: I1027 19:40:45.976965     708 scope.go:117] "RemoveContainer" containerID="b9692750c6802429c9250e188f4cf6dc0f0f123f6df32b84aa4a245a6bd40e60"
	Oct 27 19:40:45 old-k8s-version-468959 kubelet[708]: E1027 19:40:45.977272     708 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-r6m7z_kubernetes-dashboard(1c0e5f44-78ae-4b68-8df4-33d4ff6c4980)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-r6m7z" podUID="1c0e5f44-78ae-4b68-8df4-33d4ff6c4980"
	Oct 27 19:40:46 old-k8s-version-468959 kubelet[708]: I1027 19:40:46.979967     708 scope.go:117] "RemoveContainer" containerID="b9692750c6802429c9250e188f4cf6dc0f0f123f6df32b84aa4a245a6bd40e60"
	Oct 27 19:40:46 old-k8s-version-468959 kubelet[708]: E1027 19:40:46.980247     708 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-r6m7z_kubernetes-dashboard(1c0e5f44-78ae-4b68-8df4-33d4ff6c4980)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-r6m7z" podUID="1c0e5f44-78ae-4b68-8df4-33d4ff6c4980"
	Oct 27 19:40:53 old-k8s-version-468959 kubelet[708]: I1027 19:40:53.998389     708 scope.go:117] "RemoveContainer" containerID="2f249517b99aca10f8d7cbf2e67e155472a7f47554aaf0bd3f1fe9dc0c41d3f7"
	Oct 27 19:41:00 old-k8s-version-468959 kubelet[708]: I1027 19:41:00.847354     708 scope.go:117] "RemoveContainer" containerID="b9692750c6802429c9250e188f4cf6dc0f0f123f6df32b84aa4a245a6bd40e60"
	Oct 27 19:41:01 old-k8s-version-468959 kubelet[708]: I1027 19:41:01.021375     708 scope.go:117] "RemoveContainer" containerID="b9692750c6802429c9250e188f4cf6dc0f0f123f6df32b84aa4a245a6bd40e60"
	Oct 27 19:41:01 old-k8s-version-468959 kubelet[708]: I1027 19:41:01.021613     708 scope.go:117] "RemoveContainer" containerID="f90740a0e28b478c1a0658aadb18b23d89ba64b844c2ab857f4e83834b57f69b"
	Oct 27 19:41:01 old-k8s-version-468959 kubelet[708]: E1027 19:41:01.022006     708 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-r6m7z_kubernetes-dashboard(1c0e5f44-78ae-4b68-8df4-33d4ff6c4980)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-r6m7z" podUID="1c0e5f44-78ae-4b68-8df4-33d4ff6c4980"
	Oct 27 19:41:05 old-k8s-version-468959 kubelet[708]: I1027 19:41:05.788435     708 scope.go:117] "RemoveContainer" containerID="f90740a0e28b478c1a0658aadb18b23d89ba64b844c2ab857f4e83834b57f69b"
	Oct 27 19:41:05 old-k8s-version-468959 kubelet[708]: E1027 19:41:05.788759     708 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-r6m7z_kubernetes-dashboard(1c0e5f44-78ae-4b68-8df4-33d4ff6c4980)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-r6m7z" podUID="1c0e5f44-78ae-4b68-8df4-33d4ff6c4980"
	Oct 27 19:41:12 old-k8s-version-468959 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 27 19:41:12 old-k8s-version-468959 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 27 19:41:12 old-k8s-version-468959 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 27 19:41:12 old-k8s-version-468959 systemd[1]: kubelet.service: Consumed 1.722s CPU time.
	
	
	==> kubernetes-dashboard [12d4f512371d8f5ce0f213cf3965c8a627febbdcc48831c69b8f3313bbdf87af] <==
	2025/10/27 19:40:40 Starting overwatch
	2025/10/27 19:40:40 Using namespace: kubernetes-dashboard
	2025/10/27 19:40:40 Using in-cluster config to connect to apiserver
	2025/10/27 19:40:40 Using secret token for csrf signing
	2025/10/27 19:40:40 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/27 19:40:40 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/27 19:40:40 Successful initial request to the apiserver, version: v1.28.0
	2025/10/27 19:40:40 Generating JWE encryption key
	2025/10/27 19:40:40 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/27 19:40:40 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/27 19:40:40 Initializing JWE encryption key from synchronized object
	2025/10/27 19:40:40 Creating in-cluster Sidecar client
	2025/10/27 19:40:40 Serving insecurely on HTTP port: 9090
	2025/10/27 19:40:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/27 19:41:10 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [2f249517b99aca10f8d7cbf2e67e155472a7f47554aaf0bd3f1fe9dc0c41d3f7] <==
	I1027 19:40:23.207718       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1027 19:40:53.211754       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [c3d66e2dd322da5d8554d09ea3b176065c6fe4ba6f6c1b0ca6612474fc69cd91] <==
	I1027 19:40:54.047329       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1027 19:40:54.055847       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1027 19:40:54.055888       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1027 19:41:11.457368       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1027 19:41:11.457524       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"677bc0f8-1050-43ba-894e-0ebdacb32030", APIVersion:"v1", ResourceVersion:"623", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-468959_5168af45-d2fa-46b2-bc4a-7e149f799f2c became leader
	I1027 19:41:11.457604       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-468959_5168af45-d2fa-46b2-bc4a-7e149f799f2c!
	I1027 19:41:11.558187       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-468959_5168af45-d2fa-46b2-bc4a-7e149f799f2c!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-468959 -n old-k8s-version-468959
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-468959 -n old-k8s-version-468959: exit status 2 (371.78926ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-468959 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-468959
helpers_test.go:243: (dbg) docker inspect old-k8s-version-468959:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2e0353db62d926cc83bef0d3fa107c768d6d452b830c383908ae17268301278e",
	        "Created": "2025-10-27T19:38:59.515462878Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 585024,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-27T19:40:13.071868777Z",
	            "FinishedAt": "2025-10-27T19:40:12.058504283Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/2e0353db62d926cc83bef0d3fa107c768d6d452b830c383908ae17268301278e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2e0353db62d926cc83bef0d3fa107c768d6d452b830c383908ae17268301278e/hostname",
	        "HostsPath": "/var/lib/docker/containers/2e0353db62d926cc83bef0d3fa107c768d6d452b830c383908ae17268301278e/hosts",
	        "LogPath": "/var/lib/docker/containers/2e0353db62d926cc83bef0d3fa107c768d6d452b830c383908ae17268301278e/2e0353db62d926cc83bef0d3fa107c768d6d452b830c383908ae17268301278e-json.log",
	        "Name": "/old-k8s-version-468959",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-468959:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-468959",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2e0353db62d926cc83bef0d3fa107c768d6d452b830c383908ae17268301278e",
	                "LowerDir": "/var/lib/docker/overlay2/ce8ba90743d105752eb907923a1422d963b8a7959aac8ff55c461d4eb853b209-init/diff:/var/lib/docker/overlay2/71b61ec94610a35f2d924dec358052d4c154c36b3fe219802f60246ca2dc7f45/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ce8ba90743d105752eb907923a1422d963b8a7959aac8ff55c461d4eb853b209/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ce8ba90743d105752eb907923a1422d963b8a7959aac8ff55c461d4eb853b209/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ce8ba90743d105752eb907923a1422d963b8a7959aac8ff55c461d4eb853b209/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-468959",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-468959/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-468959",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-468959",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-468959",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5d361cca06cc890a42668988ef8b50ed4dbf136e7bb39c84b11dd19440fb41b0",
	            "SandboxKey": "/var/run/docker/netns/5d361cca06cc",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33435"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33436"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33439"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33437"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33438"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-468959": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7a:5e:a2:03:69:13",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0308d3f30614fde66189d573d65372f0d31056c699858ced2c5f17d155a2bb0c",
	                    "EndpointID": "e64542148d7f9afba07e099a8877475585ce3c508de9b014647a749f24271a36",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-468959",
	                        "2e0353db62d9"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-468959 -n old-k8s-version-468959
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-468959 -n old-k8s-version-468959: exit status 2 (349.785397ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-468959 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-468959 logs -n 25: (1.397705182s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                              ARGS                                                                               │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh       │ functional-051715 ssh echo hello                                                                                                                                │ functional-051715      │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │ 27 Oct 25 19:04 UTC │
	│ ssh       │ functional-051715 ssh cat /etc/hostname                                                                                                                         │ functional-051715      │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │ 27 Oct 25 19:04 UTC │
	│ tunnel    │ functional-051715 tunnel --alsologtostderr                                                                                                                      │ functional-051715      │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │                     │
	│ tunnel    │ functional-051715 tunnel --alsologtostderr                                                                                                                      │ functional-051715      │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │                     │
	│ stop      │ -p embed-certs-919237 --alsologtostderr -v=3                                                                                                                    │ embed-certs-919237     │ jenkins │ v1.37.0 │ 27 Oct 25 19:40 UTC │ 27 Oct 25 19:41 UTC │
	│ tunnel    │ functional-051715 tunnel --alsologtostderr                                                                                                                      │ functional-051715      │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-051715 --alsologtostderr -v=1                                                                                                  │ functional-051715      │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │ 27 Oct 25 19:04 UTC │
	│ start     │ -p functional-051715 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio                                                       │ functional-051715      │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │                     │
	│ start     │ -p functional-051715 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                 │ functional-051715      │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │                     │
	│ addons    │ functional-051715 addons list                                                                                                                                   │ functional-051715      │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │ 27 Oct 25 19:04 UTC │
	│ addons    │ functional-051715 addons list -o json                                                                                                                           │ functional-051715      │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │ 27 Oct 25 19:04 UTC │
	│ image     │ functional-051715 image load --daemon kicbase/echo-server:functional-051715 --alsologtostderr                                                                   │ functional-051715      │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │ 27 Oct 25 19:04 UTC │
	│ image     │ functional-051715 image ls                                                                                                                                      │ functional-051715      │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │ 27 Oct 25 19:04 UTC │
	│ image     │ functional-051715 image load --daemon kicbase/echo-server:functional-051715 --alsologtostderr                                                                   │ functional-051715      │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │ 27 Oct 25 19:04 UTC │
	│ image     │ functional-051715 image ls                                                                                                                                      │ functional-051715      │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │ 27 Oct 25 19:04 UTC │
	│ image     │ functional-051715 image load --daemon kicbase/echo-server:functional-051715 --alsologtostderr                                                                   │ functional-051715      │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │ 27 Oct 25 19:04 UTC │
	│ image     │ functional-051715 image ls                                                                                                                                      │ functional-051715      │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │ 27 Oct 25 19:04 UTC │
	│ image     │ functional-051715 image save kicbase/echo-server:functional-051715 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr │ functional-051715      │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │ 27 Oct 25 19:04 UTC │
	│ image     │ functional-051715 image rm kicbase/echo-server:functional-051715 --alsologtostderr                                                                              │ functional-051715      │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │ 27 Oct 25 19:04 UTC │
	│ addons    │ enable dashboard -p embed-certs-919237 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                   │ embed-certs-919237     │ jenkins │ v1.37.0 │ 27 Oct 25 19:41 UTC │ 27 Oct 25 19:41 UTC │
	│ start     │ -p embed-certs-919237 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1          │ embed-certs-919237     │ jenkins │ v1.37.0 │ 27 Oct 25 19:41 UTC │                     │
	│ image     │ old-k8s-version-468959 image list --format=json                                                                                                                 │ old-k8s-version-468959 │ jenkins │ v1.37.0 │ 27 Oct 25 19:41 UTC │ 27 Oct 25 19:41 UTC │
	│ pause     │ -p old-k8s-version-468959 --alsologtostderr -v=1                                                                                                                │ old-k8s-version-468959 │ jenkins │ v1.37.0 │ 27 Oct 25 19:41 UTC │                     │
	│ addons    │ enable metrics-server -p no-preload-095885 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                         │ no-preload-095885      │ jenkins │ v1.37.0 │ 27 Oct 25 19:41 UTC │                     │
	│ stop      │ -p no-preload-095885 --alsologtostderr -v=3                                                                                                                     │ no-preload-095885      │ jenkins │ v1.37.0 │ 27 Oct 25 19:41 UTC │                     │
	└───────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 19:41:00
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 19:41:00.814297  594803 out.go:360] Setting OutFile to fd 1 ...
	I1027 19:41:00.814654  594803 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:41:00.814666  594803 out.go:374] Setting ErrFile to fd 2...
	I1027 19:41:00.814672  594803 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:41:00.815019  594803 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-352833/.minikube/bin
	I1027 19:41:00.815611  594803 out.go:368] Setting JSON to false
	I1027 19:41:00.819938  594803 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":8610,"bootTime":1761585451,"procs":357,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1027 19:41:00.820105  594803 start.go:141] virtualization: kvm guest
	I1027 19:41:00.822276  594803 out.go:179] * [embed-certs-919237] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1027 19:41:00.824552  594803 notify.go:220] Checking for updates...
	I1027 19:41:00.824589  594803 out.go:179]   - MINIKUBE_LOCATION=21801
	I1027 19:41:00.825920  594803 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 19:41:00.827493  594803 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21801-352833/kubeconfig
	I1027 19:41:00.829068  594803 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21801-352833/.minikube
	I1027 19:41:00.830346  594803 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1027 19:41:00.831676  594803 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 19:41:00.833634  594803 config.go:182] Loaded profile config "embed-certs-919237": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:41:00.834328  594803 driver.go:421] Setting default libvirt URI to qemu:///system
	I1027 19:41:00.865817  594803 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1027 19:41:00.865940  594803 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 19:41:00.939681  594803 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-27 19:41:00.928512266 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 19:41:00.939791  594803 docker.go:318] overlay module found
	I1027 19:41:00.942901  594803 out.go:179] * Using the docker driver based on existing profile
	I1027 19:41:00.944254  594803 start.go:305] selected driver: docker
	I1027 19:41:00.944276  594803 start.go:925] validating driver "docker" against &{Name:embed-certs-919237 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-919237 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 19:41:00.944438  594803 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 19:41:00.945045  594803 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 19:41:01.009596  594803 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-27 19:41:00.998454107 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 19:41:01.009899  594803 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 19:41:01.009935  594803 cni.go:84] Creating CNI manager for ""
	I1027 19:41:01.009994  594803 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 19:41:01.010033  594803 start.go:349] cluster config:
	{Name:embed-certs-919237 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-919237 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 19:41:01.012102  594803 out.go:179] * Starting "embed-certs-919237" primary control-plane node in "embed-certs-919237" cluster
	I1027 19:41:01.013642  594803 cache.go:123] Beginning downloading kic base image for docker with crio
	I1027 19:41:01.015027  594803 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1027 19:41:01.016245  594803 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 19:41:01.016338  594803 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21801-352833/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1027 19:41:01.016364  594803 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1027 19:41:01.016374  594803 cache.go:58] Caching tarball of preloaded images
	I1027 19:41:01.016491  594803 preload.go:233] Found /home/jenkins/minikube-integration/21801-352833/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1027 19:41:01.016508  594803 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1027 19:41:01.016671  594803 profile.go:143] Saving config to /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/embed-certs-919237/config.json ...
	I1027 19:41:01.043736  594803 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1027 19:41:01.043771  594803 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1027 19:41:01.043794  594803 cache.go:232] Successfully downloaded all kic artifacts
	I1027 19:41:01.043828  594803 start.go:360] acquireMachinesLock for embed-certs-919237: {Name:mka6dd5e9788015cfc40a76e0480af6167e6c17e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 19:41:01.043925  594803 start.go:364] duration metric: took 53.412µs to acquireMachinesLock for "embed-certs-919237"
	I1027 19:41:01.043948  594803 start.go:96] Skipping create...Using existing machine configuration
	I1027 19:41:01.043956  594803 fix.go:54] fixHost starting: 
	I1027 19:41:01.044294  594803 cli_runner.go:164] Run: docker container inspect embed-certs-919237 --format={{.State.Status}}
	I1027 19:41:01.063875  594803 fix.go:112] recreateIfNeeded on embed-certs-919237: state=Stopped err=<nil>
	W1027 19:41:01.063922  594803 fix.go:138] unexpected machine state, will restart: <nil>
	I1027 19:40:58.026030  565798 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1027 19:40:58.026613  565798 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1027 19:40:58.026685  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 19:40:58.026737  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 19:40:58.057129  565798 cri.go:89] found id: "f34fc0e2944cfceb17dc29d80d9459a84377ffb9e427b9efe6e57ccc03a358c8"
	I1027 19:40:58.057167  565798 cri.go:89] found id: ""
	I1027 19:40:58.057177  565798 logs.go:282] 1 containers: [f34fc0e2944cfceb17dc29d80d9459a84377ffb9e427b9efe6e57ccc03a358c8]
	I1027 19:40:58.057246  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:40:58.061704  565798 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 19:40:58.061775  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 19:40:58.090405  565798 cri.go:89] found id: ""
	I1027 19:40:58.090438  565798 logs.go:282] 0 containers: []
	W1027 19:40:58.090450  565798 logs.go:284] No container was found matching "etcd"
	I1027 19:40:58.090459  565798 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 19:40:58.090524  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 19:40:58.120023  565798 cri.go:89] found id: ""
	I1027 19:40:58.120053  565798 logs.go:282] 0 containers: []
	W1027 19:40:58.120064  565798 logs.go:284] No container was found matching "coredns"
	I1027 19:40:58.120074  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 19:40:58.120150  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 19:40:58.150017  565798 cri.go:89] found id: "15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8"
	I1027 19:40:58.150043  565798 cri.go:89] found id: ""
	I1027 19:40:58.150052  565798 logs.go:282] 1 containers: [15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8]
	I1027 19:40:58.150108  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:40:58.154647  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 19:40:58.154712  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 19:40:58.183854  565798 cri.go:89] found id: ""
	I1027 19:40:58.183879  565798 logs.go:282] 0 containers: []
	W1027 19:40:58.183888  565798 logs.go:284] No container was found matching "kube-proxy"
	I1027 19:40:58.183894  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 19:40:58.183943  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 19:40:58.212083  565798 cri.go:89] found id: "38a297db55d8c45b09d97e05e26cfe590841ea63fa7ee874cf818f8ae5fcff77"
	I1027 19:40:58.212102  565798 cri.go:89] found id: "df060ac929bc7a5dac337c7e85e10b2f4a51413be70b8202c8307826c4a72947"
	I1027 19:40:58.212106  565798 cri.go:89] found id: ""
	I1027 19:40:58.212114  565798 logs.go:282] 2 containers: [38a297db55d8c45b09d97e05e26cfe590841ea63fa7ee874cf818f8ae5fcff77 df060ac929bc7a5dac337c7e85e10b2f4a51413be70b8202c8307826c4a72947]
	I1027 19:40:58.212185  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:40:58.216480  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:40:58.220450  565798 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 19:40:58.220522  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 19:40:58.249431  565798 cri.go:89] found id: ""
	I1027 19:40:58.249455  565798 logs.go:282] 0 containers: []
	W1027 19:40:58.249463  565798 logs.go:284] No container was found matching "kindnet"
	I1027 19:40:58.249469  565798 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 19:40:58.249515  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 19:40:58.278301  565798 cri.go:89] found id: ""
	I1027 19:40:58.278327  565798 logs.go:282] 0 containers: []
	W1027 19:40:58.278334  565798 logs.go:284] No container was found matching "storage-provisioner"
	I1027 19:40:58.278352  565798 logs.go:123] Gathering logs for kubelet ...
	I1027 19:40:58.278366  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 19:40:58.361232  565798 logs.go:123] Gathering logs for dmesg ...
	I1027 19:40:58.361276  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 19:40:58.384714  565798 logs.go:123] Gathering logs for kube-controller-manager [df060ac929bc7a5dac337c7e85e10b2f4a51413be70b8202c8307826c4a72947] ...
	I1027 19:40:58.384753  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 df060ac929bc7a5dac337c7e85e10b2f4a51413be70b8202c8307826c4a72947"
	I1027 19:40:58.415348  565798 logs.go:123] Gathering logs for CRI-O ...
	I1027 19:40:58.415382  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 19:40:58.463651  565798 logs.go:123] Gathering logs for container status ...
	I1027 19:40:58.463690  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 19:40:58.498078  565798 logs.go:123] Gathering logs for describe nodes ...
	I1027 19:40:58.498125  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 19:40:58.558995  565798 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 19:40:58.559018  565798 logs.go:123] Gathering logs for kube-apiserver [f34fc0e2944cfceb17dc29d80d9459a84377ffb9e427b9efe6e57ccc03a358c8] ...
	I1027 19:40:58.559035  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f34fc0e2944cfceb17dc29d80d9459a84377ffb9e427b9efe6e57ccc03a358c8"
	I1027 19:40:58.594584  565798 logs.go:123] Gathering logs for kube-scheduler [15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8] ...
	I1027 19:40:58.594625  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8"
	I1027 19:40:58.645514  565798 logs.go:123] Gathering logs for kube-controller-manager [38a297db55d8c45b09d97e05e26cfe590841ea63fa7ee874cf818f8ae5fcff77] ...
	I1027 19:40:58.645551  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 38a297db55d8c45b09d97e05e26cfe590841ea63fa7ee874cf818f8ae5fcff77"
	I1027 19:41:01.178225  565798 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1027 19:41:01.178694  565798 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1027 19:41:01.178745  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 19:41:01.178791  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 19:41:01.210901  565798 cri.go:89] found id: "f34fc0e2944cfceb17dc29d80d9459a84377ffb9e427b9efe6e57ccc03a358c8"
	I1027 19:41:01.210925  565798 cri.go:89] found id: ""
	I1027 19:41:01.210936  565798 logs.go:282] 1 containers: [f34fc0e2944cfceb17dc29d80d9459a84377ffb9e427b9efe6e57ccc03a358c8]
	I1027 19:41:01.211006  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:41:01.215571  565798 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 19:41:01.215658  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 19:41:01.247466  565798 cri.go:89] found id: ""
	I1027 19:41:01.247503  565798 logs.go:282] 0 containers: []
	W1027 19:41:01.247514  565798 logs.go:284] No container was found matching "etcd"
	I1027 19:41:01.247523  565798 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 19:41:01.247591  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 19:41:01.281986  565798 cri.go:89] found id: ""
	I1027 19:41:01.282024  565798 logs.go:282] 0 containers: []
	W1027 19:41:01.282036  565798 logs.go:284] No container was found matching "coredns"
	I1027 19:41:01.282044  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 19:41:01.282106  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 19:41:01.312897  565798 cri.go:89] found id: "15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8"
	I1027 19:41:01.312929  565798 cri.go:89] found id: ""
	I1027 19:41:01.312940  565798 logs.go:282] 1 containers: [15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8]
	I1027 19:41:01.313010  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:41:01.317732  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 19:41:01.317823  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 19:41:01.349672  565798 cri.go:89] found id: ""
	I1027 19:41:01.349702  565798 logs.go:282] 0 containers: []
	W1027 19:41:01.349714  565798 logs.go:284] No container was found matching "kube-proxy"
	I1027 19:41:01.349722  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 19:41:01.349783  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 19:41:01.383805  565798 cri.go:89] found id: "38a297db55d8c45b09d97e05e26cfe590841ea63fa7ee874cf818f8ae5fcff77"
	I1027 19:41:01.383830  565798 cri.go:89] found id: ""
	I1027 19:41:01.383842  565798 logs.go:282] 1 containers: [38a297db55d8c45b09d97e05e26cfe590841ea63fa7ee874cf818f8ae5fcff77]
	I1027 19:41:01.383906  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:41:01.388901  565798 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 19:41:01.388976  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 19:41:01.421041  565798 cri.go:89] found id: ""
	I1027 19:41:01.421066  565798 logs.go:282] 0 containers: []
	W1027 19:41:01.421074  565798 logs.go:284] No container was found matching "kindnet"
	I1027 19:41:01.421082  565798 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 19:41:01.421184  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 19:41:01.451707  565798 cri.go:89] found id: ""
	I1027 19:41:01.451736  565798 logs.go:282] 0 containers: []
	W1027 19:41:01.451744  565798 logs.go:284] No container was found matching "storage-provisioner"
	I1027 19:41:01.451754  565798 logs.go:123] Gathering logs for CRI-O ...
	I1027 19:41:01.451766  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 19:41:01.510573  565798 logs.go:123] Gathering logs for container status ...
	I1027 19:41:01.510618  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1027 19:41:00.819934  585556 node_ready.go:57] node "no-preload-095885" has "Ready":"False" status (will retry)
	I1027 19:41:02.819169  585556 node_ready.go:49] node "no-preload-095885" is "Ready"
	I1027 19:41:02.819209  585556 node_ready.go:38] duration metric: took 13.003808085s for node "no-preload-095885" to be "Ready" ...
	I1027 19:41:02.819229  585556 api_server.go:52] waiting for apiserver process to appear ...
	I1027 19:41:02.819306  585556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 19:41:02.833188  585556 api_server.go:72] duration metric: took 13.35947841s to wait for apiserver process to appear ...
	I1027 19:41:02.833220  585556 api_server.go:88] waiting for apiserver healthz status ...
	I1027 19:41:02.833241  585556 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1027 19:41:02.838750  585556 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1027 19:41:02.839890  585556 api_server.go:141] control plane version: v1.34.1
	I1027 19:41:02.839920  585556 api_server.go:131] duration metric: took 6.693245ms to wait for apiserver health ...
	I1027 19:41:02.839930  585556 system_pods.go:43] waiting for kube-system pods to appear ...
	I1027 19:41:02.843755  585556 system_pods.go:59] 8 kube-system pods found
	I1027 19:41:02.843791  585556 system_pods.go:61] "coredns-66bc5c9577-gwqvg" [3bcd75c1-f42f-4252-b1fc-2bdab3c8373e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 19:41:02.843797  585556 system_pods.go:61] "etcd-no-preload-095885" [398272ac-d5cc-44d6-bf2a-3469d316b417] Running
	I1027 19:41:02.843803  585556 system_pods.go:61] "kindnet-8lbz5" [42b05fb3-87d3-412f-ac73-cb73a737aab1] Running
	I1027 19:41:02.843807  585556 system_pods.go:61] "kube-apiserver-no-preload-095885" [d609db88-4097-43b5-b881-a445344edf64] Running
	I1027 19:41:02.843811  585556 system_pods.go:61] "kube-controller-manager-no-preload-095885" [b1bfd486-ed1f-4f8b-a08b-de7739f1dd9c] Running
	I1027 19:41:02.843814  585556 system_pods.go:61] "kube-proxy-wz64m" [339cb07c-5319-4d8b-ab61-a6d377c2bc61] Running
	I1027 19:41:02.843817  585556 system_pods.go:61] "kube-scheduler-no-preload-095885" [7ba1709a-c913-40f3-833b-bee63057ce6e] Running
	I1027 19:41:02.843822  585556 system_pods.go:61] "storage-provisioner" [e8283562-be98-444b-b591-a0239860e729] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 19:41:02.843829  585556 system_pods.go:74] duration metric: took 3.89196ms to wait for pod list to return data ...
	I1027 19:41:02.843841  585556 default_sa.go:34] waiting for default service account to be created ...
	I1027 19:41:02.846583  585556 default_sa.go:45] found service account: "default"
	I1027 19:41:02.846611  585556 default_sa.go:55] duration metric: took 2.763753ms for default service account to be created ...
	I1027 19:41:02.846622  585556 system_pods.go:116] waiting for k8s-apps to be running ...
	I1027 19:41:02.849879  585556 system_pods.go:86] 8 kube-system pods found
	I1027 19:41:02.849914  585556 system_pods.go:89] "coredns-66bc5c9577-gwqvg" [3bcd75c1-f42f-4252-b1fc-2bdab3c8373e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 19:41:02.849920  585556 system_pods.go:89] "etcd-no-preload-095885" [398272ac-d5cc-44d6-bf2a-3469d316b417] Running
	I1027 19:41:02.849926  585556 system_pods.go:89] "kindnet-8lbz5" [42b05fb3-87d3-412f-ac73-cb73a737aab1] Running
	I1027 19:41:02.849930  585556 system_pods.go:89] "kube-apiserver-no-preload-095885" [d609db88-4097-43b5-b881-a445344edf64] Running
	I1027 19:41:02.849935  585556 system_pods.go:89] "kube-controller-manager-no-preload-095885" [b1bfd486-ed1f-4f8b-a08b-de7739f1dd9c] Running
	I1027 19:41:02.849938  585556 system_pods.go:89] "kube-proxy-wz64m" [339cb07c-5319-4d8b-ab61-a6d377c2bc61] Running
	I1027 19:41:02.849942  585556 system_pods.go:89] "kube-scheduler-no-preload-095885" [7ba1709a-c913-40f3-833b-bee63057ce6e] Running
	I1027 19:41:02.849946  585556 system_pods.go:89] "storage-provisioner" [e8283562-be98-444b-b591-a0239860e729] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 19:41:02.849981  585556 retry.go:31] will retry after 208.530125ms: missing components: kube-dns
	I1027 19:41:03.063213  585556 system_pods.go:86] 8 kube-system pods found
	I1027 19:41:03.063246  585556 system_pods.go:89] "coredns-66bc5c9577-gwqvg" [3bcd75c1-f42f-4252-b1fc-2bdab3c8373e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 19:41:03.063252  585556 system_pods.go:89] "etcd-no-preload-095885" [398272ac-d5cc-44d6-bf2a-3469d316b417] Running
	I1027 19:41:03.063259  585556 system_pods.go:89] "kindnet-8lbz5" [42b05fb3-87d3-412f-ac73-cb73a737aab1] Running
	I1027 19:41:03.063269  585556 system_pods.go:89] "kube-apiserver-no-preload-095885" [d609db88-4097-43b5-b881-a445344edf64] Running
	I1027 19:41:03.063273  585556 system_pods.go:89] "kube-controller-manager-no-preload-095885" [b1bfd486-ed1f-4f8b-a08b-de7739f1dd9c] Running
	I1027 19:41:03.063277  585556 system_pods.go:89] "kube-proxy-wz64m" [339cb07c-5319-4d8b-ab61-a6d377c2bc61] Running
	I1027 19:41:03.063283  585556 system_pods.go:89] "kube-scheduler-no-preload-095885" [7ba1709a-c913-40f3-833b-bee63057ce6e] Running
	I1027 19:41:03.063290  585556 system_pods.go:89] "storage-provisioner" [e8283562-be98-444b-b591-a0239860e729] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 19:41:03.063312  585556 retry.go:31] will retry after 387.065987ms: missing components: kube-dns
	I1027 19:41:03.454191  585556 system_pods.go:86] 8 kube-system pods found
	I1027 19:41:03.454223  585556 system_pods.go:89] "coredns-66bc5c9577-gwqvg" [3bcd75c1-f42f-4252-b1fc-2bdab3c8373e] Running
	I1027 19:41:03.454229  585556 system_pods.go:89] "etcd-no-preload-095885" [398272ac-d5cc-44d6-bf2a-3469d316b417] Running
	I1027 19:41:03.454233  585556 system_pods.go:89] "kindnet-8lbz5" [42b05fb3-87d3-412f-ac73-cb73a737aab1] Running
	I1027 19:41:03.454236  585556 system_pods.go:89] "kube-apiserver-no-preload-095885" [d609db88-4097-43b5-b881-a445344edf64] Running
	I1027 19:41:03.454241  585556 system_pods.go:89] "kube-controller-manager-no-preload-095885" [b1bfd486-ed1f-4f8b-a08b-de7739f1dd9c] Running
	I1027 19:41:03.454244  585556 system_pods.go:89] "kube-proxy-wz64m" [339cb07c-5319-4d8b-ab61-a6d377c2bc61] Running
	I1027 19:41:03.454248  585556 system_pods.go:89] "kube-scheduler-no-preload-095885" [7ba1709a-c913-40f3-833b-bee63057ce6e] Running
	I1027 19:41:03.454251  585556 system_pods.go:89] "storage-provisioner" [e8283562-be98-444b-b591-a0239860e729] Running
	I1027 19:41:03.454261  585556 system_pods.go:126] duration metric: took 607.631414ms to wait for k8s-apps to be running ...
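(Illustrative sketch only: the growing "will retry after ..." intervals above come from a randomized-backoff retry loop. The 200ms seed, doubling factor, and attempt count below are assumptions for illustration, not minikube's exact retry parameters.)

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff re-runs check until it succeeds or attempts run out,
// sleeping a randomized, growing interval between tries -- the same shape
// as the retry.go lines in the log above.
func retryWithBackoff(attempts int, check func() error) error {
	backoff := 200 * time.Millisecond
	for i := 0; i < attempts; i++ {
		if err := check(); err == nil {
			return nil
		}
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("will retry after %v\n", sleep)
		time.Sleep(sleep)
		backoff *= 2
	}
	return errors.New("condition never became true")
}

func main() {
	ready := false
	_ = retryWithBackoff(5, func() error {
		if !ready { // stand-in for "is kube-dns Running yet?"
			ready = true
			return errors.New("missing components: kube-dns")
		}
		return nil
	})
}
```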
	I1027 19:41:03.454271  585556 system_svc.go:44] waiting for kubelet service to be running ....
	I1027 19:41:03.454342  585556 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 19:41:03.469661  585556 system_svc.go:56] duration metric: took 15.375165ms WaitForService to wait for kubelet
	I1027 19:41:03.469692  585556 kubeadm.go:586] duration metric: took 13.995993942s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 19:41:03.469713  585556 node_conditions.go:102] verifying NodePressure condition ...
	I1027 19:41:03.473051  585556 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1027 19:41:03.473084  585556 node_conditions.go:123] node cpu capacity is 8
	I1027 19:41:03.473098  585556 node_conditions.go:105] duration metric: took 3.378892ms to run NodePressure ...
	I1027 19:41:03.473110  585556 start.go:241] waiting for startup goroutines ...
	I1027 19:41:03.473116  585556 start.go:246] waiting for cluster config update ...
	I1027 19:41:03.473127  585556 start.go:255] writing updated cluster config ...
	I1027 19:41:03.473547  585556 ssh_runner.go:195] Run: rm -f paused
	I1027 19:41:03.478479  585556 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 19:41:03.482432  585556 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-gwqvg" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:41:03.487649  585556 pod_ready.go:94] pod "coredns-66bc5c9577-gwqvg" is "Ready"
	I1027 19:41:03.487680  585556 pod_ready.go:86] duration metric: took 5.219183ms for pod "coredns-66bc5c9577-gwqvg" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:41:03.489989  585556 pod_ready.go:83] waiting for pod "etcd-no-preload-095885" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:41:03.494299  585556 pod_ready.go:94] pod "etcd-no-preload-095885" is "Ready"
	I1027 19:41:03.494327  585556 pod_ready.go:86] duration metric: took 4.312641ms for pod "etcd-no-preload-095885" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:41:03.496451  585556 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-095885" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:41:03.500973  585556 pod_ready.go:94] pod "kube-apiserver-no-preload-095885" is "Ready"
	I1027 19:41:03.501001  585556 pod_ready.go:86] duration metric: took 4.521998ms for pod "kube-apiserver-no-preload-095885" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:41:03.503226  585556 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-095885" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:41:03.883037  585556 pod_ready.go:94] pod "kube-controller-manager-no-preload-095885" is "Ready"
	I1027 19:41:03.883068  585556 pod_ready.go:86] duration metric: took 379.813717ms for pod "kube-controller-manager-no-preload-095885" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:41:04.083654  585556 pod_ready.go:83] waiting for pod "kube-proxy-wz64m" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:41:04.482474  585556 pod_ready.go:94] pod "kube-proxy-wz64m" is "Ready"
	I1027 19:41:04.482513  585556 pod_ready.go:86] duration metric: took 398.821516ms for pod "kube-proxy-wz64m" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:41:04.682931  585556 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-095885" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:41:05.082246  585556 pod_ready.go:94] pod "kube-scheduler-no-preload-095885" is "Ready"
	I1027 19:41:05.082304  585556 pod_ready.go:86] duration metric: took 399.325532ms for pod "kube-scheduler-no-preload-095885" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:41:05.082322  585556 pod_ready.go:40] duration metric: took 1.603803236s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 19:41:05.130054  585556 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1027 19:41:05.132095  585556 out.go:179] * Done! kubectl is now configured to use "no-preload-095885" cluster and "default" namespace by default
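(Illustrative sketch only: the pod_ready.go checks above poll each kube-system pod for its Ready condition. A minimal client-go version of the same test follows; the kubeconfig path is a placeholder, and minikube's own code is structured differently.)

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady reports whether the named pod has condition Ready=True,
// mirroring the pod_ready.go checks in the log above.
func podIsReady(cs *kubernetes.Clientset, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println(podIsReady(cs, "kube-system", "etcd-no-preload-095885"))
}
```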
	I1027 19:41:01.066520  594803 out.go:252] * Restarting existing docker container for "embed-certs-919237" ...
	I1027 19:41:01.066614  594803 cli_runner.go:164] Run: docker start embed-certs-919237
	I1027 19:41:01.345192  594803 cli_runner.go:164] Run: docker container inspect embed-certs-919237 --format={{.State.Status}}
	I1027 19:41:01.367723  594803 kic.go:430] container "embed-certs-919237" state is running.
	I1027 19:41:01.368113  594803 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-919237
	I1027 19:41:01.390202  594803 profile.go:143] Saving config to /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/embed-certs-919237/config.json ...
	I1027 19:41:01.390514  594803 machine.go:93] provisionDockerMachine start ...
	I1027 19:41:01.390591  594803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-919237
	I1027 19:41:01.413027  594803 main.go:141] libmachine: Using SSH client type: native
	I1027 19:41:01.413398  594803 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33445 <nil> <nil>}
	I1027 19:41:01.413418  594803 main.go:141] libmachine: About to run SSH command:
	hostname
	I1027 19:41:01.414196  594803 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47452->127.0.0.1:33445: read: connection reset by peer
	I1027 19:41:04.563874  594803 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-919237
	
	I1027 19:41:04.563910  594803 ubuntu.go:182] provisioning hostname "embed-certs-919237"
	I1027 19:41:04.563984  594803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-919237
	I1027 19:41:04.585857  594803 main.go:141] libmachine: Using SSH client type: native
	I1027 19:41:04.586108  594803 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33445 <nil> <nil>}
	I1027 19:41:04.586127  594803 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-919237 && echo "embed-certs-919237" | sudo tee /etc/hostname
	I1027 19:41:04.745340  594803 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-919237
	
	I1027 19:41:04.745465  594803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-919237
	I1027 19:41:04.769321  594803 main.go:141] libmachine: Using SSH client type: native
	I1027 19:41:04.769548  594803 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33445 <nil> <nil>}
	I1027 19:41:04.769566  594803 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-919237' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-919237/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-919237' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1027 19:41:04.920012  594803 main.go:141] libmachine: SSH cmd err, output: <nil>: 
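(Illustrative sketch only: the provisioning steps above run shell commands over SSH against the container's forwarded port, 127.0.0.1:33445. A bare-bones equivalent with golang.org/x/crypto/ssh follows; the port, user, and key path are copied from the log, but this is not libmachine's code.)

```go
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// runOverSSH dials the forwarded port and runs one command, roughly what
// libmachine does for the "sudo hostname ..." steps above.
func runOverSSH(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test container
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runOverSSH("127.0.0.1:33445", "docker",
		"/home/jenkins/minikube-integration/21801-352833/.minikube/machines/embed-certs-919237/id_rsa",
		"hostname")
	fmt.Println(out, err)
}
```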
	I1027 19:41:04.920046  594803 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21801-352833/.minikube CaCertPath:/home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21801-352833/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21801-352833/.minikube}
	I1027 19:41:04.920074  594803 ubuntu.go:190] setting up certificates
	I1027 19:41:04.920094  594803 provision.go:84] configureAuth start
	I1027 19:41:04.920183  594803 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-919237
	I1027 19:41:04.943841  594803 provision.go:143] copyHostCerts
	I1027 19:41:04.943927  594803 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-352833/.minikube/cert.pem, removing ...
	I1027 19:41:04.943948  594803 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-352833/.minikube/cert.pem
	I1027 19:41:04.944028  594803 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21801-352833/.minikube/cert.pem (1123 bytes)
	I1027 19:41:04.944239  594803 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-352833/.minikube/key.pem, removing ...
	I1027 19:41:04.944257  594803 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-352833/.minikube/key.pem
	I1027 19:41:04.944296  594803 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21801-352833/.minikube/key.pem (1679 bytes)
	I1027 19:41:04.944383  594803 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-352833/.minikube/ca.pem, removing ...
	I1027 19:41:04.944395  594803 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-352833/.minikube/ca.pem
	I1027 19:41:04.944423  594803 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21801-352833/.minikube/ca.pem (1078 bytes)
	I1027 19:41:04.944475  594803 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21801-352833/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca-key.pem org=jenkins.embed-certs-919237 san=[127.0.0.1 192.168.94.2 embed-certs-919237 localhost minikube]
	I1027 19:41:05.155892  594803 provision.go:177] copyRemoteCerts
	I1027 19:41:05.155953  594803 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1027 19:41:05.156001  594803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-919237
	I1027 19:41:05.177871  594803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33445 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/embed-certs-919237/id_rsa Username:docker}
	I1027 19:41:05.283397  594803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1027 19:41:05.303860  594803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1027 19:41:05.323928  594803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1027 19:41:05.343816  594803 provision.go:87] duration metric: took 423.704232ms to configureAuth
	I1027 19:41:05.343849  594803 ubuntu.go:206] setting minikube options for container-runtime
	I1027 19:41:05.344062  594803 config.go:182] Loaded profile config "embed-certs-919237": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:41:05.344270  594803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-919237
	I1027 19:41:05.364828  594803 main.go:141] libmachine: Using SSH client type: native
	I1027 19:41:05.365067  594803 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33445 <nil> <nil>}
	I1027 19:41:05.365089  594803 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1027 19:41:05.683089  594803 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1027 19:41:05.683117  594803 machine.go:96] duration metric: took 4.292583564s to provisionDockerMachine
	I1027 19:41:05.683160  594803 start.go:293] postStartSetup for "embed-certs-919237" (driver="docker")
	I1027 19:41:05.683178  594803 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1027 19:41:05.683251  594803 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1027 19:41:05.683341  594803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-919237
	I1027 19:41:05.704409  594803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33445 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/embed-certs-919237/id_rsa Username:docker}
	I1027 19:41:05.808620  594803 ssh_runner.go:195] Run: cat /etc/os-release
	I1027 19:41:05.812844  594803 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1027 19:41:05.812879  594803 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1027 19:41:05.812891  594803 filesync.go:126] Scanning /home/jenkins/minikube-integration/21801-352833/.minikube/addons for local assets ...
	I1027 19:41:05.812957  594803 filesync.go:126] Scanning /home/jenkins/minikube-integration/21801-352833/.minikube/files for local assets ...
	I1027 19:41:05.813078  594803 filesync.go:149] local asset: /home/jenkins/minikube-integration/21801-352833/.minikube/files/etc/ssl/certs/3564152.pem -> 3564152.pem in /etc/ssl/certs
	I1027 19:41:05.813222  594803 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1027 19:41:01.544316  565798 logs.go:123] Gathering logs for kubelet ...
	I1027 19:41:01.544346  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 19:41:01.659317  565798 logs.go:123] Gathering logs for dmesg ...
	I1027 19:41:01.659359  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 19:41:01.686121  565798 logs.go:123] Gathering logs for describe nodes ...
	I1027 19:41:01.686169  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 19:41:01.747842  565798 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 19:41:01.747864  565798 logs.go:123] Gathering logs for kube-apiserver [f34fc0e2944cfceb17dc29d80d9459a84377ffb9e427b9efe6e57ccc03a358c8] ...
	I1027 19:41:01.747878  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f34fc0e2944cfceb17dc29d80d9459a84377ffb9e427b9efe6e57ccc03a358c8"
	I1027 19:41:01.793564  565798 logs.go:123] Gathering logs for kube-scheduler [15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8] ...
	I1027 19:41:01.793605  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8"
	I1027 19:41:01.845488  565798 logs.go:123] Gathering logs for kube-controller-manager [38a297db55d8c45b09d97e05e26cfe590841ea63fa7ee874cf818f8ae5fcff77] ...
	I1027 19:41:01.845527  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 38a297db55d8c45b09d97e05e26cfe590841ea63fa7ee874cf818f8ae5fcff77"
	I1027 19:41:04.376444  565798 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1027 19:41:04.376990  565798 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1027 19:41:04.377046  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 19:41:04.377099  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 19:41:04.406829  565798 cri.go:89] found id: "f34fc0e2944cfceb17dc29d80d9459a84377ffb9e427b9efe6e57ccc03a358c8"
	I1027 19:41:04.406851  565798 cri.go:89] found id: ""
	I1027 19:41:04.406859  565798 logs.go:282] 1 containers: [f34fc0e2944cfceb17dc29d80d9459a84377ffb9e427b9efe6e57ccc03a358c8]
	I1027 19:41:04.406918  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:41:04.411348  565798 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 19:41:04.411426  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 19:41:04.443060  565798 cri.go:89] found id: ""
	I1027 19:41:04.443094  565798 logs.go:282] 0 containers: []
	W1027 19:41:04.443105  565798 logs.go:284] No container was found matching "etcd"
	I1027 19:41:04.443113  565798 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 19:41:04.443223  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 19:41:04.475252  565798 cri.go:89] found id: ""
	I1027 19:41:04.475280  565798 logs.go:282] 0 containers: []
	W1027 19:41:04.475288  565798 logs.go:284] No container was found matching "coredns"
	I1027 19:41:04.475295  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 19:41:04.475358  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 19:41:04.506592  565798 cri.go:89] found id: "15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8"
	I1027 19:41:04.506613  565798 cri.go:89] found id: ""
	I1027 19:41:04.506622  565798 logs.go:282] 1 containers: [15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8]
	I1027 19:41:04.506674  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:41:04.511168  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 19:41:04.511243  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 19:41:04.541392  565798 cri.go:89] found id: ""
	I1027 19:41:04.541418  565798 logs.go:282] 0 containers: []
	W1027 19:41:04.541425  565798 logs.go:284] No container was found matching "kube-proxy"
	I1027 19:41:04.541432  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 19:41:04.541484  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 19:41:04.572329  565798 cri.go:89] found id: "38a297db55d8c45b09d97e05e26cfe590841ea63fa7ee874cf818f8ae5fcff77"
	I1027 19:41:04.572361  565798 cri.go:89] found id: ""
	I1027 19:41:04.572370  565798 logs.go:282] 1 containers: [38a297db55d8c45b09d97e05e26cfe590841ea63fa7ee874cf818f8ae5fcff77]
	I1027 19:41:04.572429  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:41:04.577195  565798 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 19:41:04.577270  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 19:41:04.608128  565798 cri.go:89] found id: ""
	I1027 19:41:04.608182  565798 logs.go:282] 0 containers: []
	W1027 19:41:04.608192  565798 logs.go:284] No container was found matching "kindnet"
	I1027 19:41:04.608199  565798 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 19:41:04.608266  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 19:41:04.638970  565798 cri.go:89] found id: ""
	I1027 19:41:04.639004  565798 logs.go:282] 0 containers: []
	W1027 19:41:04.639017  565798 logs.go:284] No container was found matching "storage-provisioner"
	I1027 19:41:04.639029  565798 logs.go:123] Gathering logs for kube-apiserver [f34fc0e2944cfceb17dc29d80d9459a84377ffb9e427b9efe6e57ccc03a358c8] ...
	I1027 19:41:04.639047  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f34fc0e2944cfceb17dc29d80d9459a84377ffb9e427b9efe6e57ccc03a358c8"
	I1027 19:41:04.676026  565798 logs.go:123] Gathering logs for kube-scheduler [15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8] ...
	I1027 19:41:04.676066  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8"
	I1027 19:41:04.729477  565798 logs.go:123] Gathering logs for kube-controller-manager [38a297db55d8c45b09d97e05e26cfe590841ea63fa7ee874cf818f8ae5fcff77] ...
	I1027 19:41:04.729522  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 38a297db55d8c45b09d97e05e26cfe590841ea63fa7ee874cf818f8ae5fcff77"
	I1027 19:41:04.763334  565798 logs.go:123] Gathering logs for CRI-O ...
	I1027 19:41:04.763366  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 19:41:04.814559  565798 logs.go:123] Gathering logs for container status ...
	I1027 19:41:04.814597  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 19:41:04.850968  565798 logs.go:123] Gathering logs for kubelet ...
	I1027 19:41:04.851011  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 19:41:04.944394  565798 logs.go:123] Gathering logs for dmesg ...
	I1027 19:41:04.944431  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 19:41:04.966811  565798 logs.go:123] Gathering logs for describe nodes ...
	I1027 19:41:04.966851  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 19:41:05.028358  565798 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 19:41:05.821887  594803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/files/etc/ssl/certs/3564152.pem --> /etc/ssl/certs/3564152.pem (1708 bytes)
	I1027 19:41:05.841205  594803 start.go:296] duration metric: took 158.022167ms for postStartSetup
	I1027 19:41:05.841329  594803 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 19:41:05.841428  594803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-919237
	I1027 19:41:05.862221  594803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33445 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/embed-certs-919237/id_rsa Username:docker}
	I1027 19:41:05.962951  594803 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1027 19:41:05.968053  594803 fix.go:56] duration metric: took 4.924088468s for fixHost
	I1027 19:41:05.968084  594803 start.go:83] releasing machines lock for "embed-certs-919237", held for 4.924145002s
	I1027 19:41:05.968196  594803 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-919237
	I1027 19:41:05.987613  594803 ssh_runner.go:195] Run: cat /version.json
	I1027 19:41:05.987669  594803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-919237
	I1027 19:41:05.987702  594803 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1027 19:41:05.987789  594803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-919237
	I1027 19:41:06.007445  594803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33445 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/embed-certs-919237/id_rsa Username:docker}
	I1027 19:41:06.008274  594803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33445 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/embed-certs-919237/id_rsa Username:docker}
	I1027 19:41:06.171092  594803 ssh_runner.go:195] Run: systemctl --version
	I1027 19:41:06.179869  594803 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1027 19:41:06.219933  594803 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1027 19:41:06.225954  594803 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1027 19:41:06.226044  594803 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1027 19:41:06.236901  594803 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1027 19:41:06.236933  594803 start.go:495] detecting cgroup driver to use...
	I1027 19:41:06.236974  594803 detect.go:190] detected "systemd" cgroup driver on host os
	I1027 19:41:06.237038  594803 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1027 19:41:06.256171  594803 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1027 19:41:06.272267  594803 docker.go:218] disabling cri-docker service (if available) ...
	I1027 19:41:06.272335  594803 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1027 19:41:06.289493  594803 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1027 19:41:06.303711  594803 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1027 19:41:06.395451  594803 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1027 19:41:06.478021  594803 docker.go:234] disabling docker service ...
	I1027 19:41:06.478097  594803 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1027 19:41:06.493521  594803 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1027 19:41:06.507490  594803 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1027 19:41:06.591513  594803 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1027 19:41:06.682906  594803 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1027 19:41:06.696885  594803 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1027 19:41:06.713250  594803 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1027 19:41:06.713378  594803 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:41:06.723697  594803 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1027 19:41:06.723794  594803 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:41:06.734257  594803 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:41:06.744505  594803 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:41:06.754791  594803 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1027 19:41:06.764454  594803 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:41:06.774849  594803 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:41:06.784515  594803 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:41:06.794832  594803 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1027 19:41:06.803521  594803 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1027 19:41:06.812405  594803 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 19:41:06.901080  594803 ssh_runner.go:195] Run: sudo systemctl restart crio
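(Illustrative sketch only: the sed invocations above rewrite single `key = value` lines in the CRI-O drop-in config before crio is restarted. The same rewrite expressed in Go follows; the file path and values are copied from the log.)

```go
package main

import (
	"os"
	"regexp"
)

// setCrioOption rewrites a `key = value` line in a CRI-O drop-in config --
// the Go equivalent of the `sudo sed -i 's|^.*pause_image = .*$|...|'`
// commands in the log above.
func setCrioOption(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	out := re.ReplaceAll(data, []byte(key+` = "`+value+`"`))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	_ = setCrioOption(conf, "pause_image", "registry.k8s.io/pause:3.10.1")
	_ = setCrioOption(conf, "cgroup_manager", "systemd")
}
```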
	I1027 19:41:07.023003  594803 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1027 19:41:07.023077  594803 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1027 19:41:07.027729  594803 start.go:563] Will wait 60s for crictl version
	I1027 19:41:07.027821  594803 ssh_runner.go:195] Run: which crictl
	I1027 19:41:07.032087  594803 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1027 19:41:07.060453  594803 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1027 19:41:07.060549  594803 ssh_runner.go:195] Run: crio --version
	I1027 19:41:07.090930  594803 ssh_runner.go:195] Run: crio --version
	I1027 19:41:07.122696  594803 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1027 19:41:07.124057  594803 cli_runner.go:164] Run: docker network inspect embed-certs-919237 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 19:41:07.144121  594803 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1027 19:41:07.148817  594803 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 19:41:07.160514  594803 kubeadm.go:883] updating cluster {Name:embed-certs-919237 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-919237 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1027 19:41:07.160677  594803 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 19:41:07.160758  594803 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 19:41:07.197268  594803 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 19:41:07.197294  594803 crio.go:433] Images already preloaded, skipping extraction
	I1027 19:41:07.197359  594803 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 19:41:07.224730  594803 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 19:41:07.224756  594803 cache_images.go:85] Images are preloaded, skipping loading
	I1027 19:41:07.224766  594803 kubeadm.go:934] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1027 19:41:07.224884  594803 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-919237 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-919237 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1027 19:41:07.224966  594803 ssh_runner.go:195] Run: crio config
	I1027 19:41:07.273364  594803 cni.go:84] Creating CNI manager for ""
	I1027 19:41:07.273386  594803 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 19:41:07.273406  594803 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1027 19:41:07.273446  594803 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-919237 NodeName:embed-certs-919237 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1027 19:41:07.273615  594803 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-919237"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1027 19:41:07.273713  594803 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1027 19:41:07.283551  594803 binaries.go:44] Found k8s binaries, skipping transfer
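(Illustrative sketch only: per-cluster configs like the kubeadm YAML printed above are typically rendered from a template with the node's values substituted in. The toy text/template below covers only a small slice of that YAML; minikube's real template lives in its source tree and is much larger.)

```go
package main

import (
	"os"
	"text/template"
)

// A toy template for the InitConfiguration fragment of the kubeadm config
// shown above. Values are those from the log.
const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.Port}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	_ = t.Execute(os.Stdout, struct {
		NodeIP   string
		Port     int
		NodeName string
	}{"192.168.94.2", 8443, "embed-certs-919237"})
}
```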
	I1027 19:41:07.283671  594803 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1027 19:41:07.292711  594803 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1027 19:41:07.307484  594803 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1027 19:41:07.321800  594803 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1027 19:41:07.335251  594803 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1027 19:41:07.339362  594803 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
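(Illustrative sketch only: the `{ grep -v ... ; echo ... ; } > /tmp/h.$$` pipeline above replaces a single /etc/hosts mapping by filtering out the old line and appending a fresh one. The same filter-then-append idea in Go; a real version would also need the privileged copy step from the log.)

```go
package main

import (
	"os"
	"strings"
)

// ensureHostsEntry drops any existing line for host and appends a fresh
// "ip<TAB>host" mapping, matching the grep -v / echo trick above.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	_ = ensureHostsEntry("/etc/hosts", "192.168.94.2", "control-plane.minikube.internal")
}
```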
	I1027 19:41:07.350244  594803 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 19:41:07.434349  594803 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 19:41:07.464970  594803 certs.go:69] Setting up /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/embed-certs-919237 for IP: 192.168.94.2
	I1027 19:41:07.464995  594803 certs.go:195] generating shared ca certs ...
	I1027 19:41:07.465020  594803 certs.go:227] acquiring lock for ca certs: {Name:mk4bdbca32068f6f817fc35fdc496e961dc3e0d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:41:07.465244  594803 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21801-352833/.minikube/ca.key
	I1027 19:41:07.465292  594803 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21801-352833/.minikube/proxy-client-ca.key
	I1027 19:41:07.465304  594803 certs.go:257] generating profile certs ...
	I1027 19:41:07.465403  594803 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/embed-certs-919237/client.key
	I1027 19:41:07.465450  594803 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/embed-certs-919237/apiserver.key.3faa2aa5
	I1027 19:41:07.465488  594803 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/embed-certs-919237/proxy-client.key
	I1027 19:41:07.465591  594803 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/356415.pem (1338 bytes)
	W1027 19:41:07.465626  594803 certs.go:480] ignoring /home/jenkins/minikube-integration/21801-352833/.minikube/certs/356415_empty.pem, impossibly tiny 0 bytes
	I1027 19:41:07.465636  594803 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca-key.pem (1675 bytes)
	I1027 19:41:07.465656  594803 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca.pem (1078 bytes)
	I1027 19:41:07.465680  594803 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/cert.pem (1123 bytes)
	I1027 19:41:07.465706  594803 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/key.pem (1679 bytes)
	I1027 19:41:07.465755  594803 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-352833/.minikube/files/etc/ssl/certs/3564152.pem (1708 bytes)
	I1027 19:41:07.466444  594803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1027 19:41:07.487514  594803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1027 19:41:07.509307  594803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1027 19:41:07.532458  594803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1027 19:41:07.564071  594803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/embed-certs-919237/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1027 19:41:07.586349  594803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/embed-certs-919237/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1027 19:41:07.606465  594803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/embed-certs-919237/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1027 19:41:07.627059  594803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/embed-certs-919237/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1027 19:41:07.648181  594803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1027 19:41:07.672545  594803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/certs/356415.pem --> /usr/share/ca-certificates/356415.pem (1338 bytes)
	I1027 19:41:07.693483  594803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/files/etc/ssl/certs/3564152.pem --> /usr/share/ca-certificates/3564152.pem (1708 bytes)
	I1027 19:41:07.715889  594803 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1027 19:41:07.732429  594803 ssh_runner.go:195] Run: openssl version
	I1027 19:41:07.740863  594803 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/356415.pem && ln -fs /usr/share/ca-certificates/356415.pem /etc/ssl/certs/356415.pem"
	I1027 19:41:07.751652  594803 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/356415.pem
	I1027 19:41:07.756427  594803 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 27 19:02 /usr/share/ca-certificates/356415.pem
	I1027 19:41:07.756508  594803 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/356415.pem
	I1027 19:41:07.796822  594803 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/356415.pem /etc/ssl/certs/51391683.0"
	I1027 19:41:07.807165  594803 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3564152.pem && ln -fs /usr/share/ca-certificates/3564152.pem /etc/ssl/certs/3564152.pem"
	I1027 19:41:07.817111  594803 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3564152.pem
	I1027 19:41:07.821699  594803 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 27 19:02 /usr/share/ca-certificates/3564152.pem
	I1027 19:41:07.821774  594803 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3564152.pem
	I1027 19:41:07.862104  594803 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3564152.pem /etc/ssl/certs/3ec20f2e.0"
	I1027 19:41:07.872082  594803 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1027 19:41:07.882661  594803 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1027 19:41:07.888248  594803 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 27 18:56 /usr/share/ca-certificates/minikubeCA.pem
	I1027 19:41:07.888325  594803 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1027 19:41:07.927092  594803 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
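(Illustrative sketch only: each `openssl x509 -hash` / `ln -fs` pair above installs a CA certificate under the subject-hash name `<hash>.0` that OpenSSL uses to look up trust anchors. The Go sketch below shells out to openssl for the hash, using one of the cert paths from the log.)

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// rehashLink reproduces the hash-then-symlink pairing in the log above:
// OpenSSL finds CA certs in a directory via a symlink named <subject-hash>.0.
func rehashLink(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	link := certsDir + "/" + strings.TrimSpace(string(out)) + ".0"
	os.Remove(link) // ln -fs semantics: replace any existing link
	return os.Symlink(certPath, link)
}

func main() {
	fmt.Println(rehashLink("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"))
}
```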
	I1027 19:41:07.936711  594803 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1027 19:41:07.941329  594803 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1027 19:41:07.982744  594803 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1027 19:41:08.036882  594803 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1027 19:41:08.086334  594803 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1027 19:41:08.146052  594803 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1027 19:41:08.191698  594803 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
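(Illustrative sketch only: the `-checkend 86400` runs above ask whether each certificate expires within 24 hours. The equivalent check with crypto/x509 follows; the path is one of the certs from the log.)

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
	"time"
)

// expiresWithin mirrors `openssl x509 -checkend`: it reports true if the
// certificate's NotAfter falls within the given window from now.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, errors.New("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	fmt.Println(expiresWithin("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour))
}
```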
	I1027 19:41:08.228527  594803 kubeadm.go:400] StartCluster: {Name:embed-certs-919237 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-919237 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 19:41:08.228643  594803 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 19:41:08.228710  594803 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 19:41:08.261293  594803 cri.go:89] found id: "d5a5c65a74b4b0bac782941ddf5cfc5e1c95eb29dbc563a89bc74143a3d75be8"
	I1027 19:41:08.261319  594803 cri.go:89] found id: "f0dcb6f33c4a16c8aabf1c9522c219dfe57ce0438d6eedb8d11b3bbed06bf220"
	I1027 19:41:08.261324  594803 cri.go:89] found id: "d17bd312e4c2b6e68ce5e1c0006ad10d3d74b77c3bc3e8570e4526763c6914a9"
	I1027 19:41:08.261327  594803 cri.go:89] found id: "31682e1eceede1979fd31aa2e96a71541d29f7d036de012b0c0a406025482670"
	I1027 19:41:08.261344  594803 cri.go:89] found id: ""
	I1027 19:41:08.261398  594803 ssh_runner.go:195] Run: sudo runc list -f json
	W1027 19:41:08.275475  594803 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T19:41:08Z" level=error msg="open /run/runc: no such file or directory"
	I1027 19:41:08.275556  594803 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1027 19:41:08.285008  594803 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1027 19:41:08.285028  594803 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1027 19:41:08.285080  594803 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1027 19:41:08.292877  594803 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1027 19:41:08.293734  594803 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-919237" does not appear in /home/jenkins/minikube-integration/21801-352833/kubeconfig
	I1027 19:41:08.294188  594803 kubeconfig.go:62] /home/jenkins/minikube-integration/21801-352833/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-919237" cluster setting kubeconfig missing "embed-certs-919237" context setting]
	I1027 19:41:08.294867  594803 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-352833/kubeconfig: {Name:mk24cbe512a6907c874f3fb7a85390a8f9fd2b31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
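kubeconfig.go detected that both the "embed-certs-919237" cluster and context entries were missing and repaired the file under a write lock before continuing. A minimal sketch of that kind of repair using client-go's clientcmd package (assumed flow; minikube's own code also writes certificate data and takes the file lock shown above):

	package main
	
	import (
		"k8s.io/client-go/tools/clientcmd"
		api "k8s.io/client-go/tools/clientcmd/api"
	)
	
	// addClusterEntry inserts a missing cluster/context pair, points
	// the current context at it, and writes the file back.
	func addClusterEntry(path, name, server string) error {
		cfg, err := clientcmd.LoadFromFile(path)
		if err != nil {
			return err
		}
		cfg.Clusters[name] = &api.Cluster{Server: server}
		cfg.Contexts[name] = &api.Context{Cluster: name, AuthInfo: name}
		cfg.CurrentContext = name
		return clientcmd.WriteToFile(*cfg, path)
	}
	
	func main() {
		// Path and endpoint copied from the log above.
		_ = addClusterEntry("/home/jenkins/minikube-integration/21801-352833/kubeconfig",
			"embed-certs-919237", "https://192.168.94.2:8443")
	}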
	I1027 19:41:08.296560  594803 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1027 19:41:08.304858  594803 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.94.2
	I1027 19:41:08.304893  594803 kubeadm.go:601] duration metric: took 19.857495ms to restartPrimaryControlPlane
	I1027 19:41:08.304904  594803 kubeadm.go:402] duration metric: took 76.392154ms to StartCluster
	I1027 19:41:08.304921  594803 settings.go:142] acquiring lock: {Name:mk8304c2106bf310642e0949fc0266ccb50f2f95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:41:08.304992  594803 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21801-352833/kubeconfig
	I1027 19:41:08.306608  594803 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-352833/kubeconfig: {Name:mk24cbe512a6907c874f3fb7a85390a8f9fd2b31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:41:08.306895  594803 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 19:41:08.306966  594803 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1027 19:41:08.307088  594803 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-919237"
	I1027 19:41:08.307112  594803 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-919237"
	W1027 19:41:08.307120  594803 addons.go:247] addon storage-provisioner should already be in state true
	I1027 19:41:08.307121  594803 addons.go:69] Setting dashboard=true in profile "embed-certs-919237"
	I1027 19:41:08.307180  594803 config.go:182] Loaded profile config "embed-certs-919237": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:41:08.307172  594803 addons.go:69] Setting default-storageclass=true in profile "embed-certs-919237"
	I1027 19:41:08.307206  594803 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-919237"
	I1027 19:41:08.307185  594803 host.go:66] Checking if "embed-certs-919237" exists ...
	I1027 19:41:08.307188  594803 addons.go:238] Setting addon dashboard=true in "embed-certs-919237"
	W1027 19:41:08.307376  594803 addons.go:247] addon dashboard should already be in state true
	I1027 19:41:08.307407  594803 host.go:66] Checking if "embed-certs-919237" exists ...
	I1027 19:41:08.307583  594803 cli_runner.go:164] Run: docker container inspect embed-certs-919237 --format={{.State.Status}}
	I1027 19:41:08.307745  594803 cli_runner.go:164] Run: docker container inspect embed-certs-919237 --format={{.State.Status}}
	I1027 19:41:08.307873  594803 cli_runner.go:164] Run: docker container inspect embed-certs-919237 --format={{.State.Status}}
	I1027 19:41:08.309349  594803 out.go:179] * Verifying Kubernetes components...
	I1027 19:41:08.310781  594803 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 19:41:08.336188  594803 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1027 19:41:08.336216  594803 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 19:41:08.336832  594803 addons.go:238] Setting addon default-storageclass=true in "embed-certs-919237"
	W1027 19:41:08.336855  594803 addons.go:247] addon default-storageclass should already be in state true
	I1027 19:41:08.336886  594803 host.go:66] Checking if "embed-certs-919237" exists ...
	I1027 19:41:08.337405  594803 cli_runner.go:164] Run: docker container inspect embed-certs-919237 --format={{.State.Status}}
	I1027 19:41:08.337895  594803 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 19:41:08.337913  594803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1027 19:41:08.337970  594803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-919237
	I1027 19:41:08.339243  594803 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1027 19:41:08.340863  594803 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1027 19:41:08.340892  594803 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1027 19:41:08.340959  594803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-919237
	I1027 19:41:08.371713  594803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33445 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/embed-certs-919237/id_rsa Username:docker}
	I1027 19:41:08.378869  594803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33445 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/embed-certs-919237/id_rsa Username:docker}
	I1027 19:41:08.379420  594803 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1027 19:41:08.379443  594803 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1027 19:41:08.379523  594803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-919237
	I1027 19:41:08.404654  594803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33445 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/embed-certs-919237/id_rsa Username:docker}
	I1027 19:41:08.459858  594803 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 19:41:08.474523  594803 node_ready.go:35] waiting up to 6m0s for node "embed-certs-919237" to be "Ready" ...
	I1027 19:41:08.494692  594803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 19:41:08.501377  594803 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1027 19:41:08.501402  594803 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1027 19:41:08.517164  594803 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1027 19:41:08.517189  594803 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1027 19:41:08.528162  594803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1027 19:41:08.536218  594803 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1027 19:41:08.536248  594803 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1027 19:41:08.555432  594803 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1027 19:41:08.555459  594803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1027 19:41:08.577695  594803 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1027 19:41:08.577726  594803 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1027 19:41:08.596623  594803 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1027 19:41:08.596657  594803 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1027 19:41:08.612731  594803 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1027 19:41:08.612763  594803 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1027 19:41:08.627030  594803 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1027 19:41:08.627060  594803 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1027 19:41:08.641348  594803 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1027 19:41:08.641379  594803 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1027 19:41:08.656654  594803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
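Each dashboard manifest was first scp'd into /etc/kubernetes/addons/, and the apply above then submits all ten files in a single kubectl invocation under the in-cluster kubeconfig. A sketch of building that batched command (binary and kubeconfig paths copied from the log; sketch only):

	package main
	
	import "os/exec"
	
	// applyAddonManifests runs the pinned kubectl once with one -f
	// flag per staged manifest, as in the log line above.
	func applyAddonManifests(files []string) ([]byte, error) {
		args := []string{
			"KUBECONFIG=/var/lib/minikube/kubeconfig",
			"/var/lib/minikube/binaries/v1.34.1/kubectl", "apply",
		}
		for _, f := range files {
			args = append(args, "-f", f)
		}
		// `sudo VAR=value cmd ...` is valid: sudo exports the assignment.
		return exec.Command("sudo", args...).CombinedOutput()
	}
	
	func main() {
		_, _ = applyAddonManifests([]string{
			"/etc/kubernetes/addons/dashboard-ns.yaml",
			"/etc/kubernetes/addons/dashboard-svc.yaml",
		})
	}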
	I1027 19:41:09.985803  594803 node_ready.go:49] node "embed-certs-919237" is "Ready"
	I1027 19:41:09.985838  594803 node_ready.go:38] duration metric: took 1.511271197s for node "embed-certs-919237" to be "Ready" ...
	I1027 19:41:09.985856  594803 api_server.go:52] waiting for apiserver process to appear ...
	I1027 19:41:09.985916  594803 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 19:41:10.512525  594803 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.017790889s)
	I1027 19:41:10.512570  594803 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.984382968s)
	I1027 19:41:10.512737  594803 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.856029763s)
	I1027 19:41:10.512758  594803 api_server.go:72] duration metric: took 2.205827226s to wait for apiserver process to appear ...
	I1027 19:41:10.512770  594803 api_server.go:88] waiting for apiserver healthz status ...
	I1027 19:41:10.512790  594803 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1027 19:41:10.514667  594803 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-919237 addons enable metrics-server
	
	I1027 19:41:10.519068  594803 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1027 19:41:10.519098  594803 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1027 19:41:10.525420  594803 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1027 19:41:10.526779  594803 addons.go:514] duration metric: took 2.219821783s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
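The 500 bodies above are the expected bootstrap window: /healthz aggregates per-check results, and only the [-]-marked poststarthooks (rbac/bootstrap-roles, and on the first pass the priority-classes hook) had not finished, so the endpoint returns 500 until every check passes. A sketch of the poll-until-200 loop (TLS verification disabled only for the sketch; the real client pins the cluster CA):

	package main
	
	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)
	
	// waitForHealthz polls the apiserver until /healthz returns 200,
	// tolerating transient 500s while poststarthooks finish.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("%s not healthy within %s", url, timeout)
	}
	
	func main() {
		fmt.Println(waitForHealthz("https://192.168.94.2:8443/healthz", time.Minute))
	}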
	I1027 19:41:07.528527  565798 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1027 19:41:07.529038  565798 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1027 19:41:07.529097  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 19:41:07.529167  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 19:41:07.570906  565798 cri.go:89] found id: "f34fc0e2944cfceb17dc29d80d9459a84377ffb9e427b9efe6e57ccc03a358c8"
	I1027 19:41:07.570937  565798 cri.go:89] found id: ""
	I1027 19:41:07.570949  565798 logs.go:282] 1 containers: [f34fc0e2944cfceb17dc29d80d9459a84377ffb9e427b9efe6e57ccc03a358c8]
	I1027 19:41:07.571019  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:41:07.575599  565798 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 19:41:07.575660  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 19:41:07.605990  565798 cri.go:89] found id: ""
	I1027 19:41:07.606014  565798 logs.go:282] 0 containers: []
	W1027 19:41:07.606023  565798 logs.go:284] No container was found matching "etcd"
	I1027 19:41:07.606028  565798 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 19:41:07.606087  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 19:41:07.638584  565798 cri.go:89] found id: ""
	I1027 19:41:07.638610  565798 logs.go:282] 0 containers: []
	W1027 19:41:07.638619  565798 logs.go:284] No container was found matching "coredns"
	I1027 19:41:07.638626  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 19:41:07.638673  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 19:41:07.670909  565798 cri.go:89] found id: "15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8"
	I1027 19:41:07.670935  565798 cri.go:89] found id: ""
	I1027 19:41:07.670946  565798 logs.go:282] 1 containers: [15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8]
	I1027 19:41:07.671012  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:41:07.676493  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 19:41:07.676572  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 19:41:07.707704  565798 cri.go:89] found id: ""
	I1027 19:41:07.707730  565798 logs.go:282] 0 containers: []
	W1027 19:41:07.707738  565798 logs.go:284] No container was found matching "kube-proxy"
	I1027 19:41:07.707744  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 19:41:07.707804  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 19:41:07.738631  565798 cri.go:89] found id: "38a297db55d8c45b09d97e05e26cfe590841ea63fa7ee874cf818f8ae5fcff77"
	I1027 19:41:07.738651  565798 cri.go:89] found id: ""
	I1027 19:41:07.738663  565798 logs.go:282] 1 containers: [38a297db55d8c45b09d97e05e26cfe590841ea63fa7ee874cf818f8ae5fcff77]
	I1027 19:41:07.738722  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:41:07.743367  565798 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 19:41:07.743451  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 19:41:07.775208  565798 cri.go:89] found id: ""
	I1027 19:41:07.775238  565798 logs.go:282] 0 containers: []
	W1027 19:41:07.775252  565798 logs.go:284] No container was found matching "kindnet"
	I1027 19:41:07.775261  565798 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 19:41:07.775339  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 19:41:07.805721  565798 cri.go:89] found id: ""
	I1027 19:41:07.805749  565798 logs.go:282] 0 containers: []
	W1027 19:41:07.805759  565798 logs.go:284] No container was found matching "storage-provisioner"
	I1027 19:41:07.805773  565798 logs.go:123] Gathering logs for dmesg ...
	I1027 19:41:07.805797  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 19:41:07.829611  565798 logs.go:123] Gathering logs for describe nodes ...
	I1027 19:41:07.829647  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 19:41:07.894281  565798 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 19:41:07.894316  565798 logs.go:123] Gathering logs for kube-apiserver [f34fc0e2944cfceb17dc29d80d9459a84377ffb9e427b9efe6e57ccc03a358c8] ...
	I1027 19:41:07.894338  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f34fc0e2944cfceb17dc29d80d9459a84377ffb9e427b9efe6e57ccc03a358c8"
	I1027 19:41:07.930602  565798 logs.go:123] Gathering logs for kube-scheduler [15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8] ...
	I1027 19:41:07.930636  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8"
	I1027 19:41:07.985189  565798 logs.go:123] Gathering logs for kube-controller-manager [38a297db55d8c45b09d97e05e26cfe590841ea63fa7ee874cf818f8ae5fcff77] ...
	I1027 19:41:07.985226  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 38a297db55d8c45b09d97e05e26cfe590841ea63fa7ee874cf818f8ae5fcff77"
	I1027 19:41:08.023545  565798 logs.go:123] Gathering logs for CRI-O ...
	I1027 19:41:08.023578  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 19:41:08.093343  565798 logs.go:123] Gathering logs for container status ...
	I1027 19:41:08.093385  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 19:41:08.145553  565798 logs.go:123] Gathering logs for kubelet ...
	I1027 19:41:08.145592  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 19:41:10.748218  565798 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1027 19:41:10.748717  565798 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1027 19:41:10.748775  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 19:41:10.748830  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 19:41:10.778542  565798 cri.go:89] found id: "f34fc0e2944cfceb17dc29d80d9459a84377ffb9e427b9efe6e57ccc03a358c8"
	I1027 19:41:10.778563  565798 cri.go:89] found id: ""
	I1027 19:41:10.778572  565798 logs.go:282] 1 containers: [f34fc0e2944cfceb17dc29d80d9459a84377ffb9e427b9efe6e57ccc03a358c8]
	I1027 19:41:10.778626  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:41:10.782948  565798 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 19:41:10.783005  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 19:41:10.810590  565798 cri.go:89] found id: ""
	I1027 19:41:10.810619  565798 logs.go:282] 0 containers: []
	W1027 19:41:10.810631  565798 logs.go:284] No container was found matching "etcd"
	I1027 19:41:10.810642  565798 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 19:41:10.810705  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 19:41:10.841630  565798 cri.go:89] found id: ""
	I1027 19:41:10.841659  565798 logs.go:282] 0 containers: []
	W1027 19:41:10.841670  565798 logs.go:284] No container was found matching "coredns"
	I1027 19:41:10.841678  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 19:41:10.841747  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 19:41:10.881274  565798 cri.go:89] found id: "15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8"
	I1027 19:41:10.881300  565798 cri.go:89] found id: ""
	I1027 19:41:10.881311  565798 logs.go:282] 1 containers: [15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8]
	I1027 19:41:10.881370  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:41:10.886646  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 19:41:10.886736  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 19:41:10.929911  565798 cri.go:89] found id: ""
	I1027 19:41:10.929943  565798 logs.go:282] 0 containers: []
	W1027 19:41:10.929954  565798 logs.go:284] No container was found matching "kube-proxy"
	I1027 19:41:10.929962  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 19:41:10.930024  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 19:41:10.968851  565798 cri.go:89] found id: "38a297db55d8c45b09d97e05e26cfe590841ea63fa7ee874cf818f8ae5fcff77"
	I1027 19:41:10.968878  565798 cri.go:89] found id: ""
	I1027 19:41:10.968888  565798 logs.go:282] 1 containers: [38a297db55d8c45b09d97e05e26cfe590841ea63fa7ee874cf818f8ae5fcff77]
	I1027 19:41:10.968948  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:41:10.974365  565798 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 19:41:10.974432  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 19:41:11.004971  565798 cri.go:89] found id: ""
	I1027 19:41:11.004997  565798 logs.go:282] 0 containers: []
	W1027 19:41:11.005005  565798 logs.go:284] No container was found matching "kindnet"
	I1027 19:41:11.005011  565798 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 19:41:11.005072  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 19:41:11.036769  565798 cri.go:89] found id: ""
	I1027 19:41:11.036802  565798 logs.go:282] 0 containers: []
	W1027 19:41:11.036814  565798 logs.go:284] No container was found matching "storage-provisioner"
	I1027 19:41:11.036827  565798 logs.go:123] Gathering logs for describe nodes ...
	I1027 19:41:11.036845  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 19:41:11.109616  565798 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 19:41:11.109640  565798 logs.go:123] Gathering logs for kube-apiserver [f34fc0e2944cfceb17dc29d80d9459a84377ffb9e427b9efe6e57ccc03a358c8] ...
	I1027 19:41:11.109659  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f34fc0e2944cfceb17dc29d80d9459a84377ffb9e427b9efe6e57ccc03a358c8"
	I1027 19:41:11.149761  565798 logs.go:123] Gathering logs for kube-scheduler [15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8] ...
	I1027 19:41:11.149808  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8"
	I1027 19:41:11.209309  565798 logs.go:123] Gathering logs for kube-controller-manager [38a297db55d8c45b09d97e05e26cfe590841ea63fa7ee874cf818f8ae5fcff77] ...
	I1027 19:41:11.209355  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 38a297db55d8c45b09d97e05e26cfe590841ea63fa7ee874cf818f8ae5fcff77"
	I1027 19:41:11.238293  565798 logs.go:123] Gathering logs for CRI-O ...
	I1027 19:41:11.238330  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 19:41:11.290773  565798 logs.go:123] Gathering logs for container status ...
	I1027 19:41:11.290819  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 19:41:11.324791  565798 logs.go:123] Gathering logs for kubelet ...
	I1027 19:41:11.324821  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 19:41:11.416408  565798 logs.go:123] Gathering logs for dmesg ...
	I1027 19:41:11.416449  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
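Each diagnostic pass above has the same two-step shape per component: `crictl ps -a --quiet --name=<component>` resolves container IDs (empty here for etcd, coredns, kube-proxy, kindnet, storage-provisioner), then `crictl logs --tail 400 <id>` dumps the tail of each container found. A sketch of one pass (hedged; not the logs.go implementation):

	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	// tailComponentLogs resolves container IDs for a component, then
	// tails each one, mirroring the crictl two-step in the log above.
	func tailComponentLogs(component string) (string, error) {
		ids, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
		if err != nil {
			return "", err
		}
		var b strings.Builder
		for _, id := range strings.Fields(string(ids)) {
			out, _ := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
			b.Write(out)
		}
		return b.String(), nil
	}
	
	func main() {
		for _, c := range []string{"kube-apiserver", "kube-scheduler", "kube-controller-manager"} {
			logs, _ := tailComponentLogs(c)
			fmt.Printf("== %s ==\n%s", c, logs)
		}
	}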
	I1027 19:41:11.013509  594803 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1027 19:41:11.018609  594803 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1027 19:41:11.018645  594803 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1027 19:41:11.512960  594803 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1027 19:41:11.519407  594803 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1027 19:41:11.520525  594803 api_server.go:141] control plane version: v1.34.1
	I1027 19:41:11.520554  594803 api_server.go:131] duration metric: took 1.007776585s to wait for apiserver health ...
	I1027 19:41:11.520563  594803 system_pods.go:43] waiting for kube-system pods to appear ...
	I1027 19:41:11.524555  594803 system_pods.go:59] 8 kube-system pods found
	I1027 19:41:11.524601  594803 system_pods.go:61] "coredns-66bc5c9577-9b9tz" [1f7cb1a7-6c91-4e4d-aecc-baaaa8f9bf22] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 19:41:11.524611  594803 system_pods.go:61] "etcd-embed-certs-919237" [b995a0ef-722f-4183-aefb-e86d11f084b1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1027 19:41:11.524622  594803 system_pods.go:61] "kindnet-6jx4q" [f346911c-5e04-4721-b4d8-c330f1629136] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1027 19:41:11.524630  594803 system_pods.go:61] "kube-apiserver-embed-certs-919237" [3a7050fe-4cb1-4d64-ad98-6cccb2f1581b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1027 19:41:11.524641  594803 system_pods.go:61] "kube-controller-manager-embed-certs-919237" [0a466515-69f1-4023-b8ea-dac3554f8746] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1027 19:41:11.524653  594803 system_pods.go:61] "kube-proxy-rrq2h" [afd63d93-c691-44d9-aa8e-73e522ea9369] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1027 19:41:11.524669  594803 system_pods.go:61] "kube-scheduler-embed-certs-919237" [c89fed17-fc68-4bc6-8cfd-9a213ca6a68c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1027 19:41:11.524678  594803 system_pods.go:61] "storage-provisioner" [a73b7a4c-44bb-443e-af42-78c83e6b6852] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 19:41:11.524688  594803 system_pods.go:74] duration metric: took 4.118519ms to wait for pod list to return data ...
	I1027 19:41:11.524701  594803 default_sa.go:34] waiting for default service account to be created ...
	I1027 19:41:11.527467  594803 default_sa.go:45] found service account: "default"
	I1027 19:41:11.527499  594803 default_sa.go:55] duration metric: took 2.788723ms for default service account to be created ...
	I1027 19:41:11.527512  594803 system_pods.go:116] waiting for k8s-apps to be running ...
	I1027 19:41:11.530476  594803 system_pods.go:86] 8 kube-system pods found
	I1027 19:41:11.530506  594803 system_pods.go:89] "coredns-66bc5c9577-9b9tz" [1f7cb1a7-6c91-4e4d-aecc-baaaa8f9bf22] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 19:41:11.530514  594803 system_pods.go:89] "etcd-embed-certs-919237" [b995a0ef-722f-4183-aefb-e86d11f084b1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1027 19:41:11.530523  594803 system_pods.go:89] "kindnet-6jx4q" [f346911c-5e04-4721-b4d8-c330f1629136] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1027 19:41:11.530529  594803 system_pods.go:89] "kube-apiserver-embed-certs-919237" [3a7050fe-4cb1-4d64-ad98-6cccb2f1581b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1027 19:41:11.530534  594803 system_pods.go:89] "kube-controller-manager-embed-certs-919237" [0a466515-69f1-4023-b8ea-dac3554f8746] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1027 19:41:11.530542  594803 system_pods.go:89] "kube-proxy-rrq2h" [afd63d93-c691-44d9-aa8e-73e522ea9369] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1027 19:41:11.530548  594803 system_pods.go:89] "kube-scheduler-embed-certs-919237" [c89fed17-fc68-4bc6-8cfd-9a213ca6a68c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1027 19:41:11.530554  594803 system_pods.go:89] "storage-provisioner" [a73b7a4c-44bb-443e-af42-78c83e6b6852] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 19:41:11.530563  594803 system_pods.go:126] duration metric: took 3.044674ms to wait for k8s-apps to be running ...
	I1027 19:41:11.530571  594803 system_svc.go:44] waiting for kubelet service to be running ....
	I1027 19:41:11.530625  594803 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 19:41:11.544749  594803 system_svc.go:56] duration metric: took 14.160213ms WaitForService to wait for kubelet
	I1027 19:41:11.544787  594803 kubeadm.go:586] duration metric: took 3.237859295s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 19:41:11.544807  594803 node_conditions.go:102] verifying NodePressure condition ...
	I1027 19:41:11.547989  594803 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1027 19:41:11.548021  594803 node_conditions.go:123] node cpu capacity is 8
	I1027 19:41:11.548039  594803 node_conditions.go:105] duration metric: took 3.227196ms to run NodePressure ...
	I1027 19:41:11.548055  594803 start.go:241] waiting for startup goroutines ...
	I1027 19:41:11.548065  594803 start.go:246] waiting for cluster config update ...
	I1027 19:41:11.548086  594803 start.go:255] writing updated cluster config ...
	I1027 19:41:11.548374  594803 ssh_runner.go:195] Run: rm -f paused
	I1027 19:41:11.552537  594803 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 19:41:11.557023  594803 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-9b9tz" in "kube-system" namespace to be "Ready" or be gone ...
	W1027 19:41:13.563484  594803 pod_ready.go:104] pod "coredns-66bc5c9577-9b9tz" is not "Ready", error: <nil>
	W1027 19:41:15.563819  594803 pod_ready.go:104] pod "coredns-66bc5c9577-9b9tz" is not "Ready", error: <nil>
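pod_ready.go then waits up to 4m for every pod carrying one of the listed control-plane labels to report the Ready condition (or be gone); the two W-level lines show coredns not yet Ready on consecutive polls. A sketch of one readiness probe with client-go (assumed flow, not minikube's code; kubeconfig path and pod name copied from the log):

	package main
	
	import (
		"context"
		"fmt"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// podReady reports whether the named pod has condition Ready=True.
	func podReady(cs *kubernetes.Clientset, ns, name string) (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21801-352833/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		ok, err := podReady(cs, "kube-system", "coredns-66bc5c9577-9b9tz")
		fmt.Println(ok, err)
	}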
	I1027 19:41:13.939593  565798 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1027 19:41:13.940042  565798 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1027 19:41:13.940102  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 19:41:13.940198  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 19:41:13.972381  565798 cri.go:89] found id: "f34fc0e2944cfceb17dc29d80d9459a84377ffb9e427b9efe6e57ccc03a358c8"
	I1027 19:41:13.972407  565798 cri.go:89] found id: ""
	I1027 19:41:13.972418  565798 logs.go:282] 1 containers: [f34fc0e2944cfceb17dc29d80d9459a84377ffb9e427b9efe6e57ccc03a358c8]
	I1027 19:41:13.972486  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:41:13.977539  565798 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 19:41:13.977622  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 19:41:14.020551  565798 cri.go:89] found id: ""
	I1027 19:41:14.020587  565798 logs.go:282] 0 containers: []
	W1027 19:41:14.020598  565798 logs.go:284] No container was found matching "etcd"
	I1027 19:41:14.020606  565798 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 19:41:14.020669  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 19:41:14.053870  565798 cri.go:89] found id: ""
	I1027 19:41:14.053900  565798 logs.go:282] 0 containers: []
	W1027 19:41:14.053915  565798 logs.go:284] No container was found matching "coredns"
	I1027 19:41:14.053930  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 19:41:14.053994  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 19:41:14.093774  565798 cri.go:89] found id: "15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8"
	I1027 19:41:14.093801  565798 cri.go:89] found id: ""
	I1027 19:41:14.093814  565798 logs.go:282] 1 containers: [15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8]
	I1027 19:41:14.093881  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:41:14.099100  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 19:41:14.099192  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 19:41:14.130319  565798 cri.go:89] found id: ""
	I1027 19:41:14.130350  565798 logs.go:282] 0 containers: []
	W1027 19:41:14.130362  565798 logs.go:284] No container was found matching "kube-proxy"
	I1027 19:41:14.130370  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 19:41:14.130447  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 19:41:14.162946  565798 cri.go:89] found id: "38a297db55d8c45b09d97e05e26cfe590841ea63fa7ee874cf818f8ae5fcff77"
	I1027 19:41:14.162968  565798 cri.go:89] found id: ""
	I1027 19:41:14.162976  565798 logs.go:282] 1 containers: [38a297db55d8c45b09d97e05e26cfe590841ea63fa7ee874cf818f8ae5fcff77]
	I1027 19:41:14.163028  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:41:14.167526  565798 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 19:41:14.167603  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 19:41:14.200055  565798 cri.go:89] found id: ""
	I1027 19:41:14.200084  565798 logs.go:282] 0 containers: []
	W1027 19:41:14.200095  565798 logs.go:284] No container was found matching "kindnet"
	I1027 19:41:14.200103  565798 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 19:41:14.200182  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 19:41:14.232443  565798 cri.go:89] found id: ""
	I1027 19:41:14.232466  565798 logs.go:282] 0 containers: []
	W1027 19:41:14.232477  565798 logs.go:284] No container was found matching "storage-provisioner"
	I1027 19:41:14.232490  565798 logs.go:123] Gathering logs for kubelet ...
	I1027 19:41:14.232508  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 19:41:14.372190  565798 logs.go:123] Gathering logs for dmesg ...
	I1027 19:41:14.372225  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 19:41:14.400370  565798 logs.go:123] Gathering logs for describe nodes ...
	I1027 19:41:14.400410  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 19:41:14.467768  565798 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 19:41:14.467783  565798 logs.go:123] Gathering logs for kube-apiserver [f34fc0e2944cfceb17dc29d80d9459a84377ffb9e427b9efe6e57ccc03a358c8] ...
	I1027 19:41:14.467801  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f34fc0e2944cfceb17dc29d80d9459a84377ffb9e427b9efe6e57ccc03a358c8"
	I1027 19:41:14.508998  565798 logs.go:123] Gathering logs for kube-scheduler [15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8] ...
	I1027 19:41:14.509034  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8"
	I1027 19:41:14.585373  565798 logs.go:123] Gathering logs for kube-controller-manager [38a297db55d8c45b09d97e05e26cfe590841ea63fa7ee874cf818f8ae5fcff77] ...
	I1027 19:41:14.585429  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 38a297db55d8c45b09d97e05e26cfe590841ea63fa7ee874cf818f8ae5fcff77"
	I1027 19:41:14.635462  565798 logs.go:123] Gathering logs for CRI-O ...
	I1027 19:41:14.635505  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 19:41:14.712478  565798 logs.go:123] Gathering logs for container status ...
	I1027 19:41:14.712529  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
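The final gather step uses a shell fallback: `sudo `which crictl || echo crictl` ps -a || sudo docker ps -a` prefers crictl but degrades to `docker ps -a` when crictl is absent or errors. The same pattern in Go (sketch only):

	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	// containerStatus prefers crictl and falls back to docker, like
	// the shell one-liner in the log above.
	func containerStatus() ([]byte, error) {
		if out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput(); err == nil {
			return out, nil
		}
		return exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
	}
	
	func main() {
		out, err := containerStatus()
		fmt.Println(string(out), err)
	}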
	
	
	==> CRI-O <==
	Oct 27 19:40:44 old-k8s-version-468959 crio[559]: time="2025-10-27T19:40:44.012903179Z" level=info msg="Started container" PID=1711 containerID=b9692750c6802429c9250e188f4cf6dc0f0f123f6df32b84aa4a245a6bd40e60 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-r6m7z/dashboard-metrics-scraper id=6a49e85b-09b5-4b56-8b92-37088888886a name=/runtime.v1.RuntimeService/StartContainer sandboxID=4c06fe9042f844f8cc92426ff042906b7930e5890f0ce1c496f1bef4d7484525
	Oct 27 19:40:44 old-k8s-version-468959 crio[559]: time="2025-10-27T19:40:44.971208085Z" level=info msg="Removing container: 88a7fe8d90dc09d19e5b3221783bb4d018b72eab2e09644a80d5946dc283df4f" id=d896d53b-3c84-4722-a797-5b8faa113adc name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 27 19:40:44 old-k8s-version-468959 crio[559]: time="2025-10-27T19:40:44.981119884Z" level=info msg="Removed container 88a7fe8d90dc09d19e5b3221783bb4d018b72eab2e09644a80d5946dc283df4f: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-r6m7z/dashboard-metrics-scraper" id=d896d53b-3c84-4722-a797-5b8faa113adc name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 27 19:40:53 old-k8s-version-468959 crio[559]: time="2025-10-27T19:40:53.998944519Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=fafbc5a4-3b86-42e7-9fb0-8414a7e3c841 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 19:40:53 old-k8s-version-468959 crio[559]: time="2025-10-27T19:40:53.999951803Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=50ee6f10-5b6d-436f-8d3e-5255248156f2 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 19:40:54 old-k8s-version-468959 crio[559]: time="2025-10-27T19:40:54.001075021Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=768c9b66-4c3e-4313-9356-b9a6c081ab7d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 19:40:54 old-k8s-version-468959 crio[559]: time="2025-10-27T19:40:54.001241918Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:40:54 old-k8s-version-468959 crio[559]: time="2025-10-27T19:40:54.005557701Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:40:54 old-k8s-version-468959 crio[559]: time="2025-10-27T19:40:54.005722045Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/e4e43d260d563103604459ec80968feac7c8fb32183b206786ae33286baf8194/merged/etc/passwd: no such file or directory"
	Oct 27 19:40:54 old-k8s-version-468959 crio[559]: time="2025-10-27T19:40:54.005756969Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/e4e43d260d563103604459ec80968feac7c8fb32183b206786ae33286baf8194/merged/etc/group: no such file or directory"
	Oct 27 19:40:54 old-k8s-version-468959 crio[559]: time="2025-10-27T19:40:54.006024507Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:40:54 old-k8s-version-468959 crio[559]: time="2025-10-27T19:40:54.032018891Z" level=info msg="Created container c3d66e2dd322da5d8554d09ea3b176065c6fe4ba6f6c1b0ca6612474fc69cd91: kube-system/storage-provisioner/storage-provisioner" id=768c9b66-4c3e-4313-9356-b9a6c081ab7d name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 19:40:54 old-k8s-version-468959 crio[559]: time="2025-10-27T19:40:54.032760063Z" level=info msg="Starting container: c3d66e2dd322da5d8554d09ea3b176065c6fe4ba6f6c1b0ca6612474fc69cd91" id=40f69b43-afb8-41b0-9211-2b588732b30a name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 19:40:54 old-k8s-version-468959 crio[559]: time="2025-10-27T19:40:54.034751277Z" level=info msg="Started container" PID=1725 containerID=c3d66e2dd322da5d8554d09ea3b176065c6fe4ba6f6c1b0ca6612474fc69cd91 description=kube-system/storage-provisioner/storage-provisioner id=40f69b43-afb8-41b0-9211-2b588732b30a name=/runtime.v1.RuntimeService/StartContainer sandboxID=63843b39a74258d7067907dc8e5efbf510e1bcf9cb69eec1e73c46a76826e306
	Oct 27 19:41:00 old-k8s-version-468959 crio[559]: time="2025-10-27T19:41:00.8481855Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=199f133c-154c-4b1d-8820-eccce23ac539 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 19:41:00 old-k8s-version-468959 crio[559]: time="2025-10-27T19:41:00.849522098Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=dcb67d5d-4471-4ea2-9339-fc408698e879 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 19:41:00 old-k8s-version-468959 crio[559]: time="2025-10-27T19:41:00.850962902Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-r6m7z/dashboard-metrics-scraper" id=5370fdc5-fd91-4381-97e1-bebdcd568dc2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 19:41:00 old-k8s-version-468959 crio[559]: time="2025-10-27T19:41:00.851162264Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:41:00 old-k8s-version-468959 crio[559]: time="2025-10-27T19:41:00.85928152Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:41:00 old-k8s-version-468959 crio[559]: time="2025-10-27T19:41:00.859966326Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:41:00 old-k8s-version-468959 crio[559]: time="2025-10-27T19:41:00.891422673Z" level=info msg="Created container f90740a0e28b478c1a0658aadb18b23d89ba64b844c2ab857f4e83834b57f69b: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-r6m7z/dashboard-metrics-scraper" id=5370fdc5-fd91-4381-97e1-bebdcd568dc2 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 19:41:00 old-k8s-version-468959 crio[559]: time="2025-10-27T19:41:00.893076636Z" level=info msg="Starting container: f90740a0e28b478c1a0658aadb18b23d89ba64b844c2ab857f4e83834b57f69b" id=754aad9a-e851-499d-8fb7-4a6554e69ebc name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 19:41:00 old-k8s-version-468959 crio[559]: time="2025-10-27T19:41:00.896190457Z" level=info msg="Started container" PID=1759 containerID=f90740a0e28b478c1a0658aadb18b23d89ba64b844c2ab857f4e83834b57f69b description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-r6m7z/dashboard-metrics-scraper id=754aad9a-e851-499d-8fb7-4a6554e69ebc name=/runtime.v1.RuntimeService/StartContainer sandboxID=4c06fe9042f844f8cc92426ff042906b7930e5890f0ce1c496f1bef4d7484525
	Oct 27 19:41:01 old-k8s-version-468959 crio[559]: time="2025-10-27T19:41:01.0227682Z" level=info msg="Removing container: b9692750c6802429c9250e188f4cf6dc0f0f123f6df32b84aa4a245a6bd40e60" id=3490c600-37c2-4de8-9d01-b33c24c39cc9 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 27 19:41:01 old-k8s-version-468959 crio[559]: time="2025-10-27T19:41:01.036901688Z" level=info msg="Removed container b9692750c6802429c9250e188f4cf6dc0f0f123f6df32b84aa4a245a6bd40e60: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-r6m7z/dashboard-metrics-scraper" id=3490c600-37c2-4de8-9d01-b33c24c39cc9 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	f90740a0e28b4       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           17 seconds ago      Exited              dashboard-metrics-scraper   2                   4c06fe9042f84       dashboard-metrics-scraper-5f989dc9cf-r6m7z       kubernetes-dashboard
	c3d66e2dd322d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           24 seconds ago      Running             storage-provisioner         1                   63843b39a7425       storage-provisioner                              kube-system
	12d4f512371d8       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   38 seconds ago      Running             kubernetes-dashboard        0                   7e18f17c481dd       kubernetes-dashboard-8694d4445c-mb5fm            kubernetes-dashboard
	2e436d82f10c9       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           55 seconds ago      Running             coredns                     0                   10f57f4658683       coredns-5dd5756b68-xwmdt                         kube-system
	32ab77e9658d7       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           55 seconds ago      Running             kindnet-cni                 0                   f9388ad762f5b       kindnet-td5zb                                    kube-system
	b0e7588da17af       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           55 seconds ago      Running             busybox                     1                   868ff34ed020a       busybox                                          default
	2f249517b99ac       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           55 seconds ago      Exited              storage-provisioner         0                   63843b39a7425       storage-provisioner                              kube-system
	b928c935db399       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           55 seconds ago      Running             kube-proxy                  0                   df65001b83cda       kube-proxy-tjbth                                 kube-system
	bbf4fe7bcb1ee       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           57 seconds ago      Running             etcd                        0                   cff067560d1de       etcd-old-k8s-version-468959                      kube-system
	07e72855c00ee       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           57 seconds ago      Running             kube-apiserver              0                   576f1b92ea461       kube-apiserver-old-k8s-version-468959            kube-system
	ef7e54548205b       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           57 seconds ago      Running             kube-scheduler              0                   2b08074662e53       kube-scheduler-old-k8s-version-468959            kube-system
	1415820809db8       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           57 seconds ago      Running             kube-controller-manager     0                   6a67ba3219763       kube-controller-manager-old-k8s-version-468959   kube-system
	
	
	==> coredns [2e436d82f10c9ab337c97fc80696a734a66eb15691f23ff94fdd4ad91ff89df5] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:46993 - 8781 "HINFO IN 7570531223480349424.7523887348228158236. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.087703845s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-468959
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-468959
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a09c31caecd3126cc8337ca95e52380a56f5a0f
	                    minikube.k8s.io/name=old-k8s-version-468959
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_27T19_39_16_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Oct 2025 19:39:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-468959
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Oct 2025 19:41:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Oct 2025 19:40:53 +0000   Mon, 27 Oct 2025 19:39:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Oct 2025 19:40:53 +0000   Mon, 27 Oct 2025 19:39:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Oct 2025 19:40:53 +0000   Mon, 27 Oct 2025 19:39:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Oct 2025 19:40:53 +0000   Mon, 27 Oct 2025 19:39:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-468959
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                2befee2f-4a53-4846-b84d-35620b9685cc
	  Boot ID:                    811bd29c-e64e-4acc-9427-bab1f7caed93
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 coredns-5dd5756b68-xwmdt                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     110s
	  kube-system                 etcd-old-k8s-version-468959                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m3s
	  kube-system                 kindnet-td5zb                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      111s
	  kube-system                 kube-apiserver-old-k8s-version-468959             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 kube-controller-manager-old-k8s-version-468959    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 kube-proxy-tjbth                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-scheduler-old-k8s-version-468959             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-r6m7z        0 (0%)        0 (0%)      0 (0%)           0 (0%)         43s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-mb5fm             0 (0%)        0 (0%)      0 (0%)           0 (0%)         43s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 110s               kube-proxy       
	  Normal  Starting                 55s                kube-proxy       
	  Normal  NodeHasSufficientMemory  2m3s               kubelet          Node old-k8s-version-468959 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m3s               kubelet          Node old-k8s-version-468959 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m3s               kubelet          Node old-k8s-version-468959 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m3s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           111s               node-controller  Node old-k8s-version-468959 event: Registered Node old-k8s-version-468959 in Controller
	  Normal  NodeReady                97s                kubelet          Node old-k8s-version-468959 status is now: NodeReady
	  Normal  Starting                 59s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  59s (x8 over 59s)  kubelet          Node old-k8s-version-468959 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    59s (x8 over 59s)  kubelet          Node old-k8s-version-468959 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     59s (x8 over 59s)  kubelet          Node old-k8s-version-468959 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           43s                node-controller  Node old-k8s-version-468959 event: Registered Node old-k8s-version-468959 in Controller
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5a 23 52 43 9a ba 08 06
	[  +0.000398] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 12 50 95 0e df 53 08 06
	[Oct27 18:59] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +1.017295] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +1.023893] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +1.023882] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +1.023891] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +1.023851] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +2.047849] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +4.031592] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +8.319143] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[ +16.382183] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[Oct27 19:00] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	
	
	==> etcd [bbf4fe7bcb1eef6c19d02157f5f9d45ada6d926195550b86406cb27a478cb520] <==
	{"level":"info","ts":"2025-10-27T19:40:20.462606Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-10-27T19:40:20.462678Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-10-27T19:40:20.462766Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-27T19:40:20.462795Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-27T19:40:20.465695Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-27T19:40:20.465935Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-27T19:40:20.465964Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-27T19:40:20.466022Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-27T19:40:20.466108Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-10-27T19:40:20.466603Z","caller":"etcdserver/server.go:738","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"9f0758e1c58a86ed","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2025-10-27T19:40:21.25022Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-27T19:40:21.250411Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-27T19:40:21.250448Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-10-27T19:40:21.250471Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-10-27T19:40:21.250481Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-10-27T19:40:21.250492Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-10-27T19:40:21.250502Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-10-27T19:40:21.252271Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-468959 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-27T19:40:21.252371Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-27T19:40:21.252576Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-27T19:40:21.252842Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-27T19:40:21.252616Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-27T19:40:21.256986Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-10-27T19:40:21.258017Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-27T19:40:30.116082Z","caller":"traceutil/trace.go:171","msg":"trace[1590140253] transaction","detail":"{read_only:false; response_revision:472; number_of_response:1; }","duration":"105.616202ms","start":"2025-10-27T19:40:30.01043Z","end":"2025-10-27T19:40:30.116046Z","steps":["trace[1590140253] 'process raft request'  (duration: 49.766453ms)","trace[1590140253] 'compare'  (duration: 55.717221ms)"],"step_count":2}
	
	
	==> kernel <==
	 19:41:18 up  2:23,  0 user,  load average: 2.56, 3.01, 2.01
	Linux old-k8s-version-468959 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [32ab77e9658d711ddb17ba898beed6884dc70565b485a14e92a38be93a33d1da] <==
	I1027 19:40:23.495086       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1027 19:40:23.495435       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1027 19:40:23.495611       1 main.go:148] setting mtu 1500 for CNI 
	I1027 19:40:23.495629       1 main.go:178] kindnetd IP family: "ipv4"
	I1027 19:40:23.495647       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-27T19:40:23Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1027 19:40:23.750348       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1027 19:40:23.750384       1 controller.go:381] "Waiting for informer caches to sync"
	I1027 19:40:23.750396       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1027 19:40:23.750773       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1027 19:40:24.051002       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1027 19:40:24.051035       1 metrics.go:72] Registering metrics
	I1027 19:40:24.051109       1 controller.go:711] "Syncing nftables rules"
	I1027 19:40:33.702439       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1027 19:40:33.702512       1 main.go:301] handling current node
	I1027 19:40:43.701255       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1027 19:40:43.701310       1 main.go:301] handling current node
	I1027 19:40:53.701209       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1027 19:40:53.701280       1 main.go:301] handling current node
	I1027 19:41:03.701244       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1027 19:41:03.701287       1 main.go:301] handling current node
	I1027 19:41:13.702993       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1027 19:41:13.703043       1 main.go:301] handling current node
	
	
	==> kube-apiserver [07e72855c00ee996d65390930e95dec1dbf22e238c37a44a46a98ed17c3b0651] <==
	I1027 19:40:22.735773       1 apf_controller.go:372] Starting API Priority and Fairness config controller
	I1027 19:40:22.795563       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1027 19:40:22.833024       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1027 19:40:22.834222       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1027 19:40:22.834717       1 shared_informer.go:318] Caches are synced for configmaps
	I1027 19:40:22.834725       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1027 19:40:22.834742       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1027 19:40:22.834959       1 aggregator.go:166] initial CRD sync complete...
	I1027 19:40:22.834970       1 autoregister_controller.go:141] Starting autoregister controller
	I1027 19:40:22.834977       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1027 19:40:22.834986       1 cache.go:39] Caches are synced for autoregister controller
	I1027 19:40:22.835979       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1027 19:40:22.836008       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1027 19:40:22.859560       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1027 19:40:23.737706       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1027 19:40:23.761435       1 controller.go:624] quota admission added evaluator for: namespaces
	I1027 19:40:23.798416       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1027 19:40:23.820098       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1027 19:40:23.831196       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1027 19:40:23.845740       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1027 19:40:23.925856       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.110.116.249"}
	I1027 19:40:23.989744       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.192.230"}
	I1027 19:40:35.069098       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1027 19:40:35.518922       1 controller.go:624] quota admission added evaluator for: endpoints
	I1027 19:40:35.570388       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [1415820809db89899722d08ef65bea69fc0e930dddf7cc3246da3d0cf8f8ca35] <==
	I1027 19:40:35.475813       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="82.429µs"
	I1027 19:40:35.476905       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-mb5fm"
	I1027 19:40:35.480011       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-r6m7z"
	I1027 19:40:35.489271       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="415.294345ms"
	I1027 19:40:35.489598       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="416.723355ms"
	I1027 19:40:35.495538       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="5.81027ms"
	I1027 19:40:35.495648       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="61.673µs"
	I1027 19:40:35.498563       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="64.852µs"
	I1027 19:40:35.501498       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="12.151382ms"
	I1027 19:40:35.501599       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="61.573µs"
	I1027 19:40:35.501643       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="27.057µs"
	I1027 19:40:35.513063       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="68.037µs"
	I1027 19:40:35.576337       1 endpointslice_controller.go:310] "Error syncing endpoint slices for service, retrying" key="kubernetes-dashboard/dashboard-metrics-scraper" err="EndpointSlice informer cache is out of date"
	I1027 19:40:35.595441       1 shared_informer.go:318] Caches are synced for garbage collector
	I1027 19:40:35.626947       1 shared_informer.go:318] Caches are synced for garbage collector
	I1027 19:40:35.626980       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1027 19:40:40.996226       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="22.796418ms"
	I1027 19:40:40.996345       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="57.183µs"
	I1027 19:40:43.981035       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="59.837µs"
	I1027 19:40:44.982095       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="67.046µs"
	I1027 19:40:45.988026       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="140.256µs"
	I1027 19:40:58.963202       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="7.104767ms"
	I1027 19:40:58.963340       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="81.542µs"
	I1027 19:41:01.035018       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="70.99µs"
	I1027 19:41:05.799533       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="124.267µs"
	
	
	==> kube-proxy [b928c935db3996d4e2c0bd1959759b9d8b29154925458393549fc24c4cf387fb] <==
	I1027 19:40:23.263631       1 server_others.go:69] "Using iptables proxy"
	I1027 19:40:23.274997       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1027 19:40:23.299722       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1027 19:40:23.302779       1 server_others.go:152] "Using iptables Proxier"
	I1027 19:40:23.302822       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1027 19:40:23.302833       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1027 19:40:23.302893       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1027 19:40:23.303245       1 server.go:846] "Version info" version="v1.28.0"
	I1027 19:40:23.303346       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 19:40:23.304057       1 config.go:188] "Starting service config controller"
	I1027 19:40:23.304497       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1027 19:40:23.304075       1 config.go:97] "Starting endpoint slice config controller"
	I1027 19:40:23.304150       1 config.go:315] "Starting node config controller"
	I1027 19:40:23.307446       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1027 19:40:23.307493       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1027 19:40:23.408199       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1027 19:40:23.408306       1 shared_informer.go:318] Caches are synced for service config
	I1027 19:40:23.408228       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [ef7e54548205b2d8355417aebc97fb016764235b2b1f28d56a8dd8368f3a58d8] <==
	I1027 19:40:20.953035       1 serving.go:348] Generated self-signed cert in-memory
	W1027 19:40:22.778808       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1027 19:40:22.778851       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1027 19:40:22.778868       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1027 19:40:22.778878       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1027 19:40:22.800852       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1027 19:40:22.800890       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 19:40:22.803363       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 19:40:22.803404       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1027 19:40:22.807681       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1027 19:40:22.807962       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1027 19:40:22.906335       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 27 19:40:35 old-k8s-version-468959 kubelet[708]: I1027 19:40:35.485716     708 topology_manager.go:215] "Topology Admit Handler" podUID="1c0e5f44-78ae-4b68-8df4-33d4ff6c4980" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-5f989dc9cf-r6m7z"
	Oct 27 19:40:35 old-k8s-version-468959 kubelet[708]: I1027 19:40:35.568511     708 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/aa553d39-b345-4aaa-badc-a7f124972284-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-mb5fm\" (UID: \"aa553d39-b345-4aaa-badc-a7f124972284\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-mb5fm"
	Oct 27 19:40:35 old-k8s-version-468959 kubelet[708]: I1027 19:40:35.568712     708 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hndz\" (UniqueName: \"kubernetes.io/projected/1c0e5f44-78ae-4b68-8df4-33d4ff6c4980-kube-api-access-5hndz\") pod \"dashboard-metrics-scraper-5f989dc9cf-r6m7z\" (UID: \"1c0e5f44-78ae-4b68-8df4-33d4ff6c4980\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-r6m7z"
	Oct 27 19:40:35 old-k8s-version-468959 kubelet[708]: I1027 19:40:35.568764     708 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/1c0e5f44-78ae-4b68-8df4-33d4ff6c4980-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-r6m7z\" (UID: \"1c0e5f44-78ae-4b68-8df4-33d4ff6c4980\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-r6m7z"
	Oct 27 19:40:35 old-k8s-version-468959 kubelet[708]: I1027 19:40:35.568885     708 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4zcd\" (UniqueName: \"kubernetes.io/projected/aa553d39-b345-4aaa-badc-a7f124972284-kube-api-access-w4zcd\") pod \"kubernetes-dashboard-8694d4445c-mb5fm\" (UID: \"aa553d39-b345-4aaa-badc-a7f124972284\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-mb5fm"
	Oct 27 19:40:43 old-k8s-version-468959 kubelet[708]: I1027 19:40:43.965854     708 scope.go:117] "RemoveContainer" containerID="88a7fe8d90dc09d19e5b3221783bb4d018b72eab2e09644a80d5946dc283df4f"
	Oct 27 19:40:43 old-k8s-version-468959 kubelet[708]: I1027 19:40:43.981030     708 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-mb5fm" podStartSLOduration=4.638243419 podCreationTimestamp="2025-10-27 19:40:35 +0000 UTC" firstStartedPulling="2025-10-27 19:40:35.809035922 +0000 UTC m=+16.112745207" lastFinishedPulling="2025-10-27 19:40:40.151742571 +0000 UTC m=+20.455451865" observedRunningTime="2025-10-27 19:40:40.974701804 +0000 UTC m=+21.278411106" watchObservedRunningTime="2025-10-27 19:40:43.980950077 +0000 UTC m=+24.284659373"
	Oct 27 19:40:44 old-k8s-version-468959 kubelet[708]: I1027 19:40:44.969822     708 scope.go:117] "RemoveContainer" containerID="88a7fe8d90dc09d19e5b3221783bb4d018b72eab2e09644a80d5946dc283df4f"
	Oct 27 19:40:44 old-k8s-version-468959 kubelet[708]: I1027 19:40:44.969997     708 scope.go:117] "RemoveContainer" containerID="b9692750c6802429c9250e188f4cf6dc0f0f123f6df32b84aa4a245a6bd40e60"
	Oct 27 19:40:44 old-k8s-version-468959 kubelet[708]: E1027 19:40:44.970416     708 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-r6m7z_kubernetes-dashboard(1c0e5f44-78ae-4b68-8df4-33d4ff6c4980)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-r6m7z" podUID="1c0e5f44-78ae-4b68-8df4-33d4ff6c4980"
	Oct 27 19:40:45 old-k8s-version-468959 kubelet[708]: I1027 19:40:45.976965     708 scope.go:117] "RemoveContainer" containerID="b9692750c6802429c9250e188f4cf6dc0f0f123f6df32b84aa4a245a6bd40e60"
	Oct 27 19:40:45 old-k8s-version-468959 kubelet[708]: E1027 19:40:45.977272     708 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-r6m7z_kubernetes-dashboard(1c0e5f44-78ae-4b68-8df4-33d4ff6c4980)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-r6m7z" podUID="1c0e5f44-78ae-4b68-8df4-33d4ff6c4980"
	Oct 27 19:40:46 old-k8s-version-468959 kubelet[708]: I1027 19:40:46.979967     708 scope.go:117] "RemoveContainer" containerID="b9692750c6802429c9250e188f4cf6dc0f0f123f6df32b84aa4a245a6bd40e60"
	Oct 27 19:40:46 old-k8s-version-468959 kubelet[708]: E1027 19:40:46.980247     708 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-r6m7z_kubernetes-dashboard(1c0e5f44-78ae-4b68-8df4-33d4ff6c4980)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-r6m7z" podUID="1c0e5f44-78ae-4b68-8df4-33d4ff6c4980"
	Oct 27 19:40:53 old-k8s-version-468959 kubelet[708]: I1027 19:40:53.998389     708 scope.go:117] "RemoveContainer" containerID="2f249517b99aca10f8d7cbf2e67e155472a7f47554aaf0bd3f1fe9dc0c41d3f7"
	Oct 27 19:41:00 old-k8s-version-468959 kubelet[708]: I1027 19:41:00.847354     708 scope.go:117] "RemoveContainer" containerID="b9692750c6802429c9250e188f4cf6dc0f0f123f6df32b84aa4a245a6bd40e60"
	Oct 27 19:41:01 old-k8s-version-468959 kubelet[708]: I1027 19:41:01.021375     708 scope.go:117] "RemoveContainer" containerID="b9692750c6802429c9250e188f4cf6dc0f0f123f6df32b84aa4a245a6bd40e60"
	Oct 27 19:41:01 old-k8s-version-468959 kubelet[708]: I1027 19:41:01.021613     708 scope.go:117] "RemoveContainer" containerID="f90740a0e28b478c1a0658aadb18b23d89ba64b844c2ab857f4e83834b57f69b"
	Oct 27 19:41:01 old-k8s-version-468959 kubelet[708]: E1027 19:41:01.022006     708 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-r6m7z_kubernetes-dashboard(1c0e5f44-78ae-4b68-8df4-33d4ff6c4980)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-r6m7z" podUID="1c0e5f44-78ae-4b68-8df4-33d4ff6c4980"
	Oct 27 19:41:05 old-k8s-version-468959 kubelet[708]: I1027 19:41:05.788435     708 scope.go:117] "RemoveContainer" containerID="f90740a0e28b478c1a0658aadb18b23d89ba64b844c2ab857f4e83834b57f69b"
	Oct 27 19:41:05 old-k8s-version-468959 kubelet[708]: E1027 19:41:05.788759     708 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-r6m7z_kubernetes-dashboard(1c0e5f44-78ae-4b68-8df4-33d4ff6c4980)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-r6m7z" podUID="1c0e5f44-78ae-4b68-8df4-33d4ff6c4980"
	Oct 27 19:41:12 old-k8s-version-468959 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 27 19:41:12 old-k8s-version-468959 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 27 19:41:12 old-k8s-version-468959 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 27 19:41:12 old-k8s-version-468959 systemd[1]: kubelet.service: Consumed 1.722s CPU time.
	
	
	==> kubernetes-dashboard [12d4f512371d8f5ce0f213cf3965c8a627febbdcc48831c69b8f3313bbdf87af] <==
	2025/10/27 19:40:40 Using namespace: kubernetes-dashboard
	2025/10/27 19:40:40 Using in-cluster config to connect to apiserver
	2025/10/27 19:40:40 Using secret token for csrf signing
	2025/10/27 19:40:40 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/27 19:40:40 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/27 19:40:40 Successful initial request to the apiserver, version: v1.28.0
	2025/10/27 19:40:40 Generating JWE encryption key
	2025/10/27 19:40:40 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/27 19:40:40 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/27 19:40:40 Initializing JWE encryption key from synchronized object
	2025/10/27 19:40:40 Creating in-cluster Sidecar client
	2025/10/27 19:40:40 Serving insecurely on HTTP port: 9090
	2025/10/27 19:40:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/27 19:41:10 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/27 19:40:40 Starting overwatch
	
	
	==> storage-provisioner [2f249517b99aca10f8d7cbf2e67e155472a7f47554aaf0bd3f1fe9dc0c41d3f7] <==
	I1027 19:40:23.207718       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1027 19:40:53.211754       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [c3d66e2dd322da5d8554d09ea3b176065c6fe4ba6f6c1b0ca6612474fc69cd91] <==
	I1027 19:40:54.047329       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1027 19:40:54.055847       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1027 19:40:54.055888       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1027 19:41:11.457368       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1027 19:41:11.457524       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"677bc0f8-1050-43ba-894e-0ebdacb32030", APIVersion:"v1", ResourceVersion:"623", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-468959_5168af45-d2fa-46b2-bc4a-7e149f799f2c became leader
	I1027 19:41:11.457604       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-468959_5168af45-d2fa-46b2-bc4a-7e149f799f2c!
	I1027 19:41:11.558187       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-468959_5168af45-d2fa-46b2-bc4a-7e149f799f2c!
	

                                                
                                                
-- /stdout --
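Note: the dump above is the standard `minikube logs` bundle for the profile (crio journal, per-container tails, kubelet). It can be re-collected with the same invocations this report uses elsewhere (`logs -n 25`) and recommends in its advice box (`--file=logs.txt`); a sketch against this profile:

	out/minikube-linux-amd64 -p old-k8s-version-468959 logs -n 25
	out/minikube-linux-amd64 -p old-k8s-version-468959 logs --file=logs.txt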
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-468959 -n old-k8s-version-468959
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-468959 -n old-k8s-version-468959: exit status 2 (405.492903ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
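Note: {{.APIServer}} renders a single field of minikube's status struct; the harness also queries {{.Host}} further below. Assuming --format accepts an arbitrary Go template, as those single-field uses suggest, both fields can be read in one call; a sketch, not harness code:

	out/minikube-linux-amd64 status -p old-k8s-version-468959 --format='{{.Host}} {{.APIServer}}'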
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-468959 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (7.18s)
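Note: the jsonpath query above prints only the names of non-Running pods; dropping the -o flag gives the human-readable table for the same filter (a sketch reusing the exact flags from the query above):

	kubectl --context old-k8s-version-468959 get po -A --field-selector=status.phase!=Running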

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.73s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-095885 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-095885 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (272.298272ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T19:41:13Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
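Note: the failed paused check shells out to `sudo runc list -f json` on the node, and the stderr shows /run/runc missing on this crio-based image. A sketch for replaying the check by hand, assuming the profile's node is reachable via `minikube ssh`:

	out/minikube-linux-amd64 -p no-preload-095885 ssh -- sudo runc list -f json
	out/minikube-linux-amd64 -p no-preload-095885 ssh -- ls -la /run/runc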
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p no-preload-095885 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-095885 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-095885 describe deploy/metrics-server -n kube-system: exit status 1 (70.82157ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-095885 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
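Note: the NotFound is consistent with the enable failing before any manifest was applied. A quick cross-check that the addon never turned on is minikube's own addon listing (a standard subcommand, not one this test runs):

	out/minikube-linux-amd64 -p no-preload-095885 addons list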
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-095885
helpers_test.go:243: (dbg) docker inspect no-preload-095885:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4cc5fd138a234f7595c3ab65ba5a1ba3edb67bef1c67cdf1d9cf853e33a19613",
	        "Created": "2025-10-27T19:40:14.994574328Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 586200,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-27T19:40:15.033608252Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/4cc5fd138a234f7595c3ab65ba5a1ba3edb67bef1c67cdf1d9cf853e33a19613/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4cc5fd138a234f7595c3ab65ba5a1ba3edb67bef1c67cdf1d9cf853e33a19613/hostname",
	        "HostsPath": "/var/lib/docker/containers/4cc5fd138a234f7595c3ab65ba5a1ba3edb67bef1c67cdf1d9cf853e33a19613/hosts",
	        "LogPath": "/var/lib/docker/containers/4cc5fd138a234f7595c3ab65ba5a1ba3edb67bef1c67cdf1d9cf853e33a19613/4cc5fd138a234f7595c3ab65ba5a1ba3edb67bef1c67cdf1d9cf853e33a19613-json.log",
	        "Name": "/no-preload-095885",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-095885:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-095885",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4cc5fd138a234f7595c3ab65ba5a1ba3edb67bef1c67cdf1d9cf853e33a19613",
	                "LowerDir": "/var/lib/docker/overlay2/3da4c71b650bdf8fc78ee58176e8542686fb887dd144b15140026baa7af00784-init/diff:/var/lib/docker/overlay2/71b61ec94610a35f2d924dec358052d4c154c36b3fe219802f60246ca2dc7f45/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3da4c71b650bdf8fc78ee58176e8542686fb887dd144b15140026baa7af00784/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3da4c71b650bdf8fc78ee58176e8542686fb887dd144b15140026baa7af00784/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3da4c71b650bdf8fc78ee58176e8542686fb887dd144b15140026baa7af00784/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-095885",
	                "Source": "/var/lib/docker/volumes/no-preload-095885/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-095885",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-095885",
	                "name.minikube.sigs.k8s.io": "no-preload-095885",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7e8eea65fa9f6f408a7750f0395f7b443d4f33481824e983027a3926e1aea3ff",
	            "SandboxKey": "/var/run/docker/netns/7e8eea65fa9f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33440"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33441"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33444"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33442"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33443"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-095885": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "c2:d9:d3:b3:80:88",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0e1134f19412aeb25ca458bad13821f54c33ad8f2fba3617f69283b33058934f",
	                    "EndpointID": "5137495c7c9f1dbc4ca2403726af5b108ce036f7792782aa2e702bc5cd56fb81",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-095885",
	                        "4cc5fd138a23"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
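The "Ports" map in the inspect output above is what minikube reads to locate the container's published endpoints; 22/tcp -> 127.0.0.1:33440 is the SSH port it provisions over. As a sketch (reusing the same Go template that appears later in these logs, pointed at this container), the host port alone can be pulled with:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' no-preload-095885
	# per the JSON above, this should print 33440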
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-095885 -n no-preload-095885
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-095885 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-095885 logs -n 25: (1.423917179s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                              ARGS                                                                               │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ mount     │ -p functional-051715 --kill=true                                                                                                                                │ functional-051715      │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │                     │
	│ ssh       │ functional-051715 ssh echo hello                                                                                                                                │ functional-051715      │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │ 27 Oct 25 19:04 UTC │
	│ ssh       │ functional-051715 ssh cat /etc/hostname                                                                                                                         │ functional-051715      │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │ 27 Oct 25 19:04 UTC │
	│ tunnel    │ functional-051715 tunnel --alsologtostderr                                                                                                                      │ functional-051715      │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │                     │
	│ tunnel    │ functional-051715 tunnel --alsologtostderr                                                                                                                      │ functional-051715      │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │                     │
	│ stop      │ -p embed-certs-919237 --alsologtostderr -v=3                                                                                                                    │ embed-certs-919237     │ jenkins │ v1.37.0 │ 27 Oct 25 19:40 UTC │ 27 Oct 25 19:41 UTC │
	│ tunnel    │ functional-051715 tunnel --alsologtostderr                                                                                                                      │ functional-051715      │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-051715 --alsologtostderr -v=1                                                                                                  │ functional-051715      │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │ 27 Oct 25 19:04 UTC │
	│ start     │ -p functional-051715 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio                                                       │ functional-051715      │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │                     │
	│ start     │ -p functional-051715 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                 │ functional-051715      │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │                     │
	│ addons    │ functional-051715 addons list                                                                                                                                   │ functional-051715      │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │ 27 Oct 25 19:04 UTC │
	│ addons    │ functional-051715 addons list -o json                                                                                                                           │ functional-051715      │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │ 27 Oct 25 19:04 UTC │
	│ image     │ functional-051715 image load --daemon kicbase/echo-server:functional-051715 --alsologtostderr                                                                   │ functional-051715      │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │ 27 Oct 25 19:04 UTC │
	│ image     │ functional-051715 image ls                                                                                                                                      │ functional-051715      │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │ 27 Oct 25 19:04 UTC │
	│ image     │ functional-051715 image load --daemon kicbase/echo-server:functional-051715 --alsologtostderr                                                                   │ functional-051715      │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │ 27 Oct 25 19:04 UTC │
	│ image     │ functional-051715 image ls                                                                                                                                      │ functional-051715      │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │ 27 Oct 25 19:04 UTC │
	│ image     │ functional-051715 image load --daemon kicbase/echo-server:functional-051715 --alsologtostderr                                                                   │ functional-051715      │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │ 27 Oct 25 19:04 UTC │
	│ image     │ functional-051715 image ls                                                                                                                                      │ functional-051715      │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │ 27 Oct 25 19:04 UTC │
	│ image     │ functional-051715 image save kicbase/echo-server:functional-051715 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr │ functional-051715      │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │ 27 Oct 25 19:04 UTC │
	│ image     │ functional-051715 image rm kicbase/echo-server:functional-051715 --alsologtostderr                                                                              │ functional-051715      │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │ 27 Oct 25 19:04 UTC │
	│ addons    │ enable dashboard -p embed-certs-919237 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                   │ embed-certs-919237     │ jenkins │ v1.37.0 │ 27 Oct 25 19:41 UTC │ 27 Oct 25 19:41 UTC │
	│ start     │ -p embed-certs-919237 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1          │ embed-certs-919237     │ jenkins │ v1.37.0 │ 27 Oct 25 19:41 UTC │                     │
	│ image     │ old-k8s-version-468959 image list --format=json                                                                                                                 │ old-k8s-version-468959 │ jenkins │ v1.37.0 │ 27 Oct 25 19:41 UTC │ 27 Oct 25 19:41 UTC │
	│ pause     │ -p old-k8s-version-468959 --alsologtostderr -v=1                                                                                                                │ old-k8s-version-468959 │ jenkins │ v1.37.0 │ 27 Oct 25 19:41 UTC │                     │
	│ addons    │ enable metrics-server -p no-preload-095885 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                         │ no-preload-095885      │ jenkins │ v1.37.0 │ 27 Oct 25 19:41 UTC │                     │
	└───────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
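The last audit entry is the step under test here; its empty END TIME column matches the failure recorded above. Expanded from the COMMAND and ARGS columns into a full invocation, it is approximately:

	out/minikube-linux-amd64 addons enable metrics-server -p no-preload-095885 \
	  --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
	  --registries=MetricsServer=fake.domain

The fake.domain registry and echoserver image appear to be deliberate substitutions: the test exercises whether custom addon image/registry overrides are applied, not whether the real metrics-server image pulls.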
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 19:41:00
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
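The headers follow the standard glog/klog format spelled out above: a severity letter (I/W/E/F), month and day, wall-clock time with microseconds, the thread id, and the emitting source file and line. A quick sketch for skimming only the warnings and errors out of a saved copy of this log (the filename is illustrative):

	grep -E '^[[:space:]]*[WEF][0-9]{4} ' last-start.log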
	I1027 19:41:00.814297  594803 out.go:360] Setting OutFile to fd 1 ...
	I1027 19:41:00.814654  594803 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:41:00.814666  594803 out.go:374] Setting ErrFile to fd 2...
	I1027 19:41:00.814672  594803 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:41:00.815019  594803 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-352833/.minikube/bin
	I1027 19:41:00.815611  594803 out.go:368] Setting JSON to false
	I1027 19:41:00.819938  594803 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":8610,"bootTime":1761585451,"procs":357,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1027 19:41:00.820105  594803 start.go:141] virtualization: kvm guest
	I1027 19:41:00.822276  594803 out.go:179] * [embed-certs-919237] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1027 19:41:00.824552  594803 notify.go:220] Checking for updates...
	I1027 19:41:00.824589  594803 out.go:179]   - MINIKUBE_LOCATION=21801
	I1027 19:41:00.825920  594803 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 19:41:00.827493  594803 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21801-352833/kubeconfig
	I1027 19:41:00.829068  594803 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21801-352833/.minikube
	I1027 19:41:00.830346  594803 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1027 19:41:00.831676  594803 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 19:41:00.833634  594803 config.go:182] Loaded profile config "embed-certs-919237": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:41:00.834328  594803 driver.go:421] Setting default libvirt URI to qemu:///system
	I1027 19:41:00.865817  594803 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1027 19:41:00.865940  594803 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 19:41:00.939681  594803 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-27 19:41:00.928512266 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 19:41:00.939791  594803 docker.go:318] overlay module found
	I1027 19:41:00.942901  594803 out.go:179] * Using the docker driver based on existing profile
	I1027 19:41:00.944254  594803 start.go:305] selected driver: docker
	I1027 19:41:00.944276  594803 start.go:925] validating driver "docker" against &{Name:embed-certs-919237 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-919237 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 19:41:00.944438  594803 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 19:41:00.945045  594803 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 19:41:01.009596  594803 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-27 19:41:00.998454107 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 19:41:01.009899  594803 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 19:41:01.009935  594803 cni.go:84] Creating CNI manager for ""
	I1027 19:41:01.009994  594803 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 19:41:01.010033  594803 start.go:349] cluster config:
	{Name:embed-certs-919237 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-919237 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 19:41:01.012102  594803 out.go:179] * Starting "embed-certs-919237" primary control-plane node in "embed-certs-919237" cluster
	I1027 19:41:01.013642  594803 cache.go:123] Beginning downloading kic base image for docker with crio
	I1027 19:41:01.015027  594803 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1027 19:41:01.016245  594803 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 19:41:01.016338  594803 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21801-352833/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1027 19:41:01.016364  594803 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1027 19:41:01.016374  594803 cache.go:58] Caching tarball of preloaded images
	I1027 19:41:01.016491  594803 preload.go:233] Found /home/jenkins/minikube-integration/21801-352833/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1027 19:41:01.016508  594803 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1027 19:41:01.016671  594803 profile.go:143] Saving config to /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/embed-certs-919237/config.json ...
	I1027 19:41:01.043736  594803 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1027 19:41:01.043771  594803 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1027 19:41:01.043794  594803 cache.go:232] Successfully downloaded all kic artifacts
	I1027 19:41:01.043828  594803 start.go:360] acquireMachinesLock for embed-certs-919237: {Name:mka6dd5e9788015cfc40a76e0480af6167e6c17e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 19:41:01.043925  594803 start.go:364] duration metric: took 53.412µs to acquireMachinesLock for "embed-certs-919237"
	I1027 19:41:01.043948  594803 start.go:96] Skipping create...Using existing machine configuration
	I1027 19:41:01.043956  594803 fix.go:54] fixHost starting: 
	I1027 19:41:01.044294  594803 cli_runner.go:164] Run: docker container inspect embed-certs-919237 --format={{.State.Status}}
	I1027 19:41:01.063875  594803 fix.go:112] recreateIfNeeded on embed-certs-919237: state=Stopped err=<nil>
	W1027 19:41:01.063922  594803 fix.go:138] unexpected machine state, will restart: <nil>
	I1027 19:40:58.026030  565798 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1027 19:40:58.026613  565798 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1027 19:40:58.026685  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 19:40:58.026737  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 19:40:58.057129  565798 cri.go:89] found id: "f34fc0e2944cfceb17dc29d80d9459a84377ffb9e427b9efe6e57ccc03a358c8"
	I1027 19:40:58.057167  565798 cri.go:89] found id: ""
	I1027 19:40:58.057177  565798 logs.go:282] 1 containers: [f34fc0e2944cfceb17dc29d80d9459a84377ffb9e427b9efe6e57ccc03a358c8]
	I1027 19:40:58.057246  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:40:58.061704  565798 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 19:40:58.061775  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 19:40:58.090405  565798 cri.go:89] found id: ""
	I1027 19:40:58.090438  565798 logs.go:282] 0 containers: []
	W1027 19:40:58.090450  565798 logs.go:284] No container was found matching "etcd"
	I1027 19:40:58.090459  565798 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 19:40:58.090524  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 19:40:58.120023  565798 cri.go:89] found id: ""
	I1027 19:40:58.120053  565798 logs.go:282] 0 containers: []
	W1027 19:40:58.120064  565798 logs.go:284] No container was found matching "coredns"
	I1027 19:40:58.120074  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 19:40:58.120150  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 19:40:58.150017  565798 cri.go:89] found id: "15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8"
	I1027 19:40:58.150043  565798 cri.go:89] found id: ""
	I1027 19:40:58.150052  565798 logs.go:282] 1 containers: [15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8]
	I1027 19:40:58.150108  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:40:58.154647  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 19:40:58.154712  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 19:40:58.183854  565798 cri.go:89] found id: ""
	I1027 19:40:58.183879  565798 logs.go:282] 0 containers: []
	W1027 19:40:58.183888  565798 logs.go:284] No container was found matching "kube-proxy"
	I1027 19:40:58.183894  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 19:40:58.183943  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 19:40:58.212083  565798 cri.go:89] found id: "38a297db55d8c45b09d97e05e26cfe590841ea63fa7ee874cf818f8ae5fcff77"
	I1027 19:40:58.212102  565798 cri.go:89] found id: "df060ac929bc7a5dac337c7e85e10b2f4a51413be70b8202c8307826c4a72947"
	I1027 19:40:58.212106  565798 cri.go:89] found id: ""
	I1027 19:40:58.212114  565798 logs.go:282] 2 containers: [38a297db55d8c45b09d97e05e26cfe590841ea63fa7ee874cf818f8ae5fcff77 df060ac929bc7a5dac337c7e85e10b2f4a51413be70b8202c8307826c4a72947]
	I1027 19:40:58.212185  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:40:58.216480  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:40:58.220450  565798 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 19:40:58.220522  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 19:40:58.249431  565798 cri.go:89] found id: ""
	I1027 19:40:58.249455  565798 logs.go:282] 0 containers: []
	W1027 19:40:58.249463  565798 logs.go:284] No container was found matching "kindnet"
	I1027 19:40:58.249469  565798 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 19:40:58.249515  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 19:40:58.278301  565798 cri.go:89] found id: ""
	I1027 19:40:58.278327  565798 logs.go:282] 0 containers: []
	W1027 19:40:58.278334  565798 logs.go:284] No container was found matching "storage-provisioner"
	I1027 19:40:58.278352  565798 logs.go:123] Gathering logs for kubelet ...
	I1027 19:40:58.278366  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 19:40:58.361232  565798 logs.go:123] Gathering logs for dmesg ...
	I1027 19:40:58.361276  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 19:40:58.384714  565798 logs.go:123] Gathering logs for kube-controller-manager [df060ac929bc7a5dac337c7e85e10b2f4a51413be70b8202c8307826c4a72947] ...
	I1027 19:40:58.384753  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 df060ac929bc7a5dac337c7e85e10b2f4a51413be70b8202c8307826c4a72947"
	I1027 19:40:58.415348  565798 logs.go:123] Gathering logs for CRI-O ...
	I1027 19:40:58.415382  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 19:40:58.463651  565798 logs.go:123] Gathering logs for container status ...
	I1027 19:40:58.463690  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 19:40:58.498078  565798 logs.go:123] Gathering logs for describe nodes ...
	I1027 19:40:58.498125  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 19:40:58.558995  565798 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 19:40:58.559018  565798 logs.go:123] Gathering logs for kube-apiserver [f34fc0e2944cfceb17dc29d80d9459a84377ffb9e427b9efe6e57ccc03a358c8] ...
	I1027 19:40:58.559035  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f34fc0e2944cfceb17dc29d80d9459a84377ffb9e427b9efe6e57ccc03a358c8"
	I1027 19:40:58.594584  565798 logs.go:123] Gathering logs for kube-scheduler [15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8] ...
	I1027 19:40:58.594625  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8"
	I1027 19:40:58.645514  565798 logs.go:123] Gathering logs for kube-controller-manager [38a297db55d8c45b09d97e05e26cfe590841ea63fa7ee874cf818f8ae5fcff77] ...
	I1027 19:40:58.645551  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 38a297db55d8c45b09d97e05e26cfe590841ea63fa7ee874cf818f8ae5fcff77"
	I1027 19:41:01.178225  565798 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1027 19:41:01.178694  565798 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1027 19:41:01.178745  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 19:41:01.178791  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 19:41:01.210901  565798 cri.go:89] found id: "f34fc0e2944cfceb17dc29d80d9459a84377ffb9e427b9efe6e57ccc03a358c8"
	I1027 19:41:01.210925  565798 cri.go:89] found id: ""
	I1027 19:41:01.210936  565798 logs.go:282] 1 containers: [f34fc0e2944cfceb17dc29d80d9459a84377ffb9e427b9efe6e57ccc03a358c8]
	I1027 19:41:01.211006  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:41:01.215571  565798 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 19:41:01.215658  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 19:41:01.247466  565798 cri.go:89] found id: ""
	I1027 19:41:01.247503  565798 logs.go:282] 0 containers: []
	W1027 19:41:01.247514  565798 logs.go:284] No container was found matching "etcd"
	I1027 19:41:01.247523  565798 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 19:41:01.247591  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 19:41:01.281986  565798 cri.go:89] found id: ""
	I1027 19:41:01.282024  565798 logs.go:282] 0 containers: []
	W1027 19:41:01.282036  565798 logs.go:284] No container was found matching "coredns"
	I1027 19:41:01.282044  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 19:41:01.282106  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 19:41:01.312897  565798 cri.go:89] found id: "15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8"
	I1027 19:41:01.312929  565798 cri.go:89] found id: ""
	I1027 19:41:01.312940  565798 logs.go:282] 1 containers: [15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8]
	I1027 19:41:01.313010  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:41:01.317732  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 19:41:01.317823  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 19:41:01.349672  565798 cri.go:89] found id: ""
	I1027 19:41:01.349702  565798 logs.go:282] 0 containers: []
	W1027 19:41:01.349714  565798 logs.go:284] No container was found matching "kube-proxy"
	I1027 19:41:01.349722  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 19:41:01.349783  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 19:41:01.383805  565798 cri.go:89] found id: "38a297db55d8c45b09d97e05e26cfe590841ea63fa7ee874cf818f8ae5fcff77"
	I1027 19:41:01.383830  565798 cri.go:89] found id: ""
	I1027 19:41:01.383842  565798 logs.go:282] 1 containers: [38a297db55d8c45b09d97e05e26cfe590841ea63fa7ee874cf818f8ae5fcff77]
	I1027 19:41:01.383906  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:41:01.388901  565798 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 19:41:01.388976  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 19:41:01.421041  565798 cri.go:89] found id: ""
	I1027 19:41:01.421066  565798 logs.go:282] 0 containers: []
	W1027 19:41:01.421074  565798 logs.go:284] No container was found matching "kindnet"
	I1027 19:41:01.421082  565798 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 19:41:01.421184  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 19:41:01.451707  565798 cri.go:89] found id: ""
	I1027 19:41:01.451736  565798 logs.go:282] 0 containers: []
	W1027 19:41:01.451744  565798 logs.go:284] No container was found matching "storage-provisioner"
	I1027 19:41:01.451754  565798 logs.go:123] Gathering logs for CRI-O ...
	I1027 19:41:01.451766  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 19:41:01.510573  565798 logs.go:123] Gathering logs for container status ...
	I1027 19:41:01.510618  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1027 19:41:00.819934  585556 node_ready.go:57] node "no-preload-095885" has "Ready":"False" status (will retry)
	I1027 19:41:02.819169  585556 node_ready.go:49] node "no-preload-095885" is "Ready"
	I1027 19:41:02.819209  585556 node_ready.go:38] duration metric: took 13.003808085s for node "no-preload-095885" to be "Ready" ...
	I1027 19:41:02.819229  585556 api_server.go:52] waiting for apiserver process to appear ...
	I1027 19:41:02.819306  585556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 19:41:02.833188  585556 api_server.go:72] duration metric: took 13.35947841s to wait for apiserver process to appear ...
	I1027 19:41:02.833220  585556 api_server.go:88] waiting for apiserver healthz status ...
	I1027 19:41:02.833241  585556 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1027 19:41:02.838750  585556 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1027 19:41:02.839890  585556 api_server.go:141] control plane version: v1.34.1
	I1027 19:41:02.839920  585556 api_server.go:131] duration metric: took 6.693245ms to wait for apiserver health ...
	I1027 19:41:02.839930  585556 system_pods.go:43] waiting for kube-system pods to appear ...
	I1027 19:41:02.843755  585556 system_pods.go:59] 8 kube-system pods found
	I1027 19:41:02.843791  585556 system_pods.go:61] "coredns-66bc5c9577-gwqvg" [3bcd75c1-f42f-4252-b1fc-2bdab3c8373e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 19:41:02.843797  585556 system_pods.go:61] "etcd-no-preload-095885" [398272ac-d5cc-44d6-bf2a-3469d316b417] Running
	I1027 19:41:02.843803  585556 system_pods.go:61] "kindnet-8lbz5" [42b05fb3-87d3-412f-ac73-cb73a737aab1] Running
	I1027 19:41:02.843807  585556 system_pods.go:61] "kube-apiserver-no-preload-095885" [d609db88-4097-43b5-b881-a445344edf64] Running
	I1027 19:41:02.843811  585556 system_pods.go:61] "kube-controller-manager-no-preload-095885" [b1bfd486-ed1f-4f8b-a08b-de7739f1dd9c] Running
	I1027 19:41:02.843814  585556 system_pods.go:61] "kube-proxy-wz64m" [339cb07c-5319-4d8b-ab61-a6d377c2bc61] Running
	I1027 19:41:02.843817  585556 system_pods.go:61] "kube-scheduler-no-preload-095885" [7ba1709a-c913-40f3-833b-bee63057ce6e] Running
	I1027 19:41:02.843822  585556 system_pods.go:61] "storage-provisioner" [e8283562-be98-444b-b591-a0239860e729] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 19:41:02.843829  585556 system_pods.go:74] duration metric: took 3.89196ms to wait for pod list to return data ...
	I1027 19:41:02.843841  585556 default_sa.go:34] waiting for default service account to be created ...
	I1027 19:41:02.846583  585556 default_sa.go:45] found service account: "default"
	I1027 19:41:02.846611  585556 default_sa.go:55] duration metric: took 2.763753ms for default service account to be created ...
	I1027 19:41:02.846622  585556 system_pods.go:116] waiting for k8s-apps to be running ...
	I1027 19:41:02.849879  585556 system_pods.go:86] 8 kube-system pods found
	I1027 19:41:02.849914  585556 system_pods.go:89] "coredns-66bc5c9577-gwqvg" [3bcd75c1-f42f-4252-b1fc-2bdab3c8373e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 19:41:02.849920  585556 system_pods.go:89] "etcd-no-preload-095885" [398272ac-d5cc-44d6-bf2a-3469d316b417] Running
	I1027 19:41:02.849926  585556 system_pods.go:89] "kindnet-8lbz5" [42b05fb3-87d3-412f-ac73-cb73a737aab1] Running
	I1027 19:41:02.849930  585556 system_pods.go:89] "kube-apiserver-no-preload-095885" [d609db88-4097-43b5-b881-a445344edf64] Running
	I1027 19:41:02.849935  585556 system_pods.go:89] "kube-controller-manager-no-preload-095885" [b1bfd486-ed1f-4f8b-a08b-de7739f1dd9c] Running
	I1027 19:41:02.849938  585556 system_pods.go:89] "kube-proxy-wz64m" [339cb07c-5319-4d8b-ab61-a6d377c2bc61] Running
	I1027 19:41:02.849942  585556 system_pods.go:89] "kube-scheduler-no-preload-095885" [7ba1709a-c913-40f3-833b-bee63057ce6e] Running
	I1027 19:41:02.849946  585556 system_pods.go:89] "storage-provisioner" [e8283562-be98-444b-b591-a0239860e729] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 19:41:02.849981  585556 retry.go:31] will retry after 208.530125ms: missing components: kube-dns
	I1027 19:41:03.063213  585556 system_pods.go:86] 8 kube-system pods found
	I1027 19:41:03.063246  585556 system_pods.go:89] "coredns-66bc5c9577-gwqvg" [3bcd75c1-f42f-4252-b1fc-2bdab3c8373e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 19:41:03.063252  585556 system_pods.go:89] "etcd-no-preload-095885" [398272ac-d5cc-44d6-bf2a-3469d316b417] Running
	I1027 19:41:03.063259  585556 system_pods.go:89] "kindnet-8lbz5" [42b05fb3-87d3-412f-ac73-cb73a737aab1] Running
	I1027 19:41:03.063269  585556 system_pods.go:89] "kube-apiserver-no-preload-095885" [d609db88-4097-43b5-b881-a445344edf64] Running
	I1027 19:41:03.063273  585556 system_pods.go:89] "kube-controller-manager-no-preload-095885" [b1bfd486-ed1f-4f8b-a08b-de7739f1dd9c] Running
	I1027 19:41:03.063277  585556 system_pods.go:89] "kube-proxy-wz64m" [339cb07c-5319-4d8b-ab61-a6d377c2bc61] Running
	I1027 19:41:03.063283  585556 system_pods.go:89] "kube-scheduler-no-preload-095885" [7ba1709a-c913-40f3-833b-bee63057ce6e] Running
	I1027 19:41:03.063290  585556 system_pods.go:89] "storage-provisioner" [e8283562-be98-444b-b591-a0239860e729] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 19:41:03.063312  585556 retry.go:31] will retry after 387.065987ms: missing components: kube-dns
	I1027 19:41:03.454191  585556 system_pods.go:86] 8 kube-system pods found
	I1027 19:41:03.454223  585556 system_pods.go:89] "coredns-66bc5c9577-gwqvg" [3bcd75c1-f42f-4252-b1fc-2bdab3c8373e] Running
	I1027 19:41:03.454229  585556 system_pods.go:89] "etcd-no-preload-095885" [398272ac-d5cc-44d6-bf2a-3469d316b417] Running
	I1027 19:41:03.454233  585556 system_pods.go:89] "kindnet-8lbz5" [42b05fb3-87d3-412f-ac73-cb73a737aab1] Running
	I1027 19:41:03.454236  585556 system_pods.go:89] "kube-apiserver-no-preload-095885" [d609db88-4097-43b5-b881-a445344edf64] Running
	I1027 19:41:03.454241  585556 system_pods.go:89] "kube-controller-manager-no-preload-095885" [b1bfd486-ed1f-4f8b-a08b-de7739f1dd9c] Running
	I1027 19:41:03.454244  585556 system_pods.go:89] "kube-proxy-wz64m" [339cb07c-5319-4d8b-ab61-a6d377c2bc61] Running
	I1027 19:41:03.454248  585556 system_pods.go:89] "kube-scheduler-no-preload-095885" [7ba1709a-c913-40f3-833b-bee63057ce6e] Running
	I1027 19:41:03.454251  585556 system_pods.go:89] "storage-provisioner" [e8283562-be98-444b-b591-a0239860e729] Running
	I1027 19:41:03.454261  585556 system_pods.go:126] duration metric: took 607.631414ms to wait for k8s-apps to be running ...
	I1027 19:41:03.454271  585556 system_svc.go:44] waiting for kubelet service to be running ....
	I1027 19:41:03.454342  585556 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 19:41:03.469661  585556 system_svc.go:56] duration metric: took 15.375165ms WaitForService to wait for kubelet
	I1027 19:41:03.469692  585556 kubeadm.go:586] duration metric: took 13.995993942s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 19:41:03.469713  585556 node_conditions.go:102] verifying NodePressure condition ...
	I1027 19:41:03.473051  585556 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1027 19:41:03.473084  585556 node_conditions.go:123] node cpu capacity is 8
	I1027 19:41:03.473098  585556 node_conditions.go:105] duration metric: took 3.378892ms to run NodePressure ...
	I1027 19:41:03.473110  585556 start.go:241] waiting for startup goroutines ...
	I1027 19:41:03.473116  585556 start.go:246] waiting for cluster config update ...
	I1027 19:41:03.473127  585556 start.go:255] writing updated cluster config ...
	I1027 19:41:03.473547  585556 ssh_runner.go:195] Run: rm -f paused
	I1027 19:41:03.478479  585556 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 19:41:03.482432  585556 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-gwqvg" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:41:03.487649  585556 pod_ready.go:94] pod "coredns-66bc5c9577-gwqvg" is "Ready"
	I1027 19:41:03.487680  585556 pod_ready.go:86] duration metric: took 5.219183ms for pod "coredns-66bc5c9577-gwqvg" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:41:03.489989  585556 pod_ready.go:83] waiting for pod "etcd-no-preload-095885" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:41:03.494299  585556 pod_ready.go:94] pod "etcd-no-preload-095885" is "Ready"
	I1027 19:41:03.494327  585556 pod_ready.go:86] duration metric: took 4.312641ms for pod "etcd-no-preload-095885" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:41:03.496451  585556 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-095885" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:41:03.500973  585556 pod_ready.go:94] pod "kube-apiserver-no-preload-095885" is "Ready"
	I1027 19:41:03.501001  585556 pod_ready.go:86] duration metric: took 4.521998ms for pod "kube-apiserver-no-preload-095885" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:41:03.503226  585556 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-095885" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:41:03.883037  585556 pod_ready.go:94] pod "kube-controller-manager-no-preload-095885" is "Ready"
	I1027 19:41:03.883068  585556 pod_ready.go:86] duration metric: took 379.813717ms for pod "kube-controller-manager-no-preload-095885" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:41:04.083654  585556 pod_ready.go:83] waiting for pod "kube-proxy-wz64m" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:41:04.482474  585556 pod_ready.go:94] pod "kube-proxy-wz64m" is "Ready"
	I1027 19:41:04.482513  585556 pod_ready.go:86] duration metric: took 398.821516ms for pod "kube-proxy-wz64m" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:41:04.682931  585556 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-095885" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:41:05.082246  585556 pod_ready.go:94] pod "kube-scheduler-no-preload-095885" is "Ready"
	I1027 19:41:05.082304  585556 pod_ready.go:86] duration metric: took 399.325532ms for pod "kube-scheduler-no-preload-095885" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:41:05.082322  585556 pod_ready.go:40] duration metric: took 1.603803236s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 19:41:05.130054  585556 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1027 19:41:05.132095  585556 out.go:179] * Done! kubectl is now configured to use "no-preload-095885" cluster and "default" namespace by default
	I1027 19:41:01.066520  594803 out.go:252] * Restarting existing docker container for "embed-certs-919237" ...
	I1027 19:41:01.066614  594803 cli_runner.go:164] Run: docker start embed-certs-919237
	I1027 19:41:01.345192  594803 cli_runner.go:164] Run: docker container inspect embed-certs-919237 --format={{.State.Status}}
	I1027 19:41:01.367723  594803 kic.go:430] container "embed-certs-919237" state is running.
	I1027 19:41:01.368113  594803 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-919237
	I1027 19:41:01.390202  594803 profile.go:143] Saving config to /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/embed-certs-919237/config.json ...
	I1027 19:41:01.390514  594803 machine.go:93] provisionDockerMachine start ...
	I1027 19:41:01.390591  594803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-919237
	I1027 19:41:01.413027  594803 main.go:141] libmachine: Using SSH client type: native
	I1027 19:41:01.413398  594803 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33445 <nil> <nil>}
	I1027 19:41:01.413418  594803 main.go:141] libmachine: About to run SSH command:
	hostname
	I1027 19:41:01.414196  594803 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47452->127.0.0.1:33445: read: connection reset by peer
	I1027 19:41:04.563874  594803 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-919237
	
	I1027 19:41:04.563910  594803 ubuntu.go:182] provisioning hostname "embed-certs-919237"
	I1027 19:41:04.563984  594803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-919237
	I1027 19:41:04.585857  594803 main.go:141] libmachine: Using SSH client type: native
	I1027 19:41:04.586108  594803 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33445 <nil> <nil>}
	I1027 19:41:04.586127  594803 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-919237 && echo "embed-certs-919237" | sudo tee /etc/hostname
	I1027 19:41:04.745340  594803 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-919237
	
	I1027 19:41:04.745465  594803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-919237
	I1027 19:41:04.769321  594803 main.go:141] libmachine: Using SSH client type: native
	I1027 19:41:04.769548  594803 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33445 <nil> <nil>}
	I1027 19:41:04.769566  594803 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-919237' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-919237/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-919237' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1027 19:41:04.920012  594803 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1027 19:41:04.920046  594803 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21801-352833/.minikube CaCertPath:/home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21801-352833/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21801-352833/.minikube}
	I1027 19:41:04.920074  594803 ubuntu.go:190] setting up certificates
	I1027 19:41:04.920094  594803 provision.go:84] configureAuth start
	I1027 19:41:04.920183  594803 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-919237
	I1027 19:41:04.943841  594803 provision.go:143] copyHostCerts
	I1027 19:41:04.943927  594803 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-352833/.minikube/cert.pem, removing ...
	I1027 19:41:04.943948  594803 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-352833/.minikube/cert.pem
	I1027 19:41:04.944028  594803 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21801-352833/.minikube/cert.pem (1123 bytes)
	I1027 19:41:04.944239  594803 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-352833/.minikube/key.pem, removing ...
	I1027 19:41:04.944257  594803 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-352833/.minikube/key.pem
	I1027 19:41:04.944296  594803 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21801-352833/.minikube/key.pem (1679 bytes)
	I1027 19:41:04.944383  594803 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-352833/.minikube/ca.pem, removing ...
	I1027 19:41:04.944395  594803 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-352833/.minikube/ca.pem
	I1027 19:41:04.944423  594803 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21801-352833/.minikube/ca.pem (1078 bytes)
	I1027 19:41:04.944475  594803 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21801-352833/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca-key.pem org=jenkins.embed-certs-919237 san=[127.0.0.1 192.168.94.2 embed-certs-919237 localhost minikube]
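To see the SANs baked into the freshly generated server cert, a sketch using standard openssl flags; the path is the one from the log line above:

	# Print the Subject Alternative Names of the generated server cert
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/21801-352833/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'
	# expected to list: 127.0.0.1, 192.168.94.2, embed-certs-919237, localhost, minikube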
	I1027 19:41:05.155892  594803 provision.go:177] copyRemoteCerts
	I1027 19:41:05.155953  594803 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1027 19:41:05.156001  594803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-919237
	I1027 19:41:05.177871  594803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33445 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/embed-certs-919237/id_rsa Username:docker}
	I1027 19:41:05.283397  594803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1027 19:41:05.303860  594803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1027 19:41:05.323928  594803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1027 19:41:05.343816  594803 provision.go:87] duration metric: took 423.704232ms to configureAuth
	I1027 19:41:05.343849  594803 ubuntu.go:206] setting minikube options for container-runtime
	I1027 19:41:05.344062  594803 config.go:182] Loaded profile config "embed-certs-919237": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:41:05.344270  594803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-919237
	I1027 19:41:05.364828  594803 main.go:141] libmachine: Using SSH client type: native
	I1027 19:41:05.365067  594803 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33445 <nil> <nil>}
	I1027 19:41:05.365089  594803 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1027 19:41:05.683089  594803 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1027 19:41:05.683117  594803 machine.go:96] duration metric: took 4.292583564s to provisionDockerMachine
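The CRIO_MINIKUBE_OPTIONS drop-in written above only matters if crio.service actually sources it. A hedged check, assuming the kicbase image wires /etc/sysconfig/crio.minikube in via an EnvironmentFile= directive:

	# Confirm the drop-in exists and that the unit references it
	cat /etc/sysconfig/crio.minikube
	systemctl cat crio | grep -n 'EnvironmentFile\|crio.minikube'
	systemctl is-active crio    # should print: active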
	I1027 19:41:05.683160  594803 start.go:293] postStartSetup for "embed-certs-919237" (driver="docker")
	I1027 19:41:05.683178  594803 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1027 19:41:05.683251  594803 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1027 19:41:05.683341  594803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-919237
	I1027 19:41:05.704409  594803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33445 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/embed-certs-919237/id_rsa Username:docker}
	I1027 19:41:05.808620  594803 ssh_runner.go:195] Run: cat /etc/os-release
	I1027 19:41:05.812844  594803 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1027 19:41:05.812879  594803 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1027 19:41:05.812891  594803 filesync.go:126] Scanning /home/jenkins/minikube-integration/21801-352833/.minikube/addons for local assets ...
	I1027 19:41:05.812957  594803 filesync.go:126] Scanning /home/jenkins/minikube-integration/21801-352833/.minikube/files for local assets ...
	I1027 19:41:05.813078  594803 filesync.go:149] local asset: /home/jenkins/minikube-integration/21801-352833/.minikube/files/etc/ssl/certs/3564152.pem -> 3564152.pem in /etc/ssl/certs
	I1027 19:41:05.813222  594803 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1027 19:41:01.544316  565798 logs.go:123] Gathering logs for kubelet ...
	I1027 19:41:01.544346  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 19:41:01.659317  565798 logs.go:123] Gathering logs for dmesg ...
	I1027 19:41:01.659359  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 19:41:01.686121  565798 logs.go:123] Gathering logs for describe nodes ...
	I1027 19:41:01.686169  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 19:41:01.747842  565798 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 19:41:01.747864  565798 logs.go:123] Gathering logs for kube-apiserver [f34fc0e2944cfceb17dc29d80d9459a84377ffb9e427b9efe6e57ccc03a358c8] ...
	I1027 19:41:01.747878  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f34fc0e2944cfceb17dc29d80d9459a84377ffb9e427b9efe6e57ccc03a358c8"
	I1027 19:41:01.793564  565798 logs.go:123] Gathering logs for kube-scheduler [15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8] ...
	I1027 19:41:01.793605  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8"
	I1027 19:41:01.845488  565798 logs.go:123] Gathering logs for kube-controller-manager [38a297db55d8c45b09d97e05e26cfe590841ea63fa7ee874cf818f8ae5fcff77] ...
	I1027 19:41:01.845527  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 38a297db55d8c45b09d97e05e26cfe590841ea63fa7ee874cf818f8ae5fcff77"
	I1027 19:41:04.376444  565798 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1027 19:41:04.376990  565798 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1027 19:41:04.377046  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 19:41:04.377099  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 19:41:04.406829  565798 cri.go:89] found id: "f34fc0e2944cfceb17dc29d80d9459a84377ffb9e427b9efe6e57ccc03a358c8"
	I1027 19:41:04.406851  565798 cri.go:89] found id: ""
	I1027 19:41:04.406859  565798 logs.go:282] 1 containers: [f34fc0e2944cfceb17dc29d80d9459a84377ffb9e427b9efe6e57ccc03a358c8]
	I1027 19:41:04.406918  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:41:04.411348  565798 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 19:41:04.411426  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 19:41:04.443060  565798 cri.go:89] found id: ""
	I1027 19:41:04.443094  565798 logs.go:282] 0 containers: []
	W1027 19:41:04.443105  565798 logs.go:284] No container was found matching "etcd"
	I1027 19:41:04.443113  565798 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 19:41:04.443223  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 19:41:04.475252  565798 cri.go:89] found id: ""
	I1027 19:41:04.475280  565798 logs.go:282] 0 containers: []
	W1027 19:41:04.475288  565798 logs.go:284] No container was found matching "coredns"
	I1027 19:41:04.475295  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 19:41:04.475358  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 19:41:04.506592  565798 cri.go:89] found id: "15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8"
	I1027 19:41:04.506613  565798 cri.go:89] found id: ""
	I1027 19:41:04.506622  565798 logs.go:282] 1 containers: [15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8]
	I1027 19:41:04.506674  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:41:04.511168  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 19:41:04.511243  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 19:41:04.541392  565798 cri.go:89] found id: ""
	I1027 19:41:04.541418  565798 logs.go:282] 0 containers: []
	W1027 19:41:04.541425  565798 logs.go:284] No container was found matching "kube-proxy"
	I1027 19:41:04.541432  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 19:41:04.541484  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 19:41:04.572329  565798 cri.go:89] found id: "38a297db55d8c45b09d97e05e26cfe590841ea63fa7ee874cf818f8ae5fcff77"
	I1027 19:41:04.572361  565798 cri.go:89] found id: ""
	I1027 19:41:04.572370  565798 logs.go:282] 1 containers: [38a297db55d8c45b09d97e05e26cfe590841ea63fa7ee874cf818f8ae5fcff77]
	I1027 19:41:04.572429  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:41:04.577195  565798 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 19:41:04.577270  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 19:41:04.608128  565798 cri.go:89] found id: ""
	I1027 19:41:04.608182  565798 logs.go:282] 0 containers: []
	W1027 19:41:04.608192  565798 logs.go:284] No container was found matching "kindnet"
	I1027 19:41:04.608199  565798 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 19:41:04.608266  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 19:41:04.638970  565798 cri.go:89] found id: ""
	I1027 19:41:04.639004  565798 logs.go:282] 0 containers: []
	W1027 19:41:04.639017  565798 logs.go:284] No container was found matching "storage-provisioner"
	I1027 19:41:04.639029  565798 logs.go:123] Gathering logs for kube-apiserver [f34fc0e2944cfceb17dc29d80d9459a84377ffb9e427b9efe6e57ccc03a358c8] ...
	I1027 19:41:04.639047  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f34fc0e2944cfceb17dc29d80d9459a84377ffb9e427b9efe6e57ccc03a358c8"
	I1027 19:41:04.676026  565798 logs.go:123] Gathering logs for kube-scheduler [15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8] ...
	I1027 19:41:04.676066  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8"
	I1027 19:41:04.729477  565798 logs.go:123] Gathering logs for kube-controller-manager [38a297db55d8c45b09d97e05e26cfe590841ea63fa7ee874cf818f8ae5fcff77] ...
	I1027 19:41:04.729522  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 38a297db55d8c45b09d97e05e26cfe590841ea63fa7ee874cf818f8ae5fcff77"
	I1027 19:41:04.763334  565798 logs.go:123] Gathering logs for CRI-O ...
	I1027 19:41:04.763366  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 19:41:04.814559  565798 logs.go:123] Gathering logs for container status ...
	I1027 19:41:04.814597  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 19:41:04.850968  565798 logs.go:123] Gathering logs for kubelet ...
	I1027 19:41:04.851011  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 19:41:04.944394  565798 logs.go:123] Gathering logs for dmesg ...
	I1027 19:41:04.944431  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 19:41:04.966811  565798 logs.go:123] Gathering logs for describe nodes ...
	I1027 19:41:04.966851  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 19:41:05.028358  565798 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
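Both describe-nodes failures above trace back to the apiserver not listening yet. A sketch for confirming that from the node, using only the endpoint shown in the log:

	# Is anything serving on 8443 yet?
	curl -sk --max-time 2 https://localhost:8443/healthz ; echo
	sudo ss -ltnp | grep ':8443' || echo "apiserver not listening yet"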
	I1027 19:41:05.821887  594803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/files/etc/ssl/certs/3564152.pem --> /etc/ssl/certs/3564152.pem (1708 bytes)
	I1027 19:41:05.841205  594803 start.go:296] duration metric: took 158.022167ms for postStartSetup
	I1027 19:41:05.841329  594803 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 19:41:05.841428  594803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-919237
	I1027 19:41:05.862221  594803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33445 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/embed-certs-919237/id_rsa Username:docker}
	I1027 19:41:05.962951  594803 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1027 19:41:05.968053  594803 fix.go:56] duration metric: took 4.924088468s for fixHost
	I1027 19:41:05.968084  594803 start.go:83] releasing machines lock for "embed-certs-919237", held for 4.924145002s
	I1027 19:41:05.968196  594803 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-919237
	I1027 19:41:05.987613  594803 ssh_runner.go:195] Run: cat /version.json
	I1027 19:41:05.987669  594803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-919237
	I1027 19:41:05.987702  594803 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1027 19:41:05.987789  594803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-919237
	I1027 19:41:06.007445  594803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33445 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/embed-certs-919237/id_rsa Username:docker}
	I1027 19:41:06.008274  594803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33445 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/embed-certs-919237/id_rsa Username:docker}
	I1027 19:41:06.171092  594803 ssh_runner.go:195] Run: systemctl --version
	I1027 19:41:06.179869  594803 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1027 19:41:06.219933  594803 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1027 19:41:06.225954  594803 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1027 19:41:06.226044  594803 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1027 19:41:06.236901  594803 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1027 19:41:06.236933  594803 start.go:495] detecting cgroup driver to use...
	I1027 19:41:06.236974  594803 detect.go:190] detected "systemd" cgroup driver on host os
	I1027 19:41:06.237038  594803 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1027 19:41:06.256171  594803 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1027 19:41:06.272267  594803 docker.go:218] disabling cri-docker service (if available) ...
	I1027 19:41:06.272335  594803 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1027 19:41:06.289493  594803 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1027 19:41:06.303711  594803 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1027 19:41:06.395451  594803 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1027 19:41:06.478021  594803 docker.go:234] disabling docker service ...
	I1027 19:41:06.478097  594803 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1027 19:41:06.493521  594803 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1027 19:41:06.507490  594803 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1027 19:41:06.591513  594803 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1027 19:41:06.682906  594803 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1027 19:41:06.696885  594803 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1027 19:41:06.713250  594803 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1027 19:41:06.713378  594803 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:41:06.723697  594803 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1027 19:41:06.723794  594803 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:41:06.734257  594803 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:41:06.744505  594803 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:41:06.754791  594803 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1027 19:41:06.764454  594803 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:41:06.774849  594803 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:41:06.784515  594803 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
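Taken together, the sed edits above converge on a small CRI-O drop-in. A sketch of the fragment they should leave behind, reconstructed from the commands rather than captured from the node:

	cat /etc/crio/crio.conf.d/02-crio.conf
	# expected fragment after the edits above:
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "systemd"
	#   conmon_cgroup = "pod"
	#   default_sysctls = [
	#     "net.ipv4.ip_unprivileged_port_start=0",
	#   ]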
	I1027 19:41:06.794832  594803 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1027 19:41:06.803521  594803 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1027 19:41:06.812405  594803 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 19:41:06.901080  594803 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1027 19:41:07.023003  594803 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1027 19:41:07.023077  594803 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1027 19:41:07.027729  594803 start.go:563] Will wait 60s for crictl version
	I1027 19:41:07.027821  594803 ssh_runner.go:195] Run: which crictl
	I1027 19:41:07.032087  594803 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1027 19:41:07.060453  594803 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1027 19:41:07.060549  594803 ssh_runner.go:195] Run: crio --version
	I1027 19:41:07.090930  594803 ssh_runner.go:195] Run: crio --version
	I1027 19:41:07.122696  594803 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1027 19:41:07.124057  594803 cli_runner.go:164] Run: docker network inspect embed-certs-919237 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 19:41:07.144121  594803 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1027 19:41:07.148817  594803 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 19:41:07.160514  594803 kubeadm.go:883] updating cluster {Name:embed-certs-919237 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-919237 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1027 19:41:07.160677  594803 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 19:41:07.160758  594803 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 19:41:07.197268  594803 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 19:41:07.197294  594803 crio.go:433] Images already preloaded, skipping extraction
	I1027 19:41:07.197359  594803 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 19:41:07.224730  594803 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 19:41:07.224756  594803 cache_images.go:85] Images are preloaded, skipping loading
	I1027 19:41:07.224766  594803 kubeadm.go:934] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1027 19:41:07.224884  594803 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-919237 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-919237 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
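The unit fragment above is installed as a systemd drop-in a few lines below (10-kubeadm.conf). A sketch for inspecting the merged unit once it lands; only the unit name is assumed:

	# Show kubelet.service plus all drop-ins, including 10-kubeadm.conf
	systemctl cat kubelet
	# The empty ExecStart= line above clears the packaged default; the
	# following ExecStart= with the kubeadm flags is the one that runs.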
	I1027 19:41:07.224966  594803 ssh_runner.go:195] Run: crio config
	I1027 19:41:07.273364  594803 cni.go:84] Creating CNI manager for ""
	I1027 19:41:07.273386  594803 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 19:41:07.273406  594803 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1027 19:41:07.273446  594803 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-919237 NodeName:embed-certs-919237 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1027 19:41:07.273615  594803 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-919237"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
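The rendered config above can be checked offline before kubeadm consumes it. A sketch, assuming the kubeadm binary path and the .new filename that appear later in the log, and hedged on `config validate` being available in this kubeadm release (it was added around v1.26):

	# Offline validation of the generated kubeadm config
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new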
	
	I1027 19:41:07.273713  594803 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1027 19:41:07.283551  594803 binaries.go:44] Found k8s binaries, skipping transfer
	I1027 19:41:07.283671  594803 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1027 19:41:07.292711  594803 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1027 19:41:07.307484  594803 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1027 19:41:07.321800  594803 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1027 19:41:07.335251  594803 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1027 19:41:07.339362  594803 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 19:41:07.350244  594803 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 19:41:07.434349  594803 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 19:41:07.464970  594803 certs.go:69] Setting up /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/embed-certs-919237 for IP: 192.168.94.2
	I1027 19:41:07.464995  594803 certs.go:195] generating shared ca certs ...
	I1027 19:41:07.465020  594803 certs.go:227] acquiring lock for ca certs: {Name:mk4bdbca32068f6f817fc35fdc496e961dc3e0d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:41:07.465244  594803 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21801-352833/.minikube/ca.key
	I1027 19:41:07.465292  594803 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21801-352833/.minikube/proxy-client-ca.key
	I1027 19:41:07.465304  594803 certs.go:257] generating profile certs ...
	I1027 19:41:07.465403  594803 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/embed-certs-919237/client.key
	I1027 19:41:07.465450  594803 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/embed-certs-919237/apiserver.key.3faa2aa5
	I1027 19:41:07.465488  594803 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/embed-certs-919237/proxy-client.key
	I1027 19:41:07.465591  594803 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/356415.pem (1338 bytes)
	W1027 19:41:07.465626  594803 certs.go:480] ignoring /home/jenkins/minikube-integration/21801-352833/.minikube/certs/356415_empty.pem, impossibly tiny 0 bytes
	I1027 19:41:07.465636  594803 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca-key.pem (1675 bytes)
	I1027 19:41:07.465656  594803 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca.pem (1078 bytes)
	I1027 19:41:07.465680  594803 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/cert.pem (1123 bytes)
	I1027 19:41:07.465706  594803 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/key.pem (1679 bytes)
	I1027 19:41:07.465755  594803 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-352833/.minikube/files/etc/ssl/certs/3564152.pem (1708 bytes)
	I1027 19:41:07.466444  594803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1027 19:41:07.487514  594803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1027 19:41:07.509307  594803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1027 19:41:07.532458  594803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1027 19:41:07.564071  594803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/embed-certs-919237/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1027 19:41:07.586349  594803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/embed-certs-919237/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1027 19:41:07.606465  594803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/embed-certs-919237/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1027 19:41:07.627059  594803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/embed-certs-919237/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1027 19:41:07.648181  594803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1027 19:41:07.672545  594803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/certs/356415.pem --> /usr/share/ca-certificates/356415.pem (1338 bytes)
	I1027 19:41:07.693483  594803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/files/etc/ssl/certs/3564152.pem --> /usr/share/ca-certificates/3564152.pem (1708 bytes)
	I1027 19:41:07.715889  594803 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1027 19:41:07.732429  594803 ssh_runner.go:195] Run: openssl version
	I1027 19:41:07.740863  594803 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/356415.pem && ln -fs /usr/share/ca-certificates/356415.pem /etc/ssl/certs/356415.pem"
	I1027 19:41:07.751652  594803 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/356415.pem
	I1027 19:41:07.756427  594803 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 27 19:02 /usr/share/ca-certificates/356415.pem
	I1027 19:41:07.756508  594803 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/356415.pem
	I1027 19:41:07.796822  594803 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/356415.pem /etc/ssl/certs/51391683.0"
	I1027 19:41:07.807165  594803 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3564152.pem && ln -fs /usr/share/ca-certificates/3564152.pem /etc/ssl/certs/3564152.pem"
	I1027 19:41:07.817111  594803 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3564152.pem
	I1027 19:41:07.821699  594803 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 27 19:02 /usr/share/ca-certificates/3564152.pem
	I1027 19:41:07.821774  594803 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3564152.pem
	I1027 19:41:07.862104  594803 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3564152.pem /etc/ssl/certs/3ec20f2e.0"
	I1027 19:41:07.872082  594803 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1027 19:41:07.882661  594803 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1027 19:41:07.888248  594803 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 27 18:56 /usr/share/ca-certificates/minikubeCA.pem
	I1027 19:41:07.888325  594803 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1027 19:41:07.927092  594803 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
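The .0 filenames in the three symlink commands above are OpenSSL subject hashes, which is why each cert is hashed first. A sketch showing the correspondence for the minikubeCA case; the hash value is read off the symlink name in the log:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# prints the subject hash, b5213941 here, hence /etc/ssl/certs/b5213941.0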
	I1027 19:41:07.936711  594803 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1027 19:41:07.941329  594803 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1027 19:41:07.982744  594803 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1027 19:41:08.036882  594803 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1027 19:41:08.086334  594803 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1027 19:41:08.146052  594803 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1027 19:41:08.191698  594803 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
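The -checkend 86400 runs above all ask the same question: will the cert still be valid 24 hours (86400 s) from now? Exit status 0 means yes. A sketch on one of the certs from the log:

	# -checkend N: exit 0 if the cert will still be valid N seconds from now
	sudo openssl x509 -noout -checkend 86400 \
	  -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
	  && echo "valid for at least 24h"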
	I1027 19:41:08.228527  594803 kubeadm.go:400] StartCluster: {Name:embed-certs-919237 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-919237 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 19:41:08.228643  594803 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 19:41:08.228710  594803 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 19:41:08.261293  594803 cri.go:89] found id: "d5a5c65a74b4b0bac782941ddf5cfc5e1c95eb29dbc563a89bc74143a3d75be8"
	I1027 19:41:08.261319  594803 cri.go:89] found id: "f0dcb6f33c4a16c8aabf1c9522c219dfe57ce0438d6eedb8d11b3bbed06bf220"
	I1027 19:41:08.261324  594803 cri.go:89] found id: "d17bd312e4c2b6e68ce5e1c0006ad10d3d74b77c3bc3e8570e4526763c6914a9"
	I1027 19:41:08.261327  594803 cri.go:89] found id: "31682e1eceede1979fd31aa2e96a71541d29f7d036de012b0c0a406025482670"
	I1027 19:41:08.261344  594803 cri.go:89] found id: ""
	I1027 19:41:08.261398  594803 ssh_runner.go:195] Run: sudo runc list -f json
	W1027 19:41:08.275475  594803 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T19:41:08Z" level=error msg="open /run/runc: no such file or directory"
	I1027 19:41:08.275556  594803 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1027 19:41:08.285008  594803 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1027 19:41:08.285028  594803 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1027 19:41:08.285080  594803 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1027 19:41:08.292877  594803 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1027 19:41:08.293734  594803 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-919237" does not appear in /home/jenkins/minikube-integration/21801-352833/kubeconfig
	I1027 19:41:08.294188  594803 kubeconfig.go:62] /home/jenkins/minikube-integration/21801-352833/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-919237" cluster setting kubeconfig missing "embed-certs-919237" context setting]
	I1027 19:41:08.294867  594803 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-352833/kubeconfig: {Name:mk24cbe512a6907c874f3fb7a85390a8f9fd2b31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:41:08.296560  594803 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1027 19:41:08.304858  594803 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.94.2
	I1027 19:41:08.304893  594803 kubeadm.go:601] duration metric: took 19.857495ms to restartPrimaryControlPlane
	I1027 19:41:08.304904  594803 kubeadm.go:402] duration metric: took 76.392154ms to StartCluster
	I1027 19:41:08.304921  594803 settings.go:142] acquiring lock: {Name:mk8304c2106bf310642e0949fc0266ccb50f2f95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:41:08.304992  594803 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21801-352833/kubeconfig
	I1027 19:41:08.306608  594803 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-352833/kubeconfig: {Name:mk24cbe512a6907c874f3fb7a85390a8f9fd2b31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:41:08.306895  594803 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 19:41:08.306966  594803 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1027 19:41:08.307088  594803 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-919237"
	I1027 19:41:08.307112  594803 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-919237"
	W1027 19:41:08.307120  594803 addons.go:247] addon storage-provisioner should already be in state true
	I1027 19:41:08.307121  594803 addons.go:69] Setting dashboard=true in profile "embed-certs-919237"
	I1027 19:41:08.307180  594803 config.go:182] Loaded profile config "embed-certs-919237": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:41:08.307172  594803 addons.go:69] Setting default-storageclass=true in profile "embed-certs-919237"
	I1027 19:41:08.307206  594803 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-919237"
	I1027 19:41:08.307185  594803 host.go:66] Checking if "embed-certs-919237" exists ...
	I1027 19:41:08.307188  594803 addons.go:238] Setting addon dashboard=true in "embed-certs-919237"
	W1027 19:41:08.307376  594803 addons.go:247] addon dashboard should already be in state true
	I1027 19:41:08.307407  594803 host.go:66] Checking if "embed-certs-919237" exists ...
	I1027 19:41:08.307583  594803 cli_runner.go:164] Run: docker container inspect embed-certs-919237 --format={{.State.Status}}
	I1027 19:41:08.307745  594803 cli_runner.go:164] Run: docker container inspect embed-certs-919237 --format={{.State.Status}}
	I1027 19:41:08.307873  594803 cli_runner.go:164] Run: docker container inspect embed-certs-919237 --format={{.State.Status}}
	I1027 19:41:08.309349  594803 out.go:179] * Verifying Kubernetes components...
	I1027 19:41:08.310781  594803 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 19:41:08.336188  594803 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1027 19:41:08.336216  594803 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 19:41:08.336832  594803 addons.go:238] Setting addon default-storageclass=true in "embed-certs-919237"
	W1027 19:41:08.336855  594803 addons.go:247] addon default-storageclass should already be in state true
	I1027 19:41:08.336886  594803 host.go:66] Checking if "embed-certs-919237" exists ...
	I1027 19:41:08.337405  594803 cli_runner.go:164] Run: docker container inspect embed-certs-919237 --format={{.State.Status}}
	I1027 19:41:08.337895  594803 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 19:41:08.337913  594803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1027 19:41:08.337970  594803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-919237
	I1027 19:41:08.339243  594803 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1027 19:41:08.340863  594803 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1027 19:41:08.340892  594803 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1027 19:41:08.340959  594803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-919237
	I1027 19:41:08.371713  594803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33445 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/embed-certs-919237/id_rsa Username:docker}
	I1027 19:41:08.378869  594803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33445 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/embed-certs-919237/id_rsa Username:docker}
	I1027 19:41:08.379420  594803 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1027 19:41:08.379443  594803 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1027 19:41:08.379523  594803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-919237
	I1027 19:41:08.404654  594803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33445 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/embed-certs-919237/id_rsa Username:docker}
	I1027 19:41:08.459858  594803 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 19:41:08.474523  594803 node_ready.go:35] waiting up to 6m0s for node "embed-certs-919237" to be "Ready" ...
	I1027 19:41:08.494692  594803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 19:41:08.501377  594803 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1027 19:41:08.501402  594803 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1027 19:41:08.517164  594803 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1027 19:41:08.517189  594803 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1027 19:41:08.528162  594803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1027 19:41:08.536218  594803 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1027 19:41:08.536248  594803 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1027 19:41:08.555432  594803 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1027 19:41:08.555459  594803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1027 19:41:08.577695  594803 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1027 19:41:08.577726  594803 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1027 19:41:08.596623  594803 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1027 19:41:08.596657  594803 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1027 19:41:08.612731  594803 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1027 19:41:08.612763  594803 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1027 19:41:08.627030  594803 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1027 19:41:08.627060  594803 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1027 19:41:08.641348  594803 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1027 19:41:08.641379  594803 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1027 19:41:08.656654  594803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1027 19:41:09.985803  594803 node_ready.go:49] node "embed-certs-919237" is "Ready"
	I1027 19:41:09.985838  594803 node_ready.go:38] duration metric: took 1.511271197s for node "embed-certs-919237" to be "Ready" ...
	I1027 19:41:09.985856  594803 api_server.go:52] waiting for apiserver process to appear ...
	I1027 19:41:09.985916  594803 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 19:41:10.512525  594803 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.017790889s)
	I1027 19:41:10.512570  594803 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.984382968s)
	I1027 19:41:10.512737  594803 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.856029763s)
	I1027 19:41:10.512758  594803 api_server.go:72] duration metric: took 2.205827226s to wait for apiserver process to appear ...
	I1027 19:41:10.512770  594803 api_server.go:88] waiting for apiserver healthz status ...
	I1027 19:41:10.512790  594803 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1027 19:41:10.514667  594803 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-919237 addons enable metrics-server
	
	I1027 19:41:10.519068  594803 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1027 19:41:10.519098  594803 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1027 19:41:10.525420  594803 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1027 19:41:10.526779  594803 addons.go:514] duration metric: took 2.219821783s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
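	
	The two [-] postStartHooks above (rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes) are the usual transient startup races and flip to ok once the bootstrap controllers finish, which is why the healthz poll simply continues. A minimal way to watch them settle by hand, assuming kubeconfig access to this cluster (not part of the captured run):
	
		kubectl get --raw='/healthz?verbose'
		kubectl get --raw='/healthz/poststarthook/rbac/bootstrap-roles'
	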
	I1027 19:41:07.528527  565798 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1027 19:41:07.529038  565798 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1027 19:41:07.529097  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 19:41:07.529167  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 19:41:07.570906  565798 cri.go:89] found id: "f34fc0e2944cfceb17dc29d80d9459a84377ffb9e427b9efe6e57ccc03a358c8"
	I1027 19:41:07.570937  565798 cri.go:89] found id: ""
	I1027 19:41:07.570949  565798 logs.go:282] 1 containers: [f34fc0e2944cfceb17dc29d80d9459a84377ffb9e427b9efe6e57ccc03a358c8]
	I1027 19:41:07.571019  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:41:07.575599  565798 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 19:41:07.575660  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 19:41:07.605990  565798 cri.go:89] found id: ""
	I1027 19:41:07.606014  565798 logs.go:282] 0 containers: []
	W1027 19:41:07.606023  565798 logs.go:284] No container was found matching "etcd"
	I1027 19:41:07.606028  565798 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 19:41:07.606087  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 19:41:07.638584  565798 cri.go:89] found id: ""
	I1027 19:41:07.638610  565798 logs.go:282] 0 containers: []
	W1027 19:41:07.638619  565798 logs.go:284] No container was found matching "coredns"
	I1027 19:41:07.638626  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 19:41:07.638673  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 19:41:07.670909  565798 cri.go:89] found id: "15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8"
	I1027 19:41:07.670935  565798 cri.go:89] found id: ""
	I1027 19:41:07.670946  565798 logs.go:282] 1 containers: [15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8]
	I1027 19:41:07.671012  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:41:07.676493  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 19:41:07.676572  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 19:41:07.707704  565798 cri.go:89] found id: ""
	I1027 19:41:07.707730  565798 logs.go:282] 0 containers: []
	W1027 19:41:07.707738  565798 logs.go:284] No container was found matching "kube-proxy"
	I1027 19:41:07.707744  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 19:41:07.707804  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 19:41:07.738631  565798 cri.go:89] found id: "38a297db55d8c45b09d97e05e26cfe590841ea63fa7ee874cf818f8ae5fcff77"
	I1027 19:41:07.738651  565798 cri.go:89] found id: ""
	I1027 19:41:07.738663  565798 logs.go:282] 1 containers: [38a297db55d8c45b09d97e05e26cfe590841ea63fa7ee874cf818f8ae5fcff77]
	I1027 19:41:07.738722  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:41:07.743367  565798 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 19:41:07.743451  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 19:41:07.775208  565798 cri.go:89] found id: ""
	I1027 19:41:07.775238  565798 logs.go:282] 0 containers: []
	W1027 19:41:07.775252  565798 logs.go:284] No container was found matching "kindnet"
	I1027 19:41:07.775261  565798 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 19:41:07.775339  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 19:41:07.805721  565798 cri.go:89] found id: ""
	I1027 19:41:07.805749  565798 logs.go:282] 0 containers: []
	W1027 19:41:07.805759  565798 logs.go:284] No container was found matching "storage-provisioner"
	I1027 19:41:07.805773  565798 logs.go:123] Gathering logs for dmesg ...
	I1027 19:41:07.805797  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 19:41:07.829611  565798 logs.go:123] Gathering logs for describe nodes ...
	I1027 19:41:07.829647  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 19:41:07.894281  565798 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 19:41:07.894316  565798 logs.go:123] Gathering logs for kube-apiserver [f34fc0e2944cfceb17dc29d80d9459a84377ffb9e427b9efe6e57ccc03a358c8] ...
	I1027 19:41:07.894338  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f34fc0e2944cfceb17dc29d80d9459a84377ffb9e427b9efe6e57ccc03a358c8"
	I1027 19:41:07.930602  565798 logs.go:123] Gathering logs for kube-scheduler [15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8] ...
	I1027 19:41:07.930636  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8"
	I1027 19:41:07.985189  565798 logs.go:123] Gathering logs for kube-controller-manager [38a297db55d8c45b09d97e05e26cfe590841ea63fa7ee874cf818f8ae5fcff77] ...
	I1027 19:41:07.985226  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 38a297db55d8c45b09d97e05e26cfe590841ea63fa7ee874cf818f8ae5fcff77"
	I1027 19:41:08.023545  565798 logs.go:123] Gathering logs for CRI-O ...
	I1027 19:41:08.023578  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 19:41:08.093343  565798 logs.go:123] Gathering logs for container status ...
	I1027 19:41:08.093385  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 19:41:08.145553  565798 logs.go:123] Gathering logs for kubelet ...
	I1027 19:41:08.145592  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 19:41:10.748218  565798 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1027 19:41:10.748717  565798 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1027 19:41:10.748775  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 19:41:10.748830  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 19:41:10.778542  565798 cri.go:89] found id: "f34fc0e2944cfceb17dc29d80d9459a84377ffb9e427b9efe6e57ccc03a358c8"
	I1027 19:41:10.778563  565798 cri.go:89] found id: ""
	I1027 19:41:10.778572  565798 logs.go:282] 1 containers: [f34fc0e2944cfceb17dc29d80d9459a84377ffb9e427b9efe6e57ccc03a358c8]
	I1027 19:41:10.778626  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:41:10.782948  565798 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 19:41:10.783005  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 19:41:10.810590  565798 cri.go:89] found id: ""
	I1027 19:41:10.810619  565798 logs.go:282] 0 containers: []
	W1027 19:41:10.810631  565798 logs.go:284] No container was found matching "etcd"
	I1027 19:41:10.810642  565798 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 19:41:10.810705  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 19:41:10.841630  565798 cri.go:89] found id: ""
	I1027 19:41:10.841659  565798 logs.go:282] 0 containers: []
	W1027 19:41:10.841670  565798 logs.go:284] No container was found matching "coredns"
	I1027 19:41:10.841678  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 19:41:10.841747  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 19:41:10.881274  565798 cri.go:89] found id: "15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8"
	I1027 19:41:10.881300  565798 cri.go:89] found id: ""
	I1027 19:41:10.881311  565798 logs.go:282] 1 containers: [15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8]
	I1027 19:41:10.881370  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:41:10.886646  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 19:41:10.886736  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 19:41:10.929911  565798 cri.go:89] found id: ""
	I1027 19:41:10.929943  565798 logs.go:282] 0 containers: []
	W1027 19:41:10.929954  565798 logs.go:284] No container was found matching "kube-proxy"
	I1027 19:41:10.929962  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 19:41:10.930024  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 19:41:10.968851  565798 cri.go:89] found id: "38a297db55d8c45b09d97e05e26cfe590841ea63fa7ee874cf818f8ae5fcff77"
	I1027 19:41:10.968878  565798 cri.go:89] found id: ""
	I1027 19:41:10.968888  565798 logs.go:282] 1 containers: [38a297db55d8c45b09d97e05e26cfe590841ea63fa7ee874cf818f8ae5fcff77]
	I1027 19:41:10.968948  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:41:10.974365  565798 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 19:41:10.974432  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 19:41:11.004971  565798 cri.go:89] found id: ""
	I1027 19:41:11.004997  565798 logs.go:282] 0 containers: []
	W1027 19:41:11.005005  565798 logs.go:284] No container was found matching "kindnet"
	I1027 19:41:11.005011  565798 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 19:41:11.005072  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 19:41:11.036769  565798 cri.go:89] found id: ""
	I1027 19:41:11.036802  565798 logs.go:282] 0 containers: []
	W1027 19:41:11.036814  565798 logs.go:284] No container was found matching "storage-provisioner"
	I1027 19:41:11.036827  565798 logs.go:123] Gathering logs for describe nodes ...
	I1027 19:41:11.036845  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 19:41:11.109616  565798 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 19:41:11.109640  565798 logs.go:123] Gathering logs for kube-apiserver [f34fc0e2944cfceb17dc29d80d9459a84377ffb9e427b9efe6e57ccc03a358c8] ...
	I1027 19:41:11.109659  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f34fc0e2944cfceb17dc29d80d9459a84377ffb9e427b9efe6e57ccc03a358c8"
	I1027 19:41:11.149761  565798 logs.go:123] Gathering logs for kube-scheduler [15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8] ...
	I1027 19:41:11.149808  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8"
	I1027 19:41:11.209309  565798 logs.go:123] Gathering logs for kube-controller-manager [38a297db55d8c45b09d97e05e26cfe590841ea63fa7ee874cf818f8ae5fcff77] ...
	I1027 19:41:11.209355  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 38a297db55d8c45b09d97e05e26cfe590841ea63fa7ee874cf818f8ae5fcff77"
	I1027 19:41:11.238293  565798 logs.go:123] Gathering logs for CRI-O ...
	I1027 19:41:11.238330  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 19:41:11.290773  565798 logs.go:123] Gathering logs for container status ...
	I1027 19:41:11.290819  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 19:41:11.324791  565798 logs.go:123] Gathering logs for kubelet ...
	I1027 19:41:11.324821  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 19:41:11.416408  565798 logs.go:123] Gathering logs for dmesg ...
	I1027 19:41:11.416449  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
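	
	Every healthz probe in this stream fails with connection refused even though a kube-apiserver container ID is found, so the next useful check is whether that container is actually running and bound to 8443. A rough sketch of the manual equivalent, assuming a shell on the node:
	
		sudo crictl ps -a --name=kube-apiserver   # Running, or Exited and restarting?
		sudo ss -ltnp | grep 8443                 # is anything listening on the port?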
	
	
	==> CRI-O <==
	Oct 27 19:41:02 no-preload-095885 crio[768]: time="2025-10-27T19:41:02.770642838Z" level=info msg="Started container" PID=2877 containerID=5fbaa6c952f3aaad752abd4e5894b107e4472dca8143d51cb1d0e0c647da7f04 description=kube-system/coredns-66bc5c9577-gwqvg/coredns id=f756b8f6-9e24-44db-9cd2-6a82010ce463 name=/runtime.v1.RuntimeService/StartContainer sandboxID=414613929a6d80a6e2b260e0a5457af87396ddf207a351612c9bfa7553b4d5f9
	Oct 27 19:41:02 no-preload-095885 crio[768]: time="2025-10-27T19:41:02.771030787Z" level=info msg="Started container" PID=2876 containerID=dbccfaf79ec70e170fb96e4a63d52b098b52bfd3f97c48253850def4e8e07291 description=kube-system/storage-provisioner/storage-provisioner id=ee650041-01c5-4956-a7e3-a715cc593f36 name=/runtime.v1.RuntimeService/StartContainer sandboxID=26b1061af4154313cd01b72e11832874312bc6e20bfb17192a0ad7978d97d247
	Oct 27 19:41:05 no-preload-095885 crio[768]: time="2025-10-27T19:41:05.583646756Z" level=info msg="Running pod sandbox: default/busybox/POD" id=9fd493d0-700c-4e57-a0b8-333b8c557257 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 27 19:41:05 no-preload-095885 crio[768]: time="2025-10-27T19:41:05.58383436Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:41:05 no-preload-095885 crio[768]: time="2025-10-27T19:41:05.592112Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:8a00a2d0497b519468a9975fb5d8d05ff423d94d5afe69516177595767334c85 UID:0b9552df-1e78-4109-bc0e-2632454d1b25 NetNS:/var/run/netns/ceb4ffb7-7b4d-44a6-bc41-a09c9a119db4 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000682020}] Aliases:map[]}"
	Oct 27 19:41:05 no-preload-095885 crio[768]: time="2025-10-27T19:41:05.592167572Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 27 19:41:05 no-preload-095885 crio[768]: time="2025-10-27T19:41:05.604152622Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:8a00a2d0497b519468a9975fb5d8d05ff423d94d5afe69516177595767334c85 UID:0b9552df-1e78-4109-bc0e-2632454d1b25 NetNS:/var/run/netns/ceb4ffb7-7b4d-44a6-bc41-a09c9a119db4 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000682020}] Aliases:map[]}"
	Oct 27 19:41:05 no-preload-095885 crio[768]: time="2025-10-27T19:41:05.60434282Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 27 19:41:05 no-preload-095885 crio[768]: time="2025-10-27T19:41:05.605277668Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 27 19:41:05 no-preload-095885 crio[768]: time="2025-10-27T19:41:05.606153218Z" level=info msg="Ran pod sandbox 8a00a2d0497b519468a9975fb5d8d05ff423d94d5afe69516177595767334c85 with infra container: default/busybox/POD" id=9fd493d0-700c-4e57-a0b8-333b8c557257 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 27 19:41:05 no-preload-095885 crio[768]: time="2025-10-27T19:41:05.607481157Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=290283e7-d0c9-41da-92aa-9eda7fc5f9d2 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 19:41:05 no-preload-095885 crio[768]: time="2025-10-27T19:41:05.607609078Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=290283e7-d0c9-41da-92aa-9eda7fc5f9d2 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 19:41:05 no-preload-095885 crio[768]: time="2025-10-27T19:41:05.607659035Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=290283e7-d0c9-41da-92aa-9eda7fc5f9d2 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 19:41:05 no-preload-095885 crio[768]: time="2025-10-27T19:41:05.608239585Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=fa3da24f-f9e3-4d56-8e54-625ef2a1a7e9 name=/runtime.v1.ImageService/PullImage
	Oct 27 19:41:05 no-preload-095885 crio[768]: time="2025-10-27T19:41:05.609670855Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 27 19:41:06 no-preload-095885 crio[768]: time="2025-10-27T19:41:06.30863073Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=fa3da24f-f9e3-4d56-8e54-625ef2a1a7e9 name=/runtime.v1.ImageService/PullImage
	Oct 27 19:41:06 no-preload-095885 crio[768]: time="2025-10-27T19:41:06.309319435Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=a0696364-41d7-45f3-98c9-a7fc095476ab name=/runtime.v1.ImageService/ImageStatus
	Oct 27 19:41:06 no-preload-095885 crio[768]: time="2025-10-27T19:41:06.310835106Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=29b5b691-9127-436b-b3a0-d5b386c0c6cb name=/runtime.v1.ImageService/ImageStatus
	Oct 27 19:41:06 no-preload-095885 crio[768]: time="2025-10-27T19:41:06.315446475Z" level=info msg="Creating container: default/busybox/busybox" id=c3379af1-d1a2-454a-bbc5-60adf0badebf name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 19:41:06 no-preload-095885 crio[768]: time="2025-10-27T19:41:06.315603482Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:41:06 no-preload-095885 crio[768]: time="2025-10-27T19:41:06.319999524Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:41:06 no-preload-095885 crio[768]: time="2025-10-27T19:41:06.320562857Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:41:06 no-preload-095885 crio[768]: time="2025-10-27T19:41:06.364461156Z" level=info msg="Created container 4a6b4b5b146950de3bf0329cefd9eedd25a22b8e0d8f48f794a5ea87c0099cb2: default/busybox/busybox" id=c3379af1-d1a2-454a-bbc5-60adf0badebf name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 19:41:06 no-preload-095885 crio[768]: time="2025-10-27T19:41:06.365199638Z" level=info msg="Starting container: 4a6b4b5b146950de3bf0329cefd9eedd25a22b8e0d8f48f794a5ea87c0099cb2" id=930a886c-d852-49b0-9e26-ec99aff54a3f name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 19:41:06 no-preload-095885 crio[768]: time="2025-10-27T19:41:06.367296849Z" level=info msg="Started container" PID=2954 containerID=4a6b4b5b146950de3bf0329cefd9eedd25a22b8e0d8f48f794a5ea87c0099cb2 description=default/busybox/busybox id=930a886c-d852-49b0-9e26-ec99aff54a3f name=/runtime.v1.RuntimeService/StartContainer sandboxID=8a00a2d0497b519468a9975fb5d8d05ff423d94d5afe69516177595767334c85
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	4a6b4b5b14695       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   8a00a2d0497b5       busybox                                     default
	5fbaa6c952f3a       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      12 seconds ago      Running             coredns                   0                   414613929a6d8       coredns-66bc5c9577-gwqvg                    kube-system
	dbccfaf79ec70       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago      Running             storage-provisioner       0                   26b1061af4154       storage-provisioner                         kube-system
	fd88dde016233       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    23 seconds ago      Running             kindnet-cni               0                   50b1e49b07244       kindnet-8lbz5                               kube-system
	2a14d34ed7c40       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      25 seconds ago      Running             kube-proxy                0                   95a5efd4a95ad       kube-proxy-wz64m                            kube-system
	b7e9aa3e22aee       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      35 seconds ago      Running             kube-controller-manager   0                   7a3460c2c1919       kube-controller-manager-no-preload-095885   kube-system
	fbbb83077090f       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      35 seconds ago      Running             kube-scheduler            0                   45764b6b09e9c       kube-scheduler-no-preload-095885            kube-system
	3445345d96d99       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      35 seconds ago      Running             kube-apiserver            0                   45aa614d47f79       kube-apiserver-no-preload-095885            kube-system
	9945086fb9203       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      35 seconds ago      Running             etcd                      0                   cd49bb8022339       etcd-no-preload-095885                      kube-system
	
	
	==> coredns [5fbaa6c952f3aaad752abd4e5894b107e4472dca8143d51cb1d0e0c647da7f04] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:60591 - 59926 "HINFO IN 7313567044394697112.6750741375742181530. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.064458272s
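	
	The NXDOMAIN for the long random name appears to be CoreDNS's own loop-detection probe rather than a client failure. A quick hand check that the server answers real queries, assuming the kube-dns ClusterIP allocated later in this report (10.96.0.10) is reachable from where you run it:
	
		dig @10.96.0.10 kubernetes.default.svc.cluster.local +short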
	
	
	==> describe nodes <==
	Name:               no-preload-095885
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-095885
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a09c31caecd3126cc8337ca95e52380a56f5a0f
	                    minikube.k8s.io/name=no-preload-095885
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_27T19_40_45_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Oct 2025 19:40:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-095885
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Oct 2025 19:41:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Oct 2025 19:41:15 +0000   Mon, 27 Oct 2025 19:40:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Oct 2025 19:41:15 +0000   Mon, 27 Oct 2025 19:40:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Oct 2025 19:41:15 +0000   Mon, 27 Oct 2025 19:40:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Oct 2025 19:41:15 +0000   Mon, 27 Oct 2025 19:41:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-095885
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                71cd584e-1032-4c4b-a2da-7d2af7ed7a93
	  Boot ID:                    811bd29c-e64e-4acc-9427-bab1f7caed93
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-gwqvg                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     26s
	  kube-system                 etcd-no-preload-095885                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         31s
	  kube-system                 kindnet-8lbz5                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-no-preload-095885             250m (3%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-no-preload-095885    200m (2%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-proxy-wz64m                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-no-preload-095885             100m (1%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 25s   kube-proxy       
	  Normal  Starting                 31s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  31s   kubelet          Node no-preload-095885 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    31s   kubelet          Node no-preload-095885 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     31s   kubelet          Node no-preload-095885 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           27s   node-controller  Node no-preload-095885 event: Registered Node no-preload-095885 in Controller
	  Normal  NodeReady                13s   kubelet          Node no-preload-095885 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5a 23 52 43 9a ba 08 06
	[  +0.000398] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 12 50 95 0e df 53 08 06
	[Oct27 18:59] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +1.017295] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +1.023893] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +1.023882] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +1.023891] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +1.023851] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +2.047849] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +4.031592] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +8.319143] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[ +16.382183] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[Oct27 19:00] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
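	
	The repeating "martian source" entries record packets whose source address is impossible on the receiving interface (127.0.0.1 arriving on eth0, typically hairpinned traffic); the kernel only emits them when martian logging is enabled. A sketch of how to confirm the relevant sysctls on the host (not captured in this run):
	
		sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.all.log_martians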
	
	
	==> etcd [9945086fb920373fd22608a0e6ebafe97130eb3b519950fdbe04d59b2fbd48e1] <==
	{"level":"warn","ts":"2025-10-27T19:40:40.527079Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:40:40.539353Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:40:40.549056Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:40:40.560914Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:40:40.575522Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:40:40.585397Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:40:40.595504Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:40:40.609229Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:40:40.618984Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:40:40.630716Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:40:40.642177Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:40:40.658558Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:40:40.671252Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:40:40.684468Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:40:40.698934Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:40:40.709678Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39920","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:40:40.722122Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:40:40.735116Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:40:40.748706Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39998","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:40:40.759375Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40000","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:40:40.771411Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:40:40.787444Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:40:40.797599Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:40:40.808792Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:40:40.887101Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40084","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 19:41:15 up  2:23,  0 user,  load average: 2.61, 3.03, 2.01
	Linux no-preload-095885 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [fd88dde016233c3212243392881590b8317ea875fcc485c5b9c030bfc82f87d8] <==
	I1027 19:40:52.032569       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1027 19:40:52.032947       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1027 19:40:52.033122       1 main.go:148] setting mtu 1500 for CNI 
	I1027 19:40:52.033157       1 main.go:178] kindnetd IP family: "ipv4"
	I1027 19:40:52.033184       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-27T19:40:52Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1027 19:40:52.237489       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1027 19:40:52.237528       1 controller.go:381] "Waiting for informer caches to sync"
	I1027 19:40:52.331472       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1027 19:40:52.332307       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1027 19:40:52.631735       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1027 19:40:52.631764       1 metrics.go:72] Registering metrics
	I1027 19:40:52.631815       1 controller.go:711] "Syncing nftables rules"
	I1027 19:41:02.240214       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1027 19:41:02.240285       1 main.go:301] handling current node
	I1027 19:41:12.241446       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1027 19:41:12.241483       1 main.go:301] handling current node
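	
	The "nri plugin exited" line just means the runtime is not exposing an NRI socket, so kindnet's optional NRI integration is unavailable; the controller carries on with its nftables sync, as the surrounding lines show. A one-line check on the node, as a sketch:
	
		test -S /var/run/nri/nri.sock && echo "NRI socket present" || echo "NRI socket absent"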
	
	
	==> kube-apiserver [3445345d96d9994caed4718dee9927134b1d87d6201c369af768e0a5cd83edbc] <==
	I1027 19:40:41.558416       1 autoregister_controller.go:144] Starting autoregister controller
	I1027 19:40:41.558424       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1027 19:40:41.558433       1 cache.go:39] Caches are synced for autoregister controller
	I1027 19:40:41.559792       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1027 19:40:41.583448       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 19:40:41.610032       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1027 19:40:41.611873       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 19:40:42.448482       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1027 19:40:42.454823       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1027 19:40:42.454844       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1027 19:40:43.128788       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1027 19:40:43.174984       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1027 19:40:43.253520       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1027 19:40:43.260546       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1027 19:40:43.261921       1 controller.go:667] quota admission added evaluator for: endpoints
	I1027 19:40:43.267220       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1027 19:40:43.469011       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1027 19:40:44.475842       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1027 19:40:44.494507       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1027 19:40:44.503240       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1027 19:40:49.222250       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 19:40:49.228428       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 19:40:49.268340       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1027 19:40:49.319116       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1027 19:41:13.399329       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8443->192.168.76.1:40366: use of closed network connection
	
	
	==> kube-controller-manager [b7e9aa3e22aee5f9899fff08dd379af50a772277a25d98ce08232c97741adb84] <==
	I1027 19:40:48.448519       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1027 19:40:48.460970       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1027 19:40:48.465413       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1027 19:40:48.465480       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 19:40:48.465497       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1027 19:40:48.465508       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1027 19:40:48.465843       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1027 19:40:48.466829       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1027 19:40:48.466855       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1027 19:40:48.466853       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1027 19:40:48.466886       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1027 19:40:48.466900       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1027 19:40:48.467060       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1027 19:40:48.467105       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1027 19:40:48.466888       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1027 19:40:48.467190       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1027 19:40:48.467298       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1027 19:40:48.467385       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1027 19:40:48.467822       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1027 19:40:48.467935       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1027 19:40:48.469770       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1027 19:40:48.472090       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 19:40:48.474190       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 19:40:48.494737       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 19:41:03.418187       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [2a14d34ed7c40b035e03c997c01345ac632510fa66eca5cb3060a1815d040b3d] <==
	I1027 19:40:49.796162       1 server_linux.go:53] "Using iptables proxy"
	I1027 19:40:49.887525       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1027 19:40:49.988347       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1027 19:40:49.988394       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1027 19:40:49.988500       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1027 19:40:50.011528       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1027 19:40:50.011579       1 server_linux.go:132] "Using iptables Proxier"
	I1027 19:40:50.018157       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1027 19:40:50.018686       1 server.go:527] "Version info" version="v1.34.1"
	I1027 19:40:50.018727       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 19:40:50.020191       1 config.go:106] "Starting endpoint slice config controller"
	I1027 19:40:50.020647       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1027 19:40:50.020284       1 config.go:309] "Starting node config controller"
	I1027 19:40:50.020335       1 config.go:200] "Starting service config controller"
	I1027 19:40:50.020684       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1027 19:40:50.020691       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1027 19:40:50.020356       1 config.go:403] "Starting serviceCIDR config controller"
	I1027 19:40:50.020693       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1027 19:40:50.020700       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1027 19:40:50.121702       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1027 19:40:50.121772       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1027 19:40:50.121797       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
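	
	The configuration warning above names its own remedy; in config-file terms the flag maps to the KubeProxyConfiguration field of the same name. A sketch of the snippet (field per the warning text; untested against this cluster):
	
		# kubeproxy.config.k8s.io/v1alpha1 KubeProxyConfiguration
		nodePortAddresses: ["primary"]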
	
	
	==> kube-scheduler [fbbb83077090fcf78736174bacfda3b782edae94eac43025b2f42e74ab02e7bb] <==
	E1027 19:40:41.720700       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1027 19:40:41.720709       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1027 19:40:41.721247       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1027 19:40:41.721581       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1027 19:40:41.722245       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1027 19:40:41.722326       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1027 19:40:41.722421       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1027 19:40:41.722826       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1027 19:40:41.723941       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1027 19:40:41.724049       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1027 19:40:41.724486       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1027 19:40:41.724953       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1027 19:40:41.725071       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1027 19:40:42.534403       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1027 19:40:42.555128       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1027 19:40:42.620265       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1027 19:40:42.644547       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1027 19:40:42.720966       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1027 19:40:42.736366       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1027 19:40:42.784717       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1027 19:40:42.789325       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1027 19:40:42.803315       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1027 19:40:42.821216       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1027 19:40:42.856458       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	I1027 19:40:44.611862       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 27 19:40:45 no-preload-095885 kubelet[2276]: I1027 19:40:45.406101    2276 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-no-preload-095885" podStartSLOduration=1.40607984 podStartE2EDuration="1.40607984s" podCreationTimestamp="2025-10-27 19:40:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 19:40:45.392460674 +0000 UTC m=+1.159102933" watchObservedRunningTime="2025-10-27 19:40:45.40607984 +0000 UTC m=+1.172722094"
	Oct 27 19:40:45 no-preload-095885 kubelet[2276]: I1027 19:40:45.419951    2276 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-no-preload-095885" podStartSLOduration=1.419931055 podStartE2EDuration="1.419931055s" podCreationTimestamp="2025-10-27 19:40:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 19:40:45.406249417 +0000 UTC m=+1.172891677" watchObservedRunningTime="2025-10-27 19:40:45.419931055 +0000 UTC m=+1.186573315"
	Oct 27 19:40:45 no-preload-095885 kubelet[2276]: I1027 19:40:45.420075    2276 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-no-preload-095885" podStartSLOduration=1.420068065 podStartE2EDuration="1.420068065s" podCreationTimestamp="2025-10-27 19:40:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 19:40:45.419785415 +0000 UTC m=+1.186427677" watchObservedRunningTime="2025-10-27 19:40:45.420068065 +0000 UTC m=+1.186710393"
	Oct 27 19:40:48 no-preload-095885 kubelet[2276]: I1027 19:40:48.474187    2276 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 27 19:40:48 no-preload-095885 kubelet[2276]: I1027 19:40:48.475011    2276 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 27 19:40:49 no-preload-095885 kubelet[2276]: I1027 19:40:49.282599    2276 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-no-preload-095885" podStartSLOduration=6.282563379 podStartE2EDuration="6.282563379s" podCreationTimestamp="2025-10-27 19:40:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 19:40:45.434034968 +0000 UTC m=+1.200677228" watchObservedRunningTime="2025-10-27 19:40:49.282563379 +0000 UTC m=+5.049205638"
	Oct 27 19:40:49 no-preload-095885 kubelet[2276]: I1027 19:40:49.362732    2276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/42b05fb3-87d3-412f-ac73-cb73a737aab1-lib-modules\") pod \"kindnet-8lbz5\" (UID: \"42b05fb3-87d3-412f-ac73-cb73a737aab1\") " pod="kube-system/kindnet-8lbz5"
	Oct 27 19:40:49 no-preload-095885 kubelet[2276]: I1027 19:40:49.362783    2276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/339cb07c-5319-4d8b-ab61-a6d377c2bc61-xtables-lock\") pod \"kube-proxy-wz64m\" (UID: \"339cb07c-5319-4d8b-ab61-a6d377c2bc61\") " pod="kube-system/kube-proxy-wz64m"
	Oct 27 19:40:49 no-preload-095885 kubelet[2276]: I1027 19:40:49.362804    2276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2l65\" (UniqueName: \"kubernetes.io/projected/339cb07c-5319-4d8b-ab61-a6d377c2bc61-kube-api-access-h2l65\") pod \"kube-proxy-wz64m\" (UID: \"339cb07c-5319-4d8b-ab61-a6d377c2bc61\") " pod="kube-system/kube-proxy-wz64m"
	Oct 27 19:40:49 no-preload-095885 kubelet[2276]: I1027 19:40:49.362826    2276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/42b05fb3-87d3-412f-ac73-cb73a737aab1-cni-cfg\") pod \"kindnet-8lbz5\" (UID: \"42b05fb3-87d3-412f-ac73-cb73a737aab1\") " pod="kube-system/kindnet-8lbz5"
	Oct 27 19:40:49 no-preload-095885 kubelet[2276]: I1027 19:40:49.362861    2276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-258bv\" (UniqueName: \"kubernetes.io/projected/42b05fb3-87d3-412f-ac73-cb73a737aab1-kube-api-access-258bv\") pod \"kindnet-8lbz5\" (UID: \"42b05fb3-87d3-412f-ac73-cb73a737aab1\") " pod="kube-system/kindnet-8lbz5"
	Oct 27 19:40:49 no-preload-095885 kubelet[2276]: I1027 19:40:49.362901    2276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/42b05fb3-87d3-412f-ac73-cb73a737aab1-xtables-lock\") pod \"kindnet-8lbz5\" (UID: \"42b05fb3-87d3-412f-ac73-cb73a737aab1\") " pod="kube-system/kindnet-8lbz5"
	Oct 27 19:40:49 no-preload-095885 kubelet[2276]: I1027 19:40:49.362926    2276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/339cb07c-5319-4d8b-ab61-a6d377c2bc61-kube-proxy\") pod \"kube-proxy-wz64m\" (UID: \"339cb07c-5319-4d8b-ab61-a6d377c2bc61\") " pod="kube-system/kube-proxy-wz64m"
	Oct 27 19:40:49 no-preload-095885 kubelet[2276]: I1027 19:40:49.362946    2276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/339cb07c-5319-4d8b-ab61-a6d377c2bc61-lib-modules\") pod \"kube-proxy-wz64m\" (UID: \"339cb07c-5319-4d8b-ab61-a6d377c2bc61\") " pod="kube-system/kube-proxy-wz64m"
	Oct 27 19:40:50 no-preload-095885 kubelet[2276]: I1027 19:40:50.388099    2276 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-wz64m" podStartSLOduration=1.3880762930000001 podStartE2EDuration="1.388076293s" podCreationTimestamp="2025-10-27 19:40:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 19:40:50.387873736 +0000 UTC m=+6.154515995" watchObservedRunningTime="2025-10-27 19:40:50.388076293 +0000 UTC m=+6.154718553"
	Oct 27 19:40:54 no-preload-095885 kubelet[2276]: I1027 19:40:54.093865    2276 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-8lbz5" podStartSLOduration=2.963024362 podStartE2EDuration="5.093842372s" podCreationTimestamp="2025-10-27 19:40:49 +0000 UTC" firstStartedPulling="2025-10-27 19:40:49.603352902 +0000 UTC m=+5.369995156" lastFinishedPulling="2025-10-27 19:40:51.73417092 +0000 UTC m=+7.500813166" observedRunningTime="2025-10-27 19:40:52.39670863 +0000 UTC m=+8.163350890" watchObservedRunningTime="2025-10-27 19:40:54.093842372 +0000 UTC m=+9.860484631"
	Oct 27 19:41:02 no-preload-095885 kubelet[2276]: I1027 19:41:02.384260    2276 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 27 19:41:02 no-preload-095885 kubelet[2276]: I1027 19:41:02.448971    2276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/e8283562-be98-444b-b591-a0239860e729-tmp\") pod \"storage-provisioner\" (UID: \"e8283562-be98-444b-b591-a0239860e729\") " pod="kube-system/storage-provisioner"
	Oct 27 19:41:02 no-preload-095885 kubelet[2276]: I1027 19:41:02.449035    2276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkkhc\" (UniqueName: \"kubernetes.io/projected/e8283562-be98-444b-b591-a0239860e729-kube-api-access-tkkhc\") pod \"storage-provisioner\" (UID: \"e8283562-be98-444b-b591-a0239860e729\") " pod="kube-system/storage-provisioner"
	Oct 27 19:41:02 no-preload-095885 kubelet[2276]: I1027 19:41:02.449067    2276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3bcd75c1-f42f-4252-b1fc-2bdab3c8373e-config-volume\") pod \"coredns-66bc5c9577-gwqvg\" (UID: \"3bcd75c1-f42f-4252-b1fc-2bdab3c8373e\") " pod="kube-system/coredns-66bc5c9577-gwqvg"
	Oct 27 19:41:02 no-preload-095885 kubelet[2276]: I1027 19:41:02.449098    2276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-224g6\" (UniqueName: \"kubernetes.io/projected/3bcd75c1-f42f-4252-b1fc-2bdab3c8373e-kube-api-access-224g6\") pod \"coredns-66bc5c9577-gwqvg\" (UID: \"3bcd75c1-f42f-4252-b1fc-2bdab3c8373e\") " pod="kube-system/coredns-66bc5c9577-gwqvg"
	Oct 27 19:41:03 no-preload-095885 kubelet[2276]: I1027 19:41:03.421851    2276 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.421826186 podStartE2EDuration="13.421826186s" podCreationTimestamp="2025-10-27 19:40:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 19:41:03.421552328 +0000 UTC m=+19.188194598" watchObservedRunningTime="2025-10-27 19:41:03.421826186 +0000 UTC m=+19.188468450"
	Oct 27 19:41:03 no-preload-095885 kubelet[2276]: I1027 19:41:03.434154    2276 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-gwqvg" podStartSLOduration=14.434113215 podStartE2EDuration="14.434113215s" podCreationTimestamp="2025-10-27 19:40:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 19:41:03.433878849 +0000 UTC m=+19.200521109" watchObservedRunningTime="2025-10-27 19:41:03.434113215 +0000 UTC m=+19.200755475"
	Oct 27 19:41:05 no-preload-095885 kubelet[2276]: I1027 19:41:05.368094    2276 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhhng\" (UniqueName: \"kubernetes.io/projected/0b9552df-1e78-4109-bc0e-2632454d1b25-kube-api-access-vhhng\") pod \"busybox\" (UID: \"0b9552df-1e78-4109-bc0e-2632454d1b25\") " pod="default/busybox"
	Oct 27 19:41:06 no-preload-095885 kubelet[2276]: I1027 19:41:06.433233    2276 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=0.730912216 podStartE2EDuration="1.433208905s" podCreationTimestamp="2025-10-27 19:41:05 +0000 UTC" firstStartedPulling="2025-10-27 19:41:05.607881455 +0000 UTC m=+21.374523695" lastFinishedPulling="2025-10-27 19:41:06.310178133 +0000 UTC m=+22.076820384" observedRunningTime="2025-10-27 19:41:06.433205236 +0000 UTC m=+22.199847496" watchObservedRunningTime="2025-10-27 19:41:06.433208905 +0000 UTC m=+22.199851167"
	
	
	==> storage-provisioner [dbccfaf79ec70e170fb96e4a63d52b098b52bfd3f97c48253850def4e8e07291] <==
	I1027 19:41:02.787149       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1027 19:41:02.795508       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1027 19:41:02.795562       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1027 19:41:02.798431       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:41:02.804103       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1027 19:41:02.804404       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1027 19:41:02.804589       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a17df180-0dc3-44e5-84d2-7fe25e687623", APIVersion:"v1", ResourceVersion:"405", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-095885_41904f9c-385b-4102-923d-127a6f3bd5fe became leader
	I1027 19:41:02.805185       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-095885_41904f9c-385b-4102-923d-127a6f3bd5fe!
	W1027 19:41:02.808622       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:41:02.814520       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1027 19:41:02.905952       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-095885_41904f9c-385b-4102-923d-127a6f3bd5fe!
	W1027 19:41:04.817564       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:41:04.822116       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:41:06.825587       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:41:06.830209       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:41:08.833526       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:41:08.837877       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:41:10.841759       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:41:10.850915       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:41:12.854047       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:41:12.858243       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:41:14.862755       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:41:14.871020       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
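Two patterns in the log above are worth separating from the actual failure. The kube-scheduler "Failed to watch ... is forbidden" errors are the usual transient noise while RBAC caches sync during control-plane startup; the "Caches are synced" line that follows shows they resolved. The repeated storage-provisioner warnings come from its leader election still using the deprecated v1 Endpoints lock (the k8s.io-minikube-hostpath object visible in its events). A minimal manual check of both, assuming a working kubeconfig for this profile:

	# scheduler RBAC should report "yes" once the control plane is up
	kubectl --context no-preload-095885 auth can-i list nodes --as=system:kube-scheduler
	# the Endpoints-based leader-election lock the provisioner keeps renewing
	kubectl --context no-preload-095885 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml
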
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-095885 -n no-preload-095885
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-095885 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.73s)
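For local triage (a sketch, assuming the no-preload-095885 profile still exists), the failing step from the audit log can be re-run with verbose logging, and the surviving pods filtered the same way the post-mortem does:

	out/minikube-linux-amd64 -p no-preload-095885 addons enable metrics-server --alsologtostderr -v=1
	kubectl --context no-preload-095885 get po -A --field-selector=status.phase!=Running
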

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (6.47s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-919237 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p embed-certs-919237 --alsologtostderr -v=1: exit status 80 (2.202466597s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-919237 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1027 19:41:58.280459  608326 out.go:360] Setting OutFile to fd 1 ...
	I1027 19:41:58.280758  608326 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:41:58.280770  608326 out.go:374] Setting ErrFile to fd 2...
	I1027 19:41:58.280776  608326 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:41:58.281018  608326 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-352833/.minikube/bin
	I1027 19:41:58.281340  608326 out.go:368] Setting JSON to false
	I1027 19:41:58.281412  608326 mustload.go:65] Loading cluster: embed-certs-919237
	I1027 19:41:58.281800  608326 config.go:182] Loaded profile config "embed-certs-919237": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:41:58.282280  608326 cli_runner.go:164] Run: docker container inspect embed-certs-919237 --format={{.State.Status}}
	I1027 19:41:58.302562  608326 host.go:66] Checking if "embed-certs-919237" exists ...
	I1027 19:41:58.302883  608326 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 19:41:58.363396  608326 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:77 OomKillDisable:false NGoroutines:86 SystemTime:2025-10-27 19:41:58.350973261 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 19:41:58.364030  608326 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-919237 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true
) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1027 19:41:58.367001  608326 out.go:179] * Pausing node embed-certs-919237 ... 
	I1027 19:41:58.368432  608326 host.go:66] Checking if "embed-certs-919237" exists ...
	I1027 19:41:58.368735  608326 ssh_runner.go:195] Run: systemctl --version
	I1027 19:41:58.368776  608326 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-919237
	I1027 19:41:58.389320  608326 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33445 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/embed-certs-919237/id_rsa Username:docker}
	I1027 19:41:58.493911  608326 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 19:41:58.509690  608326 pause.go:52] kubelet running: true
	I1027 19:41:58.509770  608326 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1027 19:41:58.710428  608326 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1027 19:41:58.710516  608326 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1027 19:41:58.789620  608326 cri.go:89] found id: "039af7dcecc8a433ded3d11e5ded2256d549ee2d08a3ebb68b26fce310e7bc20"
	I1027 19:41:58.789644  608326 cri.go:89] found id: "7e47ca072fa9116cec1fe31e6e1e2cc19a4993f2a1a0cb5170d906761e491b77"
	I1027 19:41:58.789650  608326 cri.go:89] found id: "289d461e95e5c9245c97d39c39a8fdc2ca0d89a5aaf6adc05990cee406a99fc5"
	I1027 19:41:58.789655  608326 cri.go:89] found id: "11808765eb85f990868220937b5849982fa806cf6e9924886c92e66e31f11278"
	I1027 19:41:58.789659  608326 cri.go:89] found id: "ae6c32d15d0a354896e509d903d2913f4e4cb318fee7570b0a381a4da1276a5b"
	I1027 19:41:58.789664  608326 cri.go:89] found id: "d5a5c65a74b4b0bac782941ddf5cfc5e1c95eb29dbc563a89bc74143a3d75be8"
	I1027 19:41:58.789668  608326 cri.go:89] found id: "f0dcb6f33c4a16c8aabf1c9522c219dfe57ce0438d6eedb8d11b3bbed06bf220"
	I1027 19:41:58.789672  608326 cri.go:89] found id: "d17bd312e4c2b6e68ce5e1c0006ad10d3d74b77c3bc3e8570e4526763c6914a9"
	I1027 19:41:58.789676  608326 cri.go:89] found id: "31682e1eceede1979fd31aa2e96a71541d29f7d036de012b0c0a406025482670"
	I1027 19:41:58.789695  608326 cri.go:89] found id: "2796a5fed0754fd4b112fae38588dfe25b86705e56508393208766dc3b088d33"
	I1027 19:41:58.789711  608326 cri.go:89] found id: "121601c64b1f8275f26411958ad9a6732beea758cb85fefc8db2ea3c291abd87"
	I1027 19:41:58.789715  608326 cri.go:89] found id: ""
	I1027 19:41:58.789770  608326 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 19:41:58.803766  608326 retry.go:31] will retry after 148.161639ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T19:41:58Z" level=error msg="open /run/runc: no such file or directory"
	I1027 19:41:58.952163  608326 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 19:41:58.967315  608326 pause.go:52] kubelet running: false
	I1027 19:41:58.967377  608326 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1027 19:41:59.127671  608326 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1027 19:41:59.127784  608326 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1027 19:41:59.209802  608326 cri.go:89] found id: "039af7dcecc8a433ded3d11e5ded2256d549ee2d08a3ebb68b26fce310e7bc20"
	I1027 19:41:59.209830  608326 cri.go:89] found id: "7e47ca072fa9116cec1fe31e6e1e2cc19a4993f2a1a0cb5170d906761e491b77"
	I1027 19:41:59.209836  608326 cri.go:89] found id: "289d461e95e5c9245c97d39c39a8fdc2ca0d89a5aaf6adc05990cee406a99fc5"
	I1027 19:41:59.209840  608326 cri.go:89] found id: "11808765eb85f990868220937b5849982fa806cf6e9924886c92e66e31f11278"
	I1027 19:41:59.209844  608326 cri.go:89] found id: "ae6c32d15d0a354896e509d903d2913f4e4cb318fee7570b0a381a4da1276a5b"
	I1027 19:41:59.209849  608326 cri.go:89] found id: "d5a5c65a74b4b0bac782941ddf5cfc5e1c95eb29dbc563a89bc74143a3d75be8"
	I1027 19:41:59.209853  608326 cri.go:89] found id: "f0dcb6f33c4a16c8aabf1c9522c219dfe57ce0438d6eedb8d11b3bbed06bf220"
	I1027 19:41:59.209857  608326 cri.go:89] found id: "d17bd312e4c2b6e68ce5e1c0006ad10d3d74b77c3bc3e8570e4526763c6914a9"
	I1027 19:41:59.209861  608326 cri.go:89] found id: "31682e1eceede1979fd31aa2e96a71541d29f7d036de012b0c0a406025482670"
	I1027 19:41:59.209868  608326 cri.go:89] found id: "2796a5fed0754fd4b112fae38588dfe25b86705e56508393208766dc3b088d33"
	I1027 19:41:59.209872  608326 cri.go:89] found id: "121601c64b1f8275f26411958ad9a6732beea758cb85fefc8db2ea3c291abd87"
	I1027 19:41:59.209876  608326 cri.go:89] found id: ""
	I1027 19:41:59.209938  608326 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 19:41:59.224991  608326 retry.go:31] will retry after 301.397611ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T19:41:59Z" level=error msg="open /run/runc: no such file or directory"
	I1027 19:41:59.527578  608326 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 19:41:59.542607  608326 pause.go:52] kubelet running: false
	I1027 19:41:59.542675  608326 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1027 19:41:59.696406  608326 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1027 19:41:59.696487  608326 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1027 19:41:59.767783  608326 cri.go:89] found id: "039af7dcecc8a433ded3d11e5ded2256d549ee2d08a3ebb68b26fce310e7bc20"
	I1027 19:41:59.767809  608326 cri.go:89] found id: "7e47ca072fa9116cec1fe31e6e1e2cc19a4993f2a1a0cb5170d906761e491b77"
	I1027 19:41:59.767813  608326 cri.go:89] found id: "289d461e95e5c9245c97d39c39a8fdc2ca0d89a5aaf6adc05990cee406a99fc5"
	I1027 19:41:59.767816  608326 cri.go:89] found id: "11808765eb85f990868220937b5849982fa806cf6e9924886c92e66e31f11278"
	I1027 19:41:59.767819  608326 cri.go:89] found id: "ae6c32d15d0a354896e509d903d2913f4e4cb318fee7570b0a381a4da1276a5b"
	I1027 19:41:59.767822  608326 cri.go:89] found id: "d5a5c65a74b4b0bac782941ddf5cfc5e1c95eb29dbc563a89bc74143a3d75be8"
	I1027 19:41:59.767824  608326 cri.go:89] found id: "f0dcb6f33c4a16c8aabf1c9522c219dfe57ce0438d6eedb8d11b3bbed06bf220"
	I1027 19:41:59.767827  608326 cri.go:89] found id: "d17bd312e4c2b6e68ce5e1c0006ad10d3d74b77c3bc3e8570e4526763c6914a9"
	I1027 19:41:59.767829  608326 cri.go:89] found id: "31682e1eceede1979fd31aa2e96a71541d29f7d036de012b0c0a406025482670"
	I1027 19:41:59.767840  608326 cri.go:89] found id: "2796a5fed0754fd4b112fae38588dfe25b86705e56508393208766dc3b088d33"
	I1027 19:41:59.767843  608326 cri.go:89] found id: "121601c64b1f8275f26411958ad9a6732beea758cb85fefc8db2ea3c291abd87"
	I1027 19:41:59.767845  608326 cri.go:89] found id: ""
	I1027 19:41:59.767883  608326 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 19:41:59.780439  608326 retry.go:31] will retry after 375.462894ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T19:41:59Z" level=error msg="open /run/runc: no such file or directory"
	I1027 19:42:00.157175  608326 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 19:42:00.171626  608326 pause.go:52] kubelet running: false
	I1027 19:42:00.171694  608326 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1027 19:42:00.321069  608326 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1027 19:42:00.321221  608326 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1027 19:42:00.393156  608326 cri.go:89] found id: "039af7dcecc8a433ded3d11e5ded2256d549ee2d08a3ebb68b26fce310e7bc20"
	I1027 19:42:00.393183  608326 cri.go:89] found id: "7e47ca072fa9116cec1fe31e6e1e2cc19a4993f2a1a0cb5170d906761e491b77"
	I1027 19:42:00.393187  608326 cri.go:89] found id: "289d461e95e5c9245c97d39c39a8fdc2ca0d89a5aaf6adc05990cee406a99fc5"
	I1027 19:42:00.393190  608326 cri.go:89] found id: "11808765eb85f990868220937b5849982fa806cf6e9924886c92e66e31f11278"
	I1027 19:42:00.393193  608326 cri.go:89] found id: "ae6c32d15d0a354896e509d903d2913f4e4cb318fee7570b0a381a4da1276a5b"
	I1027 19:42:00.393196  608326 cri.go:89] found id: "d5a5c65a74b4b0bac782941ddf5cfc5e1c95eb29dbc563a89bc74143a3d75be8"
	I1027 19:42:00.393199  608326 cri.go:89] found id: "f0dcb6f33c4a16c8aabf1c9522c219dfe57ce0438d6eedb8d11b3bbed06bf220"
	I1027 19:42:00.393202  608326 cri.go:89] found id: "d17bd312e4c2b6e68ce5e1c0006ad10d3d74b77c3bc3e8570e4526763c6914a9"
	I1027 19:42:00.393204  608326 cri.go:89] found id: "31682e1eceede1979fd31aa2e96a71541d29f7d036de012b0c0a406025482670"
	I1027 19:42:00.393210  608326 cri.go:89] found id: "2796a5fed0754fd4b112fae38588dfe25b86705e56508393208766dc3b088d33"
	I1027 19:42:00.393213  608326 cri.go:89] found id: "121601c64b1f8275f26411958ad9a6732beea758cb85fefc8db2ea3c291abd87"
	I1027 19:42:00.393217  608326 cri.go:89] found id: ""
	I1027 19:42:00.393297  608326 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 19:42:00.408744  608326 out.go:203] 
	W1027 19:42:00.410535  608326 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T19:42:00Z" level=error msg="open /run/runc: no such file or directory"
	
	W1027 19:42:00.410555  608326 out.go:285] * 
	W1027 19:42:00.415287  608326 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1027 19:42:00.416962  608326 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p embed-certs-919237 --alsologtostderr -v=1 failed: exit status 80
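Every retry above dies on the same root cause: "sudo runc list -f json" cannot open /run/runc, the default runc state directory, so the pause helper never gets a container list and the backoff in retry.go only re-runs the same failing command. A hedged manual check from inside the node, assuming SSH access to the profile:

	# does the default runc root exist at all?
	out/minikube-linux-amd64 -p embed-certs-919237 ssh -- sudo ls /run/runc
	# the same listing the pause helper attempts
	out/minikube-linux-amd64 -p embed-certs-919237 ssh -- sudo runc --root /run/runc list -f json
	# crictl queries CRI-O directly and does not depend on the runc state directory
	out/minikube-linux-amd64 -p embed-certs-919237 ssh -- sudo crictl ps
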
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-919237
helpers_test.go:243: (dbg) docker inspect embed-certs-919237:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "37808aa2dc4c4127748e535c42c1ec4333eeed40f14d98040de3f085b9d38b11",
	        "Created": "2025-10-27T19:39:55.06890143Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 595076,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-27T19:41:01.094759341Z",
	            "FinishedAt": "2025-10-27T19:40:59.997815947Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/37808aa2dc4c4127748e535c42c1ec4333eeed40f14d98040de3f085b9d38b11/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/37808aa2dc4c4127748e535c42c1ec4333eeed40f14d98040de3f085b9d38b11/hostname",
	        "HostsPath": "/var/lib/docker/containers/37808aa2dc4c4127748e535c42c1ec4333eeed40f14d98040de3f085b9d38b11/hosts",
	        "LogPath": "/var/lib/docker/containers/37808aa2dc4c4127748e535c42c1ec4333eeed40f14d98040de3f085b9d38b11/37808aa2dc4c4127748e535c42c1ec4333eeed40f14d98040de3f085b9d38b11-json.log",
	        "Name": "/embed-certs-919237",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "embed-certs-919237:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-919237",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "37808aa2dc4c4127748e535c42c1ec4333eeed40f14d98040de3f085b9d38b11",
	                "LowerDir": "/var/lib/docker/overlay2/1a197dc40b03763e74d9e2a466d399c472fd8d02996bb7655be8275cee948408-init/diff:/var/lib/docker/overlay2/71b61ec94610a35f2d924dec358052d4c154c36b3fe219802f60246ca2dc7f45/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1a197dc40b03763e74d9e2a466d399c472fd8d02996bb7655be8275cee948408/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1a197dc40b03763e74d9e2a466d399c472fd8d02996bb7655be8275cee948408/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1a197dc40b03763e74d9e2a466d399c472fd8d02996bb7655be8275cee948408/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-919237",
	                "Source": "/var/lib/docker/volumes/embed-certs-919237/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-919237",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-919237",
	                "name.minikube.sigs.k8s.io": "embed-certs-919237",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "25e7f0ae99fb61ccb55e65b521f4a1429e4fc658c4e3437bc5de7a9bbaa40a2a",
	            "SandboxKey": "/var/run/docker/netns/25e7f0ae99fb",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33445"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33446"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33449"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33447"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33448"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-919237": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "d6:83:26:8b:b3:ca",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "999393307eef706ac69479cce1c654e615bbf1533042b5bf717c2605b3087cda",
	                    "EndpointID": "b08e9f9071cbcc8b4abf81b36718fc0b0c73b18c70ca41a4a70b65f312907880",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-919237",
	                        "37808aa2dc4c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
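The inspect output confirms the container is running and not paused at the Docker level ("Paused": false), so the exit status 80 came from the CRI layer rather than the container state. Two host-side checks against the same data (the second template is the one minikube itself runs in the stderr above, and should print the 33445 SSH port shown under "22/tcp"):

	docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' embed-certs-919237
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' embed-certs-919237
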
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-919237 -n embed-certs-919237
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-919237 -n embed-certs-919237: exit status 2 (377.432033ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-919237 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-919237 logs -n 25: (1.251195967s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p functional-051715 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio                                                                │ functional-051715            │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │                     │
	│ start   │ -p functional-051715 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                          │ functional-051715            │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │                     │
	│ addons  │ functional-051715 addons list                                                                                                                                            │ functional-051715            │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │ 27 Oct 25 19:04 UTC │
	│ addons  │ functional-051715 addons list -o json                                                                                                                                    │ functional-051715            │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │ 27 Oct 25 19:04 UTC │
	│ image   │ functional-051715 image load --daemon kicbase/echo-server:functional-051715 --alsologtostderr                                                                            │ functional-051715            │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │ 27 Oct 25 19:04 UTC │
	│ image   │ functional-051715 image ls                                                                                                                                               │ functional-051715            │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │ 27 Oct 25 19:04 UTC │
	│ image   │ functional-051715 image load --daemon kicbase/echo-server:functional-051715 --alsologtostderr                                                                            │ functional-051715            │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │ 27 Oct 25 19:04 UTC │
	│ image   │ functional-051715 image ls                                                                                                                                               │ functional-051715            │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │ 27 Oct 25 19:04 UTC │
	│ image   │ functional-051715 image load --daemon kicbase/echo-server:functional-051715 --alsologtostderr                                                                            │ functional-051715            │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │ 27 Oct 25 19:04 UTC │
	│ image   │ functional-051715 image ls                                                                                                                                               │ functional-051715            │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │ 27 Oct 25 19:04 UTC │
	│ image   │ functional-051715 image save kicbase/echo-server:functional-051715 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr          │ functional-051715            │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │ 27 Oct 25 19:04 UTC │
	│ image   │ functional-051715 image rm kicbase/echo-server:functional-051715 --alsologtostderr                                                                                       │ functional-051715            │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │ 27 Oct 25 19:04 UTC │
	│ addons  │ enable dashboard -p embed-certs-919237 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                            │ embed-certs-919237           │ jenkins │ v1.37.0 │ 27 Oct 25 19:41 UTC │ 27 Oct 25 19:41 UTC │
	│ start   │ -p embed-certs-919237 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                   │ embed-certs-919237           │ jenkins │ v1.37.0 │ 27 Oct 25 19:41 UTC │ 27 Oct 25 19:41 UTC │
	│ image   │ old-k8s-version-468959 image list --format=json                                                                                                                          │ old-k8s-version-468959       │ jenkins │ v1.37.0 │ 27 Oct 25 19:41 UTC │ 27 Oct 25 19:41 UTC │
	│ pause   │ -p old-k8s-version-468959 --alsologtostderr -v=1                                                                                                                         │ old-k8s-version-468959       │ jenkins │ v1.37.0 │ 27 Oct 25 19:41 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-095885 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                  │ no-preload-095885            │ jenkins │ v1.37.0 │ 27 Oct 25 19:41 UTC │                     │
	│ stop    │ -p no-preload-095885 --alsologtostderr -v=3                                                                                                                              │ no-preload-095885            │ jenkins │ v1.37.0 │ 27 Oct 25 19:41 UTC │ 27 Oct 25 19:41 UTC │
	│ delete  │ -p old-k8s-version-468959                                                                                                                                                │ old-k8s-version-468959       │ jenkins │ v1.37.0 │ 27 Oct 25 19:41 UTC │ 27 Oct 25 19:41 UTC │
	│ delete  │ -p old-k8s-version-468959                                                                                                                                                │ old-k8s-version-468959       │ jenkins │ v1.37.0 │ 27 Oct 25 19:41 UTC │ 27 Oct 25 19:41 UTC │
	│ start   │ -p default-k8s-diff-port-813397 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ default-k8s-diff-port-813397 │ jenkins │ v1.37.0 │ 27 Oct 25 19:41 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-095885 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                             │ no-preload-095885            │ jenkins │ v1.37.0 │ 27 Oct 25 19:41 UTC │ 27 Oct 25 19:41 UTC │
	│ start   │ -p no-preload-095885 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                  │ no-preload-095885            │ jenkins │ v1.37.0 │ 27 Oct 25 19:41 UTC │                     │
	│ image   │ embed-certs-919237 image list --format=json                                                                                                                              │ embed-certs-919237           │ jenkins │ v1.37.0 │ 27 Oct 25 19:41 UTC │ 27 Oct 25 19:41 UTC │
	│ pause   │ -p embed-certs-919237 --alsologtostderr -v=1                                                                                                                             │ embed-certs-919237           │ jenkins │ v1.37.0 │ 27 Oct 25 19:41 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 19:41:33
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 19:41:33.514682  604470 out.go:360] Setting OutFile to fd 1 ...
	I1027 19:41:33.515411  604470 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:41:33.515424  604470 out.go:374] Setting ErrFile to fd 2...
	I1027 19:41:33.515429  604470 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:41:33.515802  604470 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-352833/.minikube/bin
	I1027 19:41:33.516655  604470 out.go:368] Setting JSON to false
	I1027 19:41:33.518426  604470 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":8643,"bootTime":1761585451,"procs":466,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1027 19:41:33.518533  604470 start.go:141] virtualization: kvm guest
	I1027 19:41:33.521798  604470 out.go:179] * [no-preload-095885] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1027 19:41:33.523807  604470 notify.go:220] Checking for updates...
	I1027 19:41:33.523873  604470 out.go:179]   - MINIKUBE_LOCATION=21801
	I1027 19:41:33.525256  604470 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 19:41:33.527429  604470 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21801-352833/kubeconfig
	I1027 19:41:33.529037  604470 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21801-352833/.minikube
	I1027 19:41:33.530518  604470 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1027 19:41:33.531892  604470 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 19:41:33.533881  604470 config.go:182] Loaded profile config "no-preload-095885": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:41:33.534704  604470 driver.go:421] Setting default libvirt URI to qemu:///system
	I1027 19:41:33.565326  604470 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1027 19:41:33.565443  604470 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 19:41:33.642975  604470 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-27 19:41:33.629380203 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 19:41:33.643123  604470 docker.go:318] overlay module found
	I1027 19:41:33.645093  604470 out.go:179] * Using the docker driver based on existing profile
	I1027 19:41:33.646962  604470 start.go:305] selected driver: docker
	I1027 19:41:33.646981  604470 start.go:925] validating driver "docker" against &{Name:no-preload-095885 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-095885 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 19:41:33.647102  604470 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 19:41:33.647893  604470 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 19:41:33.721579  604470 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-27 19:41:33.709869722 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 19:41:33.721933  604470 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 19:41:33.721961  604470 cni.go:84] Creating CNI manager for ""
	I1027 19:41:33.722022  604470 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 19:41:33.722069  604470 start.go:349] cluster config:
	{Name:no-preload-095885 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-095885 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 19:41:33.726488  604470 out.go:179] * Starting "no-preload-095885" primary control-plane node in "no-preload-095885" cluster
	I1027 19:41:33.728321  604470 cache.go:123] Beginning downloading kic base image for docker with crio
	I1027 19:41:33.729739  604470 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1027 19:41:33.731046  604470 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 19:41:33.731164  604470 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1027 19:41:33.731217  604470 profile.go:143] Saving config to /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/no-preload-095885/config.json ...
	I1027 19:41:33.731442  604470 cache.go:107] acquiring lock: {Name:mk6cfd97bf118a5d00dc3712cc15a56368d5b133 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 19:41:33.731465  604470 cache.go:107] acquiring lock: {Name:mk849f9e68d9ca24fd7e38d749b2eace2906ff3c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 19:41:33.731506  604470 cache.go:107] acquiring lock: {Name:mk5369f4c071c5263ddc432fb15330ba0423cdfe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 19:41:33.731514  604470 cache.go:107] acquiring lock: {Name:mk55852f2c481df2db7f9a6da7c274b8e85d7edb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 19:41:33.731573  604470 cache.go:115] /home/jenkins/minikube-integration/21801-352833/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1027 19:41:33.731557  604470 cache.go:107] acquiring lock: {Name:mk5cfaf9a7e19dd9a7184f304b6ee85a4979e6eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 19:41:33.731591  604470 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21801-352833/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1" took 145.415µs
	I1027 19:41:33.731600  604470 cache.go:115] /home/jenkins/minikube-integration/21801-352833/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1027 19:41:33.731613  604470 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21801-352833/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1027 19:41:33.731613  604470 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21801-352833/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1" took 115.184µs
	I1027 19:41:33.731579  604470 cache.go:115] /home/jenkins/minikube-integration/21801-352833/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1027 19:41:33.731628  604470 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21801-352833/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1027 19:41:33.731628  604470 cache.go:115] /home/jenkins/minikube-integration/21801-352833/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I1027 19:41:33.731594  604470 cache.go:115] /home/jenkins/minikube-integration/21801-352833/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1027 19:41:33.731442  604470 cache.go:107] acquiring lock: {Name:mk01b17b21d46030a4c787d0bd4e9fe1b72ed247 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 19:41:33.731643  604470 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21801-352833/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 89.177µs
	I1027 19:41:33.731647  604470 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21801-352833/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1" took 142.388µs
	I1027 19:41:33.731636  604470 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21801-352833/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1" took 208.835µs
	I1027 19:41:33.731650  604470 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21801-352833/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1027 19:41:33.731639  604470 cache.go:107] acquiring lock: {Name:mka4e762c0cdf96fdeade218e5825c211c417983 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 19:41:33.731669  604470 cache.go:115] /home/jenkins/minikube-integration/21801-352833/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1027 19:41:33.731656  604470 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21801-352833/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1027 19:41:33.731661  604470 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21801-352833/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1027 19:41:33.731608  604470 cache.go:107] acquiring lock: {Name:mk2ed104f61ec06a04ca37afb2389902cee0a37d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 19:41:33.731682  604470 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21801-352833/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 258.93µs
	I1027 19:41:33.731825  604470 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21801-352833/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1027 19:41:33.731690  604470 cache.go:115] /home/jenkins/minikube-integration/21801-352833/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1027 19:41:33.731840  604470 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21801-352833/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 201.476µs
	I1027 19:41:33.731842  604470 cache.go:115] /home/jenkins/minikube-integration/21801-352833/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1027 19:41:33.731849  604470 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21801-352833/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1027 19:41:33.731856  604470 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21801-352833/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 250.202µs
	I1027 19:41:33.731876  604470 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21801-352833/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1027 19:41:33.731896  604470 cache.go:87] Successfully saved all images to host disk.
	I1027 19:41:33.755554  604470 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1027 19:41:33.755575  604470 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1027 19:41:33.755596  604470 cache.go:232] Successfully downloaded all kic artifacts
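	
	The cache stanza above is the no-preload flow: for each control-plane image, minikube checks whether a tarball already exists under .minikube/cache/images and only re-saves it when missing, which is why every image logs back-to-back "exists" and "succeeded" lines. A minimal sketch of that check for one image, assuming a $MINIKUBE_HOME layout like the log's; `docker save` here is an illustrative stand-in for minikube's internal save, not its actual code path:
	
		# hypothetical one-image version of the cache check logged above
		tar="$MINIKUBE_HOME/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1"
		if [ -f "$tar" ]; then
		    echo "cache image exists, skipping save"   # matches the "exists ... succeeded" pairs
		else
		    mkdir -p "$(dirname "$tar")"
		    docker save registry.k8s.io/kube-scheduler:v1.34.1 -o "$tar"
		fi
	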
	I1027 19:41:33.755626  604470 start.go:360] acquireMachinesLock for no-preload-095885: {Name:mk5366014920cd048c3c430c094258bb47a34d04 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 19:41:33.755690  604470 start.go:364] duration metric: took 42.502µs to acquireMachinesLock for "no-preload-095885"
	I1027 19:41:33.755710  604470 start.go:96] Skipping create...Using existing machine configuration
	I1027 19:41:33.755718  604470 fix.go:54] fixHost starting: 
	I1027 19:41:33.755966  604470 cli_runner.go:164] Run: docker container inspect no-preload-095885 --format={{.State.Status}}
	I1027 19:41:33.777426  604470 fix.go:112] recreateIfNeeded on no-preload-095885: state=Stopped err=<nil>
	W1027 19:41:33.777478  604470 fix.go:138] unexpected machine state, will restart: <nil>
	I1027 19:41:33.194337  601731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1027 19:41:33.218799  601731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/default-k8s-diff-port-813397/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1027 19:41:33.242500  601731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/default-k8s-diff-port-813397/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1027 19:41:33.265885  601731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/default-k8s-diff-port-813397/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1027 19:41:33.290549  601731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/default-k8s-diff-port-813397/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1027 19:41:33.314587  601731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/certs/356415.pem --> /usr/share/ca-certificates/356415.pem (1338 bytes)
	I1027 19:41:33.338249  601731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/files/etc/ssl/certs/3564152.pem --> /usr/share/ca-certificates/3564152.pem (1708 bytes)
	I1027 19:41:33.361878  601731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1027 19:41:33.385457  601731 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1027 19:41:33.402288  601731 ssh_runner.go:195] Run: openssl version
	I1027 19:41:33.409720  601731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3564152.pem && ln -fs /usr/share/ca-certificates/3564152.pem /etc/ssl/certs/3564152.pem"
	I1027 19:41:33.421080  601731 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3564152.pem
	I1027 19:41:33.426177  601731 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 27 19:02 /usr/share/ca-certificates/3564152.pem
	I1027 19:41:33.426242  601731 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3564152.pem
	I1027 19:41:33.470633  601731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3564152.pem /etc/ssl/certs/3ec20f2e.0"
	I1027 19:41:33.481461  601731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1027 19:41:33.493492  601731 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1027 19:41:33.498757  601731 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 27 18:56 /usr/share/ca-certificates/minikubeCA.pem
	I1027 19:41:33.498838  601731 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1027 19:41:33.542807  601731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1027 19:41:33.553991  601731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/356415.pem && ln -fs /usr/share/ca-certificates/356415.pem /etc/ssl/certs/356415.pem"
	I1027 19:41:33.566061  601731 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/356415.pem
	I1027 19:41:33.570984  601731 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 27 19:02 /usr/share/ca-certificates/356415.pem
	I1027 19:41:33.571064  601731 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/356415.pem
	I1027 19:41:33.629950  601731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/356415.pem /etc/ssl/certs/51391683.0"
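	
	The openssl/ln pairs above follow the standard OpenSSL subject-hash layout for CA lookup: each certificate copied under /usr/share/ca-certificates is exposed through a symlink in /etc/ssl/certs named <subject-hash>.0. A sketch of the same pattern for the minikubeCA.pem seen in the log; the wrapper script itself is illustrative, not minikube source:
	
		pem=/usr/share/ca-certificates/minikubeCA.pem
		# subject hash OpenSSL uses to locate CAs (b5213941 in the log above)
		hash=$(openssl x509 -hash -noout -in "$pem")
		# hash-named symlink so TLS clients can resolve the CA during verification
		sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"
	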
	I1027 19:41:33.642594  601731 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1027 19:41:33.647802  601731 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1027 19:41:33.647868  601731 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-813397 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-813397 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 19:41:33.647939  601731 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 19:41:33.647995  601731 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 19:41:33.695399  601731 cri.go:89] found id: ""
	I1027 19:41:33.696577  601731 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1027 19:41:33.708281  601731 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1027 19:41:33.718397  601731 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1027 19:41:33.718470  601731 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1027 19:41:33.728790  601731 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1027 19:41:33.728808  601731 kubeadm.go:157] found existing configuration files:
	
	I1027 19:41:33.728869  601731 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1027 19:41:33.738176  601731 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1027 19:41:33.738253  601731 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1027 19:41:33.747937  601731 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1027 19:41:33.758236  601731 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1027 19:41:33.758298  601731 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1027 19:41:33.767392  601731 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1027 19:41:33.777962  601731 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1027 19:41:33.778033  601731 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1027 19:41:33.788710  601731 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1027 19:41:33.799716  601731 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1027 19:41:33.799778  601731 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
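	
	The four grep/rm pairs above are the stale-kubeconfig cleanup: each file under /etc/kubernetes is kept only if it already references the expected control-plane endpoint; otherwise it is removed so kubeadm regenerates it. Condensed into a loop, with the port-8444 endpoint from this run; the loop form is an editor's sketch, not minikube's code:
	
		endpoint="https://control-plane.minikube.internal:8444"
		for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
		    # missing file or wrong endpoint -> remove so `kubeadm init` rewrites it
		    sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
		done
	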
	I1027 19:41:33.809879  601731 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1027 19:41:33.861238  601731 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1027 19:41:33.861332  601731 kubeadm.go:318] [preflight] Running pre-flight checks
	I1027 19:41:33.907926  601731 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1027 19:41:33.908017  601731 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1042-gcp
	I1027 19:41:33.908061  601731 kubeadm.go:318] OS: Linux
	I1027 19:41:33.908119  601731 kubeadm.go:318] CGROUPS_CPU: enabled
	I1027 19:41:33.908222  601731 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1027 19:41:33.908299  601731 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1027 19:41:33.908409  601731 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1027 19:41:33.908489  601731 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1027 19:41:33.908553  601731 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1027 19:41:33.908641  601731 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1027 19:41:33.908719  601731 kubeadm.go:318] CGROUPS_IO: enabled
	I1027 19:41:33.984763  601731 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1027 19:41:33.984961  601731 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1027 19:41:33.985176  601731 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1027 19:41:33.993580  601731 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1027 19:41:31.563826  594803 pod_ready.go:104] pod "coredns-66bc5c9577-9b9tz" is not "Ready", error: <nil>
	W1027 19:41:33.564360  594803 pod_ready.go:104] pod "coredns-66bc5c9577-9b9tz" is not "Ready", error: <nil>
	I1027 19:41:33.140207  565798 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1027 19:41:33.140728  565798 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1027 19:41:33.140789  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 19:41:33.140851  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 19:41:33.175830  565798 cri.go:89] found id: "f34fc0e2944cfceb17dc29d80d9459a84377ffb9e427b9efe6e57ccc03a358c8"
	I1027 19:41:33.175856  565798 cri.go:89] found id: ""
	I1027 19:41:33.175867  565798 logs.go:282] 1 containers: [f34fc0e2944cfceb17dc29d80d9459a84377ffb9e427b9efe6e57ccc03a358c8]
	I1027 19:41:33.175931  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:41:33.180762  565798 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 19:41:33.180837  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 19:41:33.221850  565798 cri.go:89] found id: ""
	I1027 19:41:33.221877  565798 logs.go:282] 0 containers: []
	W1027 19:41:33.221885  565798 logs.go:284] No container was found matching "etcd"
	I1027 19:41:33.221891  565798 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 19:41:33.221938  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 19:41:33.258954  565798 cri.go:89] found id: ""
	I1027 19:41:33.258985  565798 logs.go:282] 0 containers: []
	W1027 19:41:33.258997  565798 logs.go:284] No container was found matching "coredns"
	I1027 19:41:33.259005  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 19:41:33.259063  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 19:41:33.291276  565798 cri.go:89] found id: "15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8"
	I1027 19:41:33.291297  565798 cri.go:89] found id: ""
	I1027 19:41:33.291307  565798 logs.go:282] 1 containers: [15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8]
	I1027 19:41:33.291378  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:41:33.295942  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 19:41:33.296011  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 19:41:33.327200  565798 cri.go:89] found id: ""
	I1027 19:41:33.327230  565798 logs.go:282] 0 containers: []
	W1027 19:41:33.327241  565798 logs.go:284] No container was found matching "kube-proxy"
	I1027 19:41:33.327250  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 19:41:33.327332  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 19:41:33.360699  565798 cri.go:89] found id: "38a297db55d8c45b09d97e05e26cfe590841ea63fa7ee874cf818f8ae5fcff77"
	I1027 19:41:33.360724  565798 cri.go:89] found id: ""
	I1027 19:41:33.360735  565798 logs.go:282] 1 containers: [38a297db55d8c45b09d97e05e26cfe590841ea63fa7ee874cf818f8ae5fcff77]
	I1027 19:41:33.360801  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:41:33.366056  565798 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 19:41:33.366187  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 19:41:33.397708  565798 cri.go:89] found id: ""
	I1027 19:41:33.397739  565798 logs.go:282] 0 containers: []
	W1027 19:41:33.397758  565798 logs.go:284] No container was found matching "kindnet"
	I1027 19:41:33.397767  565798 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 19:41:33.397834  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 19:41:33.431241  565798 cri.go:89] found id: ""
	I1027 19:41:33.431280  565798 logs.go:282] 0 containers: []
	W1027 19:41:33.431291  565798 logs.go:284] No container was found matching "storage-provisioner"
	I1027 19:41:33.431305  565798 logs.go:123] Gathering logs for dmesg ...
	I1027 19:41:33.431324  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 19:41:33.457468  565798 logs.go:123] Gathering logs for describe nodes ...
	I1027 19:41:33.457511  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 19:41:33.527661  565798 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 19:41:33.527677  565798 logs.go:123] Gathering logs for kube-apiserver [f34fc0e2944cfceb17dc29d80d9459a84377ffb9e427b9efe6e57ccc03a358c8] ...
	I1027 19:41:33.527691  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f34fc0e2944cfceb17dc29d80d9459a84377ffb9e427b9efe6e57ccc03a358c8"
	I1027 19:41:33.571916  565798 logs.go:123] Gathering logs for kube-scheduler [15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8] ...
	I1027 19:41:33.572034  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8"
	I1027 19:41:33.653029  565798 logs.go:123] Gathering logs for kube-controller-manager [38a297db55d8c45b09d97e05e26cfe590841ea63fa7ee874cf818f8ae5fcff77] ...
	I1027 19:41:33.653063  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 38a297db55d8c45b09d97e05e26cfe590841ea63fa7ee874cf818f8ae5fcff77"
	I1027 19:41:33.694773  565798 logs.go:123] Gathering logs for CRI-O ...
	I1027 19:41:33.694814  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 19:41:33.753557  565798 logs.go:123] Gathering logs for container status ...
	I1027 19:41:33.753603  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 19:41:33.792559  565798 logs.go:123] Gathering logs for kubelet ...
	I1027 19:41:33.792601  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 19:41:36.406613  565798 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1027 19:41:36.407062  565798 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1027 19:41:36.407124  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 19:41:36.407210  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 19:41:36.437491  565798 cri.go:89] found id: "f34fc0e2944cfceb17dc29d80d9459a84377ffb9e427b9efe6e57ccc03a358c8"
	I1027 19:41:36.437514  565798 cri.go:89] found id: ""
	I1027 19:41:36.437525  565798 logs.go:282] 1 containers: [f34fc0e2944cfceb17dc29d80d9459a84377ffb9e427b9efe6e57ccc03a358c8]
	I1027 19:41:36.437589  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:41:36.442000  565798 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 19:41:36.442074  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 19:41:36.469993  565798 cri.go:89] found id: ""
	I1027 19:41:36.470025  565798 logs.go:282] 0 containers: []
	W1027 19:41:36.470034  565798 logs.go:284] No container was found matching "etcd"
	I1027 19:41:36.470043  565798 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 19:41:36.470125  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 19:41:36.498575  565798 cri.go:89] found id: ""
	I1027 19:41:36.498617  565798 logs.go:282] 0 containers: []
	W1027 19:41:36.498629  565798 logs.go:284] No container was found matching "coredns"
	I1027 19:41:36.498638  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 19:41:36.498692  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 19:41:36.528423  565798 cri.go:89] found id: "15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8"
	I1027 19:41:36.528443  565798 cri.go:89] found id: ""
	I1027 19:41:36.528452  565798 logs.go:282] 1 containers: [15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8]
	I1027 19:41:36.528501  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:41:36.532552  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 19:41:36.532614  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
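	
	The repeating "Checking apiserver healthz ... connection refused" stanzas above are iterations of a wait loop: probe /healthz on the API server and, while it fails, enumerate CRI containers and gather their logs before retrying. The probe itself reduces to roughly the following; `curl` stands in for minikube's internal HTTP client, so treat this as an approximation:
	
		url="https://192.168.103.2:8443/healthz"
		# -k: skip CA verification purely for this liveness probe
		until curl -ksf --max-time 2 "$url" >/dev/null; do
		    sleep 2   # minikube interleaves log gathering here before the next probe
		done
	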
	I1027 19:41:33.996937  601731 out.go:252]   - Generating certificates and keys ...
	I1027 19:41:33.997063  601731 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1027 19:41:33.997199  601731 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1027 19:41:34.054826  601731 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1027 19:41:34.221369  601731 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1027 19:41:34.781385  601731 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1027 19:41:35.318555  601731 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1027 19:41:35.767616  601731 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1027 19:41:35.767790  601731 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-813397 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1027 19:41:36.405347  601731 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1027 19:41:36.405616  601731 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-813397 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1027 19:41:36.791820  601731 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1027 19:41:37.058751  601731 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1027 19:41:37.258786  601731 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1027 19:41:37.258878  601731 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1027 19:41:37.352340  601731 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1027 19:41:37.607719  601731 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1027 19:41:37.743836  601731 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1027 19:41:38.112562  601731 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1027 19:41:38.293385  601731 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1027 19:41:38.294242  601731 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1027 19:41:38.298814  601731 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1027 19:41:33.779666  604470 out.go:252] * Restarting existing docker container for "no-preload-095885" ...
	I1027 19:41:33.779763  604470 cli_runner.go:164] Run: docker start no-preload-095885
	I1027 19:41:34.076071  604470 cli_runner.go:164] Run: docker container inspect no-preload-095885 --format={{.State.Status}}
	I1027 19:41:34.094890  604470 kic.go:430] container "no-preload-095885" state is running.
	I1027 19:41:34.095320  604470 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-095885
	I1027 19:41:34.114967  604470 profile.go:143] Saving config to /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/no-preload-095885/config.json ...
	I1027 19:41:34.115303  604470 machine.go:93] provisionDockerMachine start ...
	I1027 19:41:34.115382  604470 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-095885
	I1027 19:41:34.136967  604470 main.go:141] libmachine: Using SSH client type: native
	I1027 19:41:34.137304  604470 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33455 <nil> <nil>}
	I1027 19:41:34.137322  604470 main.go:141] libmachine: About to run SSH command:
	hostname
	I1027 19:41:34.137898  604470 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48310->127.0.0.1:33455: read: connection reset by peer
	I1027 19:41:37.281569  604470 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-095885
	
	I1027 19:41:37.281596  604470 ubuntu.go:182] provisioning hostname "no-preload-095885"
	I1027 19:41:37.281656  604470 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-095885
	I1027 19:41:37.301392  604470 main.go:141] libmachine: Using SSH client type: native
	I1027 19:41:37.301645  604470 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33455 <nil> <nil>}
	I1027 19:41:37.301664  604470 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-095885 && echo "no-preload-095885" | sudo tee /etc/hostname
	I1027 19:41:37.455099  604470 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-095885
	
	I1027 19:41:37.455202  604470 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-095885
	I1027 19:41:37.475318  604470 main.go:141] libmachine: Using SSH client type: native
	I1027 19:41:37.475622  604470 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33455 <nil> <nil>}
	I1027 19:41:37.475644  604470 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-095885' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-095885/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-095885' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1027 19:41:37.621398  604470 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1027 19:41:37.621435  604470 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21801-352833/.minikube CaCertPath:/home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21801-352833/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21801-352833/.minikube}
	I1027 19:41:37.621491  604470 ubuntu.go:190] setting up certificates
	I1027 19:41:37.621510  604470 provision.go:84] configureAuth start
	I1027 19:41:37.621595  604470 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-095885
	I1027 19:41:37.641096  604470 provision.go:143] copyHostCerts
	I1027 19:41:37.641197  604470 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-352833/.minikube/ca.pem, removing ...
	I1027 19:41:37.641215  604470 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-352833/.minikube/ca.pem
	I1027 19:41:37.641290  604470 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21801-352833/.minikube/ca.pem (1078 bytes)
	I1027 19:41:37.641404  604470 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-352833/.minikube/cert.pem, removing ...
	I1027 19:41:37.641413  604470 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-352833/.minikube/cert.pem
	I1027 19:41:37.641451  604470 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21801-352833/.minikube/cert.pem (1123 bytes)
	I1027 19:41:37.641526  604470 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-352833/.minikube/key.pem, removing ...
	I1027 19:41:37.641534  604470 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-352833/.minikube/key.pem
	I1027 19:41:37.641561  604470 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21801-352833/.minikube/key.pem (1679 bytes)
	I1027 19:41:37.641631  604470 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21801-352833/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca-key.pem org=jenkins.no-preload-095885 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-095885]
	I1027 19:41:37.972712  604470 provision.go:177] copyRemoteCerts
	I1027 19:41:37.972793  604470 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1027 19:41:37.972845  604470 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-095885
	I1027 19:41:37.992046  604470 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33455 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/no-preload-095885/id_rsa Username:docker}
	I1027 19:41:38.095494  604470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1027 19:41:38.115591  604470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1027 19:41:38.137819  604470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1027 19:41:38.158115  604470 provision.go:87] duration metric: took 536.582587ms to configureAuth
	I1027 19:41:38.158163  604470 ubuntu.go:206] setting minikube options for container-runtime
	I1027 19:41:38.158375  604470 config.go:182] Loaded profile config "no-preload-095885": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:41:38.158491  604470 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-095885
	I1027 19:41:38.179245  604470 main.go:141] libmachine: Using SSH client type: native
	I1027 19:41:38.179483  604470 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33455 <nil> <nil>}
	I1027 19:41:38.179503  604470 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1027 19:41:38.522710  604470 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1027 19:41:38.522739  604470 machine.go:96] duration metric: took 4.407414728s to provisionDockerMachine
	I1027 19:41:38.522754  604470 start.go:293] postStartSetup for "no-preload-095885" (driver="docker")
	I1027 19:41:38.522769  604470 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1027 19:41:38.522844  604470 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1027 19:41:38.522904  604470 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-095885
	I1027 19:41:38.545315  604470 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33455 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/no-preload-095885/id_rsa Username:docker}
	I1027 19:41:38.649488  604470 ssh_runner.go:195] Run: cat /etc/os-release
	I1027 19:41:38.653619  604470 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1027 19:41:38.653659  604470 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1027 19:41:38.653672  604470 filesync.go:126] Scanning /home/jenkins/minikube-integration/21801-352833/.minikube/addons for local assets ...
	I1027 19:41:38.653730  604470 filesync.go:126] Scanning /home/jenkins/minikube-integration/21801-352833/.minikube/files for local assets ...
	I1027 19:41:38.653828  604470 filesync.go:149] local asset: /home/jenkins/minikube-integration/21801-352833/.minikube/files/etc/ssl/certs/3564152.pem -> 3564152.pem in /etc/ssl/certs
	I1027 19:41:38.653958  604470 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1027 19:41:38.662910  604470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/files/etc/ssl/certs/3564152.pem --> /etc/ssl/certs/3564152.pem (1708 bytes)
	I1027 19:41:38.683366  604470 start.go:296] duration metric: took 160.591003ms for postStartSetup
	I1027 19:41:38.683460  604470 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 19:41:38.683508  604470 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-095885
	I1027 19:41:38.702733  604470 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33455 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/no-preload-095885/id_rsa Username:docker}
	I1027 19:41:38.804002  604470 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1027 19:41:38.809096  604470 fix.go:56] duration metric: took 5.05336892s for fixHost
	I1027 19:41:38.809130  604470 start.go:83] releasing machines lock for "no-preload-095885", held for 5.053425647s
	I1027 19:41:38.809225  604470 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-095885
	I1027 19:41:38.827272  604470 ssh_runner.go:195] Run: cat /version.json
	I1027 19:41:38.827356  604470 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1027 19:41:38.827387  604470 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-095885
	I1027 19:41:38.827418  604470 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-095885
	I1027 19:41:38.847513  604470 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33455 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/no-preload-095885/id_rsa Username:docker}
	I1027 19:41:38.847921  604470 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33455 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/no-preload-095885/id_rsa Username:docker}
	I1027 19:41:39.000830  604470 ssh_runner.go:195] Run: systemctl --version
	I1027 19:41:39.008003  604470 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1027 19:41:39.044407  604470 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1027 19:41:39.049507  604470 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1027 19:41:39.049581  604470 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1027 19:41:39.058452  604470 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1027 19:41:39.058481  604470 start.go:495] detecting cgroup driver to use...
	I1027 19:41:39.058522  604470 detect.go:190] detected "systemd" cgroup driver on host os
	I1027 19:41:39.058578  604470 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1027 19:41:39.075128  604470 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1027 19:41:39.089607  604470 docker.go:218] disabling cri-docker service (if available) ...
	I1027 19:41:39.089705  604470 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1027 19:41:39.106103  604470 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1027 19:41:39.120124  604470 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1027 19:41:39.207086  604470 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1027 19:41:39.309063  604470 docker.go:234] disabling docker service ...
	I1027 19:41:39.309129  604470 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1027 19:41:39.330558  604470 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1027 19:41:39.352231  604470 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1027 19:41:39.447280  604470 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1027 19:41:39.539870  604470 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1027 19:41:39.554998  604470 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1027 19:41:39.574582  604470 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1027 19:41:39.574652  604470 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:41:39.586162  604470 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1027 19:41:39.586238  604470 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:41:39.596423  604470 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:41:39.606735  604470 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:41:39.617112  604470 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1027 19:41:39.627091  604470 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:41:39.637722  604470 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:41:39.647475  604470 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:41:39.657461  604470 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1027 19:41:39.665620  604470 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1027 19:41:39.673923  604470 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 19:41:39.785097  604470 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1027 19:41:39.913123  604470 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1027 19:41:39.913197  604470 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1027 19:41:39.918027  604470 start.go:563] Will wait 60s for crictl version
	I1027 19:41:39.918097  604470 ssh_runner.go:195] Run: which crictl
	I1027 19:41:39.922727  604470 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1027 19:41:39.953577  604470 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1027 19:41:39.953737  604470 ssh_runner.go:195] Run: crio --version
	I1027 19:41:39.995993  604470 ssh_runner.go:195] Run: crio --version
	I1027 19:41:40.036496  604470 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1027 19:41:36.063623  594803 pod_ready.go:104] pod "coredns-66bc5c9577-9b9tz" is not "Ready", error: <nil>
	W1027 19:41:38.562890  594803 pod_ready.go:104] pod "coredns-66bc5c9577-9b9tz" is not "Ready", error: <nil>
	W1027 19:41:40.565556  594803 pod_ready.go:104] pod "coredns-66bc5c9577-9b9tz" is not "Ready", error: <nil>
	I1027 19:41:36.560462  565798 cri.go:89] found id: ""
	I1027 19:41:36.560492  565798 logs.go:282] 0 containers: []
	W1027 19:41:36.560504  565798 logs.go:284] No container was found matching "kube-proxy"
	I1027 19:41:36.560512  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 19:41:36.560572  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 19:41:36.590892  565798 cri.go:89] found id: "38a297db55d8c45b09d97e05e26cfe590841ea63fa7ee874cf818f8ae5fcff77"
	I1027 19:41:36.590915  565798 cri.go:89] found id: ""
	I1027 19:41:36.590925  565798 logs.go:282] 1 containers: [38a297db55d8c45b09d97e05e26cfe590841ea63fa7ee874cf818f8ae5fcff77]
	I1027 19:41:36.590990  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:41:36.595427  565798 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 19:41:36.595508  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 19:41:36.625282  565798 cri.go:89] found id: ""
	I1027 19:41:36.625317  565798 logs.go:282] 0 containers: []
	W1027 19:41:36.625329  565798 logs.go:284] No container was found matching "kindnet"
	I1027 19:41:36.625337  565798 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 19:41:36.625387  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 19:41:36.654526  565798 cri.go:89] found id: ""
	I1027 19:41:36.654551  565798 logs.go:282] 0 containers: []
	W1027 19:41:36.654559  565798 logs.go:284] No container was found matching "storage-provisioner"
	I1027 19:41:36.654570  565798 logs.go:123] Gathering logs for container status ...
	I1027 19:41:36.654585  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 19:41:36.686830  565798 logs.go:123] Gathering logs for kubelet ...
	I1027 19:41:36.686863  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 19:41:36.773949  565798 logs.go:123] Gathering logs for dmesg ...
	I1027 19:41:36.773992  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 19:41:36.795686  565798 logs.go:123] Gathering logs for describe nodes ...
	I1027 19:41:36.795715  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 19:41:36.869593  565798 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 19:41:36.869626  565798 logs.go:123] Gathering logs for kube-apiserver [f34fc0e2944cfceb17dc29d80d9459a84377ffb9e427b9efe6e57ccc03a358c8] ...
	I1027 19:41:36.869642  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f34fc0e2944cfceb17dc29d80d9459a84377ffb9e427b9efe6e57ccc03a358c8"
	I1027 19:41:36.904315  565798 logs.go:123] Gathering logs for kube-scheduler [15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8] ...
	I1027 19:41:36.904350  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8"
	I1027 19:41:36.955277  565798 logs.go:123] Gathering logs for kube-controller-manager [38a297db55d8c45b09d97e05e26cfe590841ea63fa7ee874cf818f8ae5fcff77] ...
	I1027 19:41:36.955316  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 38a297db55d8c45b09d97e05e26cfe590841ea63fa7ee874cf818f8ae5fcff77"
	I1027 19:41:36.989612  565798 logs.go:123] Gathering logs for CRI-O ...
	I1027 19:41:36.989642  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 19:41:39.538232  565798 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1027 19:41:40.038022  604470 cli_runner.go:164] Run: docker network inspect no-preload-095885 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 19:41:40.060438  604470 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1027 19:41:40.066124  604470 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 19:41:40.082925  604470 kubeadm.go:883] updating cluster {Name:no-preload-095885 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-095885 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1027 19:41:40.083064  604470 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 19:41:40.083105  604470 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 19:41:40.128492  604470 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 19:41:40.128525  604470 cache_images.go:85] Images are preloaded, skipping loading
	I1027 19:41:40.128535  604470 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1027 19:41:40.128679  604470 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-095885 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-095885 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1027 19:41:40.128786  604470 ssh_runner.go:195] Run: crio config
	I1027 19:41:40.190906  604470 cni.go:84] Creating CNI manager for ""
	I1027 19:41:40.190946  604470 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 19:41:40.190977  604470 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1027 19:41:40.191009  604470 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-095885 NodeName:no-preload-095885 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1027 19:41:40.191306  604470 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-095885"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1027 19:41:40.191421  604470 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1027 19:41:40.203956  604470 binaries.go:44] Found k8s binaries, skipping transfer
	I1027 19:41:40.204041  604470 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1027 19:41:40.215343  604470 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1027 19:41:40.233720  604470 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1027 19:41:40.252697  604470 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1027 19:41:40.272821  604470 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1027 19:41:40.278144  604470 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 19:41:40.291130  604470 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 19:41:40.409925  604470 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 19:41:40.442023  604470 certs.go:69] Setting up /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/no-preload-095885 for IP: 192.168.76.2
	I1027 19:41:40.442046  604470 certs.go:195] generating shared ca certs ...
	I1027 19:41:40.442068  604470 certs.go:227] acquiring lock for ca certs: {Name:mk4bdbca32068f6f817fc35fdc496e961dc3e0d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:41:40.442266  604470 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21801-352833/.minikube/ca.key
	I1027 19:41:40.442349  604470 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21801-352833/.minikube/proxy-client-ca.key
	I1027 19:41:40.442366  604470 certs.go:257] generating profile certs ...
	I1027 19:41:40.442471  604470 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/no-preload-095885/client.key
	I1027 19:41:40.442549  604470 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/no-preload-095885/apiserver.key.e3f5f1b4
	I1027 19:41:40.442592  604470 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/no-preload-095885/proxy-client.key
	I1027 19:41:40.442739  604470 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/356415.pem (1338 bytes)
	W1027 19:41:40.442783  604470 certs.go:480] ignoring /home/jenkins/minikube-integration/21801-352833/.minikube/certs/356415_empty.pem, impossibly tiny 0 bytes
	I1027 19:41:40.442797  604470 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca-key.pem (1675 bytes)
	I1027 19:41:40.442829  604470 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca.pem (1078 bytes)
	I1027 19:41:40.442860  604470 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/cert.pem (1123 bytes)
	I1027 19:41:40.442893  604470 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/key.pem (1679 bytes)
	I1027 19:41:40.442943  604470 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-352833/.minikube/files/etc/ssl/certs/3564152.pem (1708 bytes)
	I1027 19:41:40.443783  604470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1027 19:41:40.472100  604470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1027 19:41:40.499393  604470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1027 19:41:40.537262  604470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1027 19:41:40.579913  604470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/no-preload-095885/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1027 19:41:40.608811  604470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/no-preload-095885/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1027 19:41:40.632386  604470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/no-preload-095885/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1027 19:41:40.656260  604470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/no-preload-095885/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1027 19:41:40.680265  604470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1027 19:41:40.705607  604470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/certs/356415.pem --> /usr/share/ca-certificates/356415.pem (1338 bytes)
	I1027 19:41:40.736685  604470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/files/etc/ssl/certs/3564152.pem --> /usr/share/ca-certificates/3564152.pem (1708 bytes)
	I1027 19:41:40.757100  604470 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1027 19:41:40.772409  604470 ssh_runner.go:195] Run: openssl version
	I1027 19:41:40.780365  604470 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3564152.pem && ln -fs /usr/share/ca-certificates/3564152.pem /etc/ssl/certs/3564152.pem"
	I1027 19:41:40.793422  604470 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3564152.pem
	I1027 19:41:40.799200  604470 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 27 19:02 /usr/share/ca-certificates/3564152.pem
	I1027 19:41:40.799308  604470 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3564152.pem
	I1027 19:41:40.858798  604470 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3564152.pem /etc/ssl/certs/3ec20f2e.0"
	I1027 19:41:40.868961  604470 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1027 19:41:40.880504  604470 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1027 19:41:40.885790  604470 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 27 18:56 /usr/share/ca-certificates/minikubeCA.pem
	I1027 19:41:40.885859  604470 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1027 19:41:40.924743  604470 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1027 19:41:40.934706  604470 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/356415.pem && ln -fs /usr/share/ca-certificates/356415.pem /etc/ssl/certs/356415.pem"
	I1027 19:41:40.946199  604470 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/356415.pem
	I1027 19:41:40.950939  604470 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 27 19:02 /usr/share/ca-certificates/356415.pem
	I1027 19:41:40.951005  604470 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/356415.pem
	I1027 19:41:41.000533  604470 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/356415.pem /etc/ssl/certs/51391683.0"
	I1027 19:41:41.014358  604470 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1027 19:41:41.021053  604470 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1027 19:41:41.084013  604470 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1027 19:41:41.148741  604470 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1027 19:41:41.218562  604470 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1027 19:41:41.288867  604470 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1027 19:41:41.353211  604470 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1027 19:41:41.427442  604470 kubeadm.go:400] StartCluster: {Name:no-preload-095885 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-095885 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 19:41:41.427559  604470 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 19:41:41.427629  604470 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 19:41:41.468277  604470 cri.go:89] found id: "5cea35874d5acf206b55e45b05f38d78ea9509d27b883c670c280fce93719392"
	I1027 19:41:41.468307  604470 cri.go:89] found id: "6027c707b2e6435987becfbc61cef802217623f703bccb12bb5716bc98c873a9"
	I1027 19:41:41.468328  604470 cri.go:89] found id: "b35fe833b6d5250c5b516a89c49b8f3808e23967fa3f1a0150b2cd20ac6d55ea"
	I1027 19:41:41.468332  604470 cri.go:89] found id: "781c3a34fe9cc4350ebd3342ca9b66e12ce9f3e6795ee22c7d4ed1e31f9fcd7c"
	I1027 19:41:41.468337  604470 cri.go:89] found id: ""
	I1027 19:41:41.468385  604470 ssh_runner.go:195] Run: sudo runc list -f json
	W1027 19:41:41.496192  604470 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T19:41:41Z" level=error msg="open /run/runc: no such file or directory"
	I1027 19:41:41.496273  604470 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1027 19:41:41.519529  604470 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1027 19:41:41.519555  604470 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1027 19:41:41.519608  604470 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1027 19:41:41.538281  604470 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1027 19:41:41.539465  604470 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-095885" does not appear in /home/jenkins/minikube-integration/21801-352833/kubeconfig
	I1027 19:41:41.540311  604470 kubeconfig.go:62] /home/jenkins/minikube-integration/21801-352833/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-095885" cluster setting kubeconfig missing "no-preload-095885" context setting]
	I1027 19:41:41.541309  604470 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-352833/kubeconfig: {Name:mk24cbe512a6907c874f3fb7a85390a8f9fd2b31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:41:41.543816  604470 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1027 19:41:41.556019  604470 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1027 19:41:41.556065  604470 kubeadm.go:601] duration metric: took 36.50344ms to restartPrimaryControlPlane
	I1027 19:41:41.556079  604470 kubeadm.go:402] duration metric: took 128.653659ms to StartCluster
	I1027 19:41:41.556104  604470 settings.go:142] acquiring lock: {Name:mk8304c2106bf310642e0949fc0266ccb50f2f95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:41:41.556210  604470 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21801-352833/kubeconfig
	I1027 19:41:41.558163  604470 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-352833/kubeconfig: {Name:mk24cbe512a6907c874f3fb7a85390a8f9fd2b31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:41:41.558563  604470 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 19:41:41.558751  604470 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1027 19:41:41.558843  604470 config.go:182] Loaded profile config "no-preload-095885": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:41:41.558856  604470 addons.go:69] Setting storage-provisioner=true in profile "no-preload-095885"
	I1027 19:41:41.558874  604470 addons.go:238] Setting addon storage-provisioner=true in "no-preload-095885"
	W1027 19:41:41.558881  604470 addons.go:247] addon storage-provisioner should already be in state true
	I1027 19:41:41.558897  604470 addons.go:69] Setting default-storageclass=true in profile "no-preload-095885"
	I1027 19:41:41.558899  604470 addons.go:69] Setting dashboard=true in profile "no-preload-095885"
	I1027 19:41:41.558910  604470 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-095885"
	I1027 19:41:41.558913  604470 host.go:66] Checking if "no-preload-095885" exists ...
	I1027 19:41:41.558923  604470 addons.go:238] Setting addon dashboard=true in "no-preload-095885"
	W1027 19:41:41.558933  604470 addons.go:247] addon dashboard should already be in state true
	I1027 19:41:41.558968  604470 host.go:66] Checking if "no-preload-095885" exists ...
	I1027 19:41:41.559246  604470 cli_runner.go:164] Run: docker container inspect no-preload-095885 --format={{.State.Status}}
	I1027 19:41:41.559442  604470 cli_runner.go:164] Run: docker container inspect no-preload-095885 --format={{.State.Status}}
	I1027 19:41:41.559447  604470 cli_runner.go:164] Run: docker container inspect no-preload-095885 --format={{.State.Status}}
	I1027 19:41:41.560596  604470 out.go:179] * Verifying Kubernetes components...
	I1027 19:41:41.562119  604470 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 19:41:41.595253  604470 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 19:41:41.596954  604470 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 19:41:41.596978  604470 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1027 19:41:41.596977  604470 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1027 19:41:41.597040  604470 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-095885
	I1027 19:41:41.599089  604470 addons.go:238] Setting addon default-storageclass=true in "no-preload-095885"
	W1027 19:41:41.599564  604470 addons.go:247] addon default-storageclass should already be in state true
	I1027 19:41:41.599660  604470 host.go:66] Checking if "no-preload-095885" exists ...
	I1027 19:41:41.600184  604470 cli_runner.go:164] Run: docker container inspect no-preload-095885 --format={{.State.Status}}
	I1027 19:41:41.603401  604470 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1027 19:41:38.300298  601731 out.go:252]   - Booting up control plane ...
	I1027 19:41:38.300417  601731 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1027 19:41:38.300494  601731 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1027 19:41:38.301269  601731 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1027 19:41:38.317615  601731 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1027 19:41:38.317796  601731 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1027 19:41:38.325393  601731 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1027 19:41:38.325697  601731 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1027 19:41:38.325760  601731 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1027 19:41:38.436333  601731 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1027 19:41:38.436537  601731 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1027 19:41:38.938161  601731 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 502.02341ms
	I1027 19:41:38.941231  601731 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1027 19:41:38.941369  601731 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8444/livez
	I1027 19:41:38.941482  601731 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1027 19:41:38.941555  601731 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1027 19:41:41.544505  601731 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.602250744s
	I1027 19:41:41.585376  601731 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.643972656s
	I1027 19:41:43.443119  601731 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.501736668s
	I1027 19:41:43.458876  601731 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1027 19:41:43.475075  601731 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1027 19:41:43.490047  601731 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1027 19:41:43.490469  601731 kubeadm.go:318] [mark-control-plane] Marking the node default-k8s-diff-port-813397 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1027 19:41:43.505486  601731 kubeadm.go:318] [bootstrap-token] Using token: krqx3o.862otuv3ceo9vh3t
	I1027 19:41:41.604616  604470 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1027 19:41:41.604640  604470 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1027 19:41:41.604722  604470 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-095885
	I1027 19:41:41.642076  604470 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1027 19:41:41.642102  604470 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1027 19:41:41.642128  604470 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33455 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/no-preload-095885/id_rsa Username:docker}
	I1027 19:41:41.642178  604470 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-095885
	I1027 19:41:41.649571  604470 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33455 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/no-preload-095885/id_rsa Username:docker}
	I1027 19:41:41.669489  604470 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33455 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/no-preload-095885/id_rsa Username:docker}
	I1027 19:41:41.769731  604470 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 19:41:41.800064  604470 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 19:41:41.803061  604470 node_ready.go:35] waiting up to 6m0s for node "no-preload-095885" to be "Ready" ...
	I1027 19:41:41.815648  604470 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1027 19:41:41.815682  604470 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1027 19:41:41.833776  604470 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1027 19:41:41.833808  604470 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1027 19:41:41.838523  604470 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1027 19:41:41.852112  604470 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1027 19:41:41.852172  604470 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1027 19:41:41.868978  604470 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1027 19:41:41.869012  604470 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1027 19:41:41.886677  604470 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1027 19:41:41.886718  604470 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1027 19:41:41.902981  604470 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1027 19:41:41.903014  604470 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1027 19:41:41.919453  604470 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1027 19:41:41.919481  604470 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1027 19:41:41.934909  604470 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1027 19:41:41.934944  604470 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1027 19:41:41.950310  604470 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1027 19:41:41.950339  604470 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1027 19:41:41.965316  604470 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1027 19:41:43.194555  604470 node_ready.go:49] node "no-preload-095885" is "Ready"
	I1027 19:41:43.194602  604470 node_ready.go:38] duration metric: took 1.391504473s for node "no-preload-095885" to be "Ready" ...
	I1027 19:41:43.194623  604470 api_server.go:52] waiting for apiserver process to appear ...
	I1027 19:41:43.194689  604470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 19:41:43.814690  604470 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.014578847s)
	I1027 19:41:43.814719  604470 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.976163054s)
	I1027 19:41:43.814864  604470 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.849510648s)
	I1027 19:41:43.814904  604470 api_server.go:72] duration metric: took 2.256298641s to wait for apiserver process to appear ...
	I1027 19:41:43.814916  604470 api_server.go:88] waiting for apiserver healthz status ...
	I1027 19:41:43.814944  604470 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1027 19:41:43.816784  604470 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-095885 addons enable metrics-server
	
	I1027 19:41:43.819948  604470 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1027 19:41:43.819980  604470 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1027 19:41:43.822914  604470 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1027 19:41:43.507088  601731 out.go:252]   - Configuring RBAC rules ...
	I1027 19:41:43.507275  601731 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1027 19:41:43.512036  601731 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1027 19:41:43.521909  601731 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1027 19:41:43.525338  601731 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1027 19:41:43.530291  601731 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1027 19:41:43.534973  601731 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1027 19:41:43.852323  601731 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1027 19:41:44.278701  601731 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1027 19:41:44.852103  601731 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1027 19:41:44.852155  601731 kubeadm.go:318] 
	I1027 19:41:44.852218  601731 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1027 19:41:44.852228  601731 kubeadm.go:318] 
	I1027 19:41:44.852323  601731 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1027 19:41:44.852334  601731 kubeadm.go:318] 
	I1027 19:41:44.852367  601731 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1027 19:41:44.852488  601731 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1027 19:41:44.852606  601731 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1027 19:41:44.852620  601731 kubeadm.go:318] 
	I1027 19:41:44.852745  601731 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1027 19:41:44.852764  601731 kubeadm.go:318] 
	I1027 19:41:44.852840  601731 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1027 19:41:44.852846  601731 kubeadm.go:318] 
	I1027 19:41:44.852918  601731 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1027 19:41:44.853072  601731 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1027 19:41:44.853189  601731 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1027 19:41:44.853199  601731 kubeadm.go:318] 
	I1027 19:41:44.853305  601731 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1027 19:41:44.853397  601731 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1027 19:41:44.853405  601731 kubeadm.go:318] 
	I1027 19:41:44.853501  601731 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8444 --token krqx3o.862otuv3ceo9vh3t \
	I1027 19:41:44.853623  601731 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:ab29e81999671591f366788f5ae9ffb132789ebc71f7c0efdaecd38575a5ab6a \
	I1027 19:41:44.853652  601731 kubeadm.go:318] 	--control-plane 
	I1027 19:41:44.853660  601731 kubeadm.go:318] 
	I1027 19:41:44.853756  601731 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1027 19:41:44.853763  601731 kubeadm.go:318] 
	I1027 19:41:44.853857  601731 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8444 --token krqx3o.862otuv3ceo9vh3t \
	I1027 19:41:44.853980  601731 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:ab29e81999671591f366788f5ae9ffb132789ebc71f7c0efdaecd38575a5ab6a 
	I1027 19:41:44.858070  601731 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1042-gcp\n", err: exit status 1
	I1027 19:41:44.858260  601731 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
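
The --discovery-token-ca-cert-hash in the join commands above is a SHA-256 digest of the cluster CA certificate's DER-encoded SubjectPublicKeyInfo. A minimal Go sketch that recomputes it from the CA file on a control-plane node (path taken from the standard kubeadm layout; error handling abbreviated):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Standard kubeadm CA location on the control-plane node.
	data, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA cert.
	fmt.Printf("sha256:%x\n", sha256.Sum256(cert.RawSubjectPublicKeyInfo))
}
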
	I1027 19:41:44.858298  601731 cni.go:84] Creating CNI manager for ""
	I1027 19:41:44.858315  601731 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 19:41:44.860093  601731 out.go:179] * Configuring CNI (Container Networking Interface) ...
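
The cni.go lines above record the selection rule applied here: the "docker" driver combined with a non-docker runtime such as crio gets kindnet. A hypothetical sketch of that decision in Go (function name and signature are illustrative, not minikube's actual API):

package main

import "fmt"

// chooseCNI mirrors the rule logged by cni.go:143 above; this is an
// illustrative reconstruction, not minikube's real implementation.
func chooseCNI(driver, runtime string) string {
	if driver == "docker" && runtime != "docker" {
		return "kindnet"
	}
	return "bridge"
}

func main() {
	fmt.Println(chooseCNI("docker", "crio")) // prints "kindnet", as in the log
}
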
	W1027 19:41:43.072036  594803 pod_ready.go:104] pod "coredns-66bc5c9577-9b9tz" is not "Ready", error: <nil>
	I1027 19:41:45.064331  594803 pod_ready.go:94] pod "coredns-66bc5c9577-9b9tz" is "Ready"
	I1027 19:41:45.064367  594803 pod_ready.go:86] duration metric: took 33.507313991s for pod "coredns-66bc5c9577-9b9tz" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:41:45.067917  594803 pod_ready.go:83] waiting for pod "etcd-embed-certs-919237" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:41:45.073097  594803 pod_ready.go:94] pod "etcd-embed-certs-919237" is "Ready"
	I1027 19:41:45.073166  594803 pod_ready.go:86] duration metric: took 5.183663ms for pod "etcd-embed-certs-919237" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:41:45.076002  594803 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-919237" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:41:45.081205  594803 pod_ready.go:94] pod "kube-apiserver-embed-certs-919237" is "Ready"
	I1027 19:41:45.081236  594803 pod_ready.go:86] duration metric: took 5.199151ms for pod "kube-apiserver-embed-certs-919237" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:41:45.083862  594803 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-919237" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:41:45.261326  594803 pod_ready.go:94] pod "kube-controller-manager-embed-certs-919237" is "Ready"
	I1027 19:41:45.261363  594803 pod_ready.go:86] duration metric: took 177.47609ms for pod "kube-controller-manager-embed-certs-919237" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:41:45.460944  594803 pod_ready.go:83] waiting for pod "kube-proxy-rrq2h" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:41:45.861056  594803 pod_ready.go:94] pod "kube-proxy-rrq2h" is "Ready"
	I1027 19:41:45.861085  594803 pod_ready.go:86] duration metric: took 400.103982ms for pod "kube-proxy-rrq2h" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:41:46.060781  594803 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-919237" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:41:46.460405  594803 pod_ready.go:94] pod "kube-scheduler-embed-certs-919237" is "Ready"
	I1027 19:41:46.460440  594803 pod_ready.go:86] duration metric: took 399.626731ms for pod "kube-scheduler-embed-certs-919237" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:41:46.460457  594803 pod_ready.go:40] duration metric: took 34.907882675s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 19:41:46.509120  594803 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1027 19:41:46.510744  594803 out.go:179] * Done! kubectl is now configured to use "embed-certs-919237" cluster and "default" namespace by default
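
The pod_ready lines above poll each control-plane pod in kube-system until its Ready condition turns true, recording a per-pod duration metric. A minimal client-go sketch of one such wait, assuming a kubeconfig at the default path (the pod name and the 4m0s budget are taken from the log; error handling is abbreviated):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 4*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "etcd-embed-certs-919237", metav1.GetOptions{})
			if err != nil {
				return false, nil // transient errors: keep polling, as the log does
			}
			return podReady(pod), nil
		})
	fmt.Println("ready:", err == nil)
}
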
	I1027 19:41:44.538618  565798 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1027 19:41:44.538685  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 19:41:44.538754  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 19:41:44.570896  565798 cri.go:89] found id: "ca67cda12e0adb415e229ae9e136a15743c92bb79ef8987bb33523c43775a99e"
	I1027 19:41:44.570916  565798 cri.go:89] found id: "f34fc0e2944cfceb17dc29d80d9459a84377ffb9e427b9efe6e57ccc03a358c8"
	I1027 19:41:44.570920  565798 cri.go:89] found id: ""
	I1027 19:41:44.570928  565798 logs.go:282] 2 containers: [ca67cda12e0adb415e229ae9e136a15743c92bb79ef8987bb33523c43775a99e f34fc0e2944cfceb17dc29d80d9459a84377ffb9e427b9efe6e57ccc03a358c8]
	I1027 19:41:44.570991  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:41:44.575567  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:41:44.580108  565798 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 19:41:44.580192  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 19:41:44.610459  565798 cri.go:89] found id: ""
	I1027 19:41:44.610487  565798 logs.go:282] 0 containers: []
	W1027 19:41:44.610495  565798 logs.go:284] No container was found matching "etcd"
	I1027 19:41:44.610501  565798 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 19:41:44.610551  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 19:41:44.645683  565798 cri.go:89] found id: ""
	I1027 19:41:44.645709  565798 logs.go:282] 0 containers: []
	W1027 19:41:44.645718  565798 logs.go:284] No container was found matching "coredns"
	I1027 19:41:44.645724  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 19:41:44.645789  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 19:41:44.682434  565798 cri.go:89] found id: "15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8"
	I1027 19:41:44.682460  565798 cri.go:89] found id: ""
	I1027 19:41:44.682470  565798 logs.go:282] 1 containers: [15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8]
	I1027 19:41:44.682555  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:41:44.687961  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 19:41:44.688032  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 19:41:44.721808  565798 cri.go:89] found id: ""
	I1027 19:41:44.721840  565798 logs.go:282] 0 containers: []
	W1027 19:41:44.721853  565798 logs.go:284] No container was found matching "kube-proxy"
	I1027 19:41:44.721862  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 19:41:44.721927  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 19:41:44.756857  565798 cri.go:89] found id: "4b0186426a494845ce9fa7af7755d0c2f9549f935b11a34bd738219dd3bfd4f5"
	I1027 19:41:44.756883  565798 cri.go:89] found id: "38a297db55d8c45b09d97e05e26cfe590841ea63fa7ee874cf818f8ae5fcff77"
	I1027 19:41:44.756904  565798 cri.go:89] found id: ""
	I1027 19:41:44.756916  565798 logs.go:282] 2 containers: [4b0186426a494845ce9fa7af7755d0c2f9549f935b11a34bd738219dd3bfd4f5 38a297db55d8c45b09d97e05e26cfe590841ea63fa7ee874cf818f8ae5fcff77]
	I1027 19:41:44.756983  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:41:44.761788  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:41:44.766868  565798 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 19:41:44.766946  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 19:41:44.800248  565798 cri.go:89] found id: ""
	I1027 19:41:44.800279  565798 logs.go:282] 0 containers: []
	W1027 19:41:44.800315  565798 logs.go:284] No container was found matching "kindnet"
	I1027 19:41:44.800324  565798 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 19:41:44.800395  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 19:41:44.835666  565798 cri.go:89] found id: ""
	I1027 19:41:44.835706  565798 logs.go:282] 0 containers: []
	W1027 19:41:44.835717  565798 logs.go:284] No container was found matching "storage-provisioner"
	I1027 19:41:44.835734  565798 logs.go:123] Gathering logs for describe nodes ...
	I1027 19:41:44.835749  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
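
The cri.go sequence above enumerates containers one component at a time with "crictl ps -a --quiet --name=<component>"; an empty result (as for etcd, coredns, kube-proxy and kindnet here) simply means no matching container exists on the node yet. A small sketch of the same discovery, assuming crictl and sudo are available:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs returns the container IDs crictl reports for a name filter.
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		fmt.Printf("%s: %d containers %v\n", c, len(ids), ids)
	}
}
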
	I1027 19:41:44.861324  601731 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1027 19:41:44.866168  601731 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1027 19:41:44.866192  601731 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1027 19:41:44.881418  601731 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1027 19:41:45.150403  601731 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1027 19:41:45.150477  601731 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:41:45.150519  601731 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-813397 minikube.k8s.io/updated_at=2025_10_27T19_41_45_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6a09c31caecd3126cc8337ca95e52380a56f5a0f minikube.k8s.io/name=default-k8s-diff-port-813397 minikube.k8s.io/primary=true
	I1027 19:41:45.161982  601731 ops.go:34] apiserver oom_adj: -16
	I1027 19:41:45.254791  601731 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:41:45.754999  601731 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:41:46.255609  601731 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:41:46.755522  601731 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:41:47.255889  601731 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:41:47.755557  601731 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
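
The repeated "kubectl get sa default" runs above, spaced roughly 500ms apart, are a wait for kube-controller-manager to create the default ServiceAccount before kube-system privileges are granted. A rough Go sketch of that retry loop (assumes kubectl on PATH with a working kubeconfig; the 2-minute budget is an illustrative choice):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// Mirrors the logged command; exits cleanly once the SA exists.
		if err := exec.Command("kubectl", "get", "sa", "default").Run(); err == nil {
			fmt.Println("default ServiceAccount present")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for default ServiceAccount")
}
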
	I1027 19:41:43.824287  604470 addons.go:514] duration metric: took 2.26556207s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1027 19:41:44.315425  604470 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1027 19:41:44.320715  604470 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1027 19:41:44.320752  604470 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500 (body identical to the 500 response above)
	I1027 19:41:44.815353  604470 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1027 19:41:44.820262  604470 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1027 19:41:44.821434  604470 api_server.go:141] control plane version: v1.34.1
	I1027 19:41:44.821467  604470 api_server.go:131] duration metric: took 1.006539225s to wait for apiserver health ...
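
The healthz exchange above is a plain HTTPS poll: a 500 whose body lists a failing poststarthook (here rbac/bootstrap-roles) means the apiserver is still starting; a 200 with body "ok" ends the wait. A minimal sketch of the loop (the address comes from the log; TLS verification is skipped purely for brevity, whereas minikube validates against the cluster CA):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for i := 0; i < 20; i++ {
		resp, err := client.Get("https://192.168.76.2:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy:", string(body)) // "ok"
				return
			}
			// A 500 with "[-]poststarthook/... failed" lines means still starting up.
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
}
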
	I1027 19:41:44.821478  604470 system_pods.go:43] waiting for kube-system pods to appear ...
	I1027 19:41:44.825589  604470 system_pods.go:59] 8 kube-system pods found
	I1027 19:41:44.825638  604470 system_pods.go:61] "coredns-66bc5c9577-gwqvg" [3bcd75c1-f42f-4252-b1fc-2bdab3c8373e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 19:41:44.825647  604470 system_pods.go:61] "etcd-no-preload-095885" [398272ac-d5cc-44d6-bf2a-3469d316b417] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1027 19:41:44.825653  604470 system_pods.go:61] "kindnet-8lbz5" [42b05fb3-87d3-412f-ac73-cb73a737aab1] Running
	I1027 19:41:44.825660  604470 system_pods.go:61] "kube-apiserver-no-preload-095885" [d609db88-4097-43b5-b881-a445344edf64] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1027 19:41:44.825669  604470 system_pods.go:61] "kube-controller-manager-no-preload-095885" [b1bfd486-ed1f-4f8b-a08b-de7739f1dd9c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1027 19:41:44.825678  604470 system_pods.go:61] "kube-proxy-wz64m" [339cb07c-5319-4d8b-ab61-a6d377c2bc61] Running
	I1027 19:41:44.825686  604470 system_pods.go:61] "kube-scheduler-no-preload-095885" [7ba1709a-c913-40f3-833b-bee63057ce6e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1027 19:41:44.825698  604470 system_pods.go:61] "storage-provisioner" [e8283562-be98-444b-b591-a0239860e729] Running
	I1027 19:41:44.825709  604470 system_pods.go:74] duration metric: took 4.221591ms to wait for pod list to return data ...
	I1027 19:41:44.825723  604470 default_sa.go:34] waiting for default service account to be created ...
	I1027 19:41:44.828240  604470 default_sa.go:45] found service account: "default"
	I1027 19:41:44.828270  604470 default_sa.go:55] duration metric: took 2.538409ms for default service account to be created ...
	I1027 19:41:44.828282  604470 system_pods.go:116] waiting for k8s-apps to be running ...
	I1027 19:41:44.926381  604470 system_pods.go:86] 8 kube-system pods found
	I1027 19:41:44.926413  604470 system_pods.go:89] "coredns-66bc5c9577-gwqvg" [3bcd75c1-f42f-4252-b1fc-2bdab3c8373e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 19:41:44.926422  604470 system_pods.go:89] "etcd-no-preload-095885" [398272ac-d5cc-44d6-bf2a-3469d316b417] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1027 19:41:44.926428  604470 system_pods.go:89] "kindnet-8lbz5" [42b05fb3-87d3-412f-ac73-cb73a737aab1] Running
	I1027 19:41:44.926434  604470 system_pods.go:89] "kube-apiserver-no-preload-095885" [d609db88-4097-43b5-b881-a445344edf64] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1027 19:41:44.926439  604470 system_pods.go:89] "kube-controller-manager-no-preload-095885" [b1bfd486-ed1f-4f8b-a08b-de7739f1dd9c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1027 19:41:44.926451  604470 system_pods.go:89] "kube-proxy-wz64m" [339cb07c-5319-4d8b-ab61-a6d377c2bc61] Running
	I1027 19:41:44.926456  604470 system_pods.go:89] "kube-scheduler-no-preload-095885" [7ba1709a-c913-40f3-833b-bee63057ce6e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1027 19:41:44.926460  604470 system_pods.go:89] "storage-provisioner" [e8283562-be98-444b-b591-a0239860e729] Running
	I1027 19:41:44.926469  604470 system_pods.go:126] duration metric: took 98.179751ms to wait for k8s-apps to be running ...
	I1027 19:41:44.926480  604470 system_svc.go:44] waiting for kubelet service to be running ....
	I1027 19:41:44.926529  604470 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 19:41:44.941077  604470 system_svc.go:56] duration metric: took 14.581965ms WaitForService to wait for kubelet
	I1027 19:41:44.941113  604470 kubeadm.go:586] duration metric: took 3.382507903s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 19:41:44.941151  604470 node_conditions.go:102] verifying NodePressure condition ...
	I1027 19:41:44.946437  604470 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1027 19:41:44.946470  604470 node_conditions.go:123] node cpu capacity is 8
	I1027 19:41:44.946483  604470 node_conditions.go:105] duration metric: took 5.326508ms to run NodePressure ...
	I1027 19:41:44.946497  604470 start.go:241] waiting for startup goroutines ...
	I1027 19:41:44.946504  604470 start.go:246] waiting for cluster config update ...
	I1027 19:41:44.946514  604470 start.go:255] writing updated cluster config ...
	I1027 19:41:44.946761  604470 ssh_runner.go:195] Run: rm -f paused
	I1027 19:41:44.952271  604470 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 19:41:44.957117  604470 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-gwqvg" in "kube-system" namespace to be "Ready" or be gone ...
	W1027 19:41:46.963263  604470 pod_ready.go:104] pod "coredns-66bc5c9577-gwqvg" is not "Ready", error: <nil>
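
The node_conditions lines above read the node's status to confirm there is no Memory/Disk/PID pressure and to report capacity (8 CPUs and ~304Gi of ephemeral storage on this runner). A client-go sketch of the same check (default kubeconfig assumed; output format is illustrative):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n",
			n.Name, n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
		for _, c := range n.Status.Conditions {
			// The pressure conditions must all be False for the check to pass.
			switch c.Type {
			case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
				fmt.Printf("  %s=%s\n", c.Type, c.Status)
			}
		}
	}
}
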
	I1027 19:41:48.255289  601731 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:41:48.755892  601731 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:41:49.255340  601731 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:41:49.333754  601731 kubeadm.go:1113] duration metric: took 4.183323316s to wait for elevateKubeSystemPrivileges
	I1027 19:41:49.333798  601731 kubeadm.go:402] duration metric: took 15.685937442s to StartCluster
	I1027 19:41:49.333821  601731 settings.go:142] acquiring lock: {Name:mk8304c2106bf310642e0949fc0266ccb50f2f95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:41:49.333908  601731 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21801-352833/kubeconfig
	I1027 19:41:49.336376  601731 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-352833/kubeconfig: {Name:mk24cbe512a6907c874f3fb7a85390a8f9fd2b31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:41:49.336733  601731 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 19:41:49.336753  601731 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1027 19:41:49.336768  601731 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1027 19:41:49.336883  601731 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-813397"
	I1027 19:41:49.336906  601731 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-813397"
	I1027 19:41:49.336944  601731 host.go:66] Checking if "default-k8s-diff-port-813397" exists ...
	I1027 19:41:49.336961  601731 config.go:182] Loaded profile config "default-k8s-diff-port-813397": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:41:49.337020  601731 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-813397"
	I1027 19:41:49.337077  601731 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-813397"
	I1027 19:41:49.337585  601731 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-813397 --format={{.State.Status}}
	I1027 19:41:49.337601  601731 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-813397 --format={{.State.Status}}
	I1027 19:41:49.338721  601731 out.go:179] * Verifying Kubernetes components...
	I1027 19:41:49.340417  601731 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 19:41:49.366581  601731 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 19:41:49.368484  601731 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 19:41:49.368512  601731 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1027 19:41:49.368577  601731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-813397
	I1027 19:41:49.368993  601731 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-813397"
	I1027 19:41:49.369042  601731 host.go:66] Checking if "default-k8s-diff-port-813397" exists ...
	I1027 19:41:49.369588  601731 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-813397 --format={{.State.Status}}
	I1027 19:41:49.403359  601731 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1027 19:41:49.403384  601731 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1027 19:41:49.403449  601731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-813397
	I1027 19:41:49.404410  601731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33450 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/default-k8s-diff-port-813397/id_rsa Username:docker}
	I1027 19:41:49.428863  601731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33450 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/default-k8s-diff-port-813397/id_rsa Username:docker}
	I1027 19:41:49.444289  601731 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1027 19:41:49.509786  601731 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 19:41:49.543593  601731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 19:41:49.558735  601731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1027 19:41:49.669901  601731 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-813397" to be "Ready" ...
	I1027 19:41:49.670465  601731 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1027 19:41:49.910815  601731 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1027 19:41:49.911962  601731 addons.go:514] duration metric: took 575.181626ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1027 19:41:50.176449  601731 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-813397" context rescaled to 1 replicas
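
The kapi.go line above rescales the coredns Deployment to a single replica, as minikube does for single-node clusters. A sketch of that step through client-go's scale subresource (default kubeconfig assumed; error handling abbreviated):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()

	scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	scale.Spec.Replicas = 1
	if _, err := cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("coredns rescaled to 1 replica")
}
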
	W1027 19:41:51.673949  601731 node_ready.go:57] node "default-k8s-diff-port-813397" has "Ready":"False" status (will retry)
	W1027 19:41:48.963530  604470 pod_ready.go:104] pod "coredns-66bc5c9577-gwqvg" is not "Ready", error: <nil>
	W1027 19:41:50.963609  604470 pod_ready.go:104] pod "coredns-66bc5c9577-gwqvg" is not "Ready", error: <nil>
	W1027 19:41:52.963991  604470 pod_ready.go:104] pod "coredns-66bc5c9577-gwqvg" is not "Ready", error: <nil>
	I1027 19:41:54.914575  565798 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.078798379s)
	W1027 19:41:54.914611  565798 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1027 19:41:54.914619  565798 logs.go:123] Gathering logs for kube-apiserver [ca67cda12e0adb415e229ae9e136a15743c92bb79ef8987bb33523c43775a99e] ...
	I1027 19:41:54.914633  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ca67cda12e0adb415e229ae9e136a15743c92bb79ef8987bb33523c43775a99e"
	I1027 19:41:54.948527  565798 logs.go:123] Gathering logs for kube-apiserver [f34fc0e2944cfceb17dc29d80d9459a84377ffb9e427b9efe6e57ccc03a358c8] ...
	I1027 19:41:54.948570  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f34fc0e2944cfceb17dc29d80d9459a84377ffb9e427b9efe6e57ccc03a358c8"
	I1027 19:41:54.984187  565798 logs.go:123] Gathering logs for kube-controller-manager [38a297db55d8c45b09d97e05e26cfe590841ea63fa7ee874cf818f8ae5fcff77] ...
	I1027 19:41:54.984223  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 38a297db55d8c45b09d97e05e26cfe590841ea63fa7ee874cf818f8ae5fcff77"
	I1027 19:41:55.013391  565798 logs.go:123] Gathering logs for CRI-O ...
	I1027 19:41:55.013427  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 19:41:55.066061  565798 logs.go:123] Gathering logs for container status ...
	I1027 19:41:55.066107  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 19:41:55.099106  565798 logs.go:123] Gathering logs for kubelet ...
	I1027 19:41:55.099154  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 19:41:55.196824  565798 logs.go:123] Gathering logs for dmesg ...
	I1027 19:41:55.196863  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 19:41:55.217221  565798 logs.go:123] Gathering logs for kube-scheduler [15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8] ...
	I1027 19:41:55.217262  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8"
	I1027 19:41:55.269370  565798 logs.go:123] Gathering logs for kube-controller-manager [4b0186426a494845ce9fa7af7755d0c2f9549f935b11a34bd738219dd3bfd4f5] ...
	I1027 19:41:55.269416  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4b0186426a494845ce9fa7af7755d0c2f9549f935b11a34bd738219dd3bfd4f5"
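
The logs.go pass above gathers a fixed 400-line tail from each source: per-container logs via crictl, the crio and kubelet journals, and dmesg. A compact sketch of the same collection run locally (assumes systemd journals, crictl and sudo are present):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same sources as the gathering pass above, 400-line tails.
	cmds := [][]string{
		{"journalctl", "-u", "kubelet", "-n", "400"},
		{"journalctl", "-u", "crio", "-n", "400"},
		{"crictl", "ps", "-a"},
	}
	for _, c := range cmds {
		out, err := exec.Command("sudo", c...).CombinedOutput()
		fmt.Printf("== %v: err=%v, %d bytes captured\n", c, err, len(out))
	}
}
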
	W1027 19:41:53.674287  601731 node_ready.go:57] node "default-k8s-diff-port-813397" has "Ready":"False" status (will retry)
	W1027 19:41:56.173594  601731 node_ready.go:57] node "default-k8s-diff-port-813397" has "Ready":"False" status (will retry)
	W1027 19:41:55.462229  604470 pod_ready.go:104] pod "coredns-66bc5c9577-gwqvg" is not "Ready", error: <nil>
	W1027 19:41:57.464007  604470 pod_ready.go:104] pod "coredns-66bc5c9577-gwqvg" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Oct 27 19:41:33 embed-certs-919237 crio[562]: time="2025-10-27T19:41:33.633898461Z" level=info msg="Started container" PID=1741 containerID=f70805b0b88103b08166e7fb24c18ab35ac0ae9d3e987fd54ce24c8fe1b50a8f description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qb5z6/dashboard-metrics-scraper id=b1d90286-0d4c-47b8-b35e-e3af644f7cf7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1121eaa687445082ad2164e1d4dc89ed12615bcd2dd456384d547490ee0c7b81
	Oct 27 19:41:33 embed-certs-919237 crio[562]: time="2025-10-27T19:41:33.68140667Z" level=info msg="Removing container: 607816533ca5535179033ea14ae82c8f1c3039cada24e488c97062628661396f" id=70a0873f-da21-4f05-a522-539b9cb28127 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 27 19:41:33 embed-certs-919237 crio[562]: time="2025-10-27T19:41:33.696725047Z" level=info msg="Removed container 607816533ca5535179033ea14ae82c8f1c3039cada24e488c97062628661396f: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qb5z6/dashboard-metrics-scraper" id=70a0873f-da21-4f05-a522-539b9cb28127 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 27 19:41:41 embed-certs-919237 crio[562]: time="2025-10-27T19:41:41.706583157Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=7edb807b-576a-46af-839b-32a167546bea name=/runtime.v1.ImageService/ImageStatus
	Oct 27 19:41:41 embed-certs-919237 crio[562]: time="2025-10-27T19:41:41.708119221Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=fed2c443-d141-4132-b8e6-e09560cb6b80 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 19:41:41 embed-certs-919237 crio[562]: time="2025-10-27T19:41:41.71015571Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=b10b0aec-5893-4c36-b496-e7cdcea0e1df name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 19:41:41 embed-certs-919237 crio[562]: time="2025-10-27T19:41:41.710379932Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:41:41 embed-certs-919237 crio[562]: time="2025-10-27T19:41:41.719496525Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:41:41 embed-certs-919237 crio[562]: time="2025-10-27T19:41:41.719738577Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/5ff25519adabaf3f994071fdc4fd8066ef7900d1fb52a28fcf21a8fd6089bc16/merged/etc/passwd: no such file or directory"
	Oct 27 19:41:41 embed-certs-919237 crio[562]: time="2025-10-27T19:41:41.719785172Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/5ff25519adabaf3f994071fdc4fd8066ef7900d1fb52a28fcf21a8fd6089bc16/merged/etc/group: no such file or directory"
	Oct 27 19:41:41 embed-certs-919237 crio[562]: time="2025-10-27T19:41:41.720120631Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:41:41 embed-certs-919237 crio[562]: time="2025-10-27T19:41:41.764562884Z" level=info msg="Created container 039af7dcecc8a433ded3d11e5ded2256d549ee2d08a3ebb68b26fce310e7bc20: kube-system/storage-provisioner/storage-provisioner" id=b10b0aec-5893-4c36-b496-e7cdcea0e1df name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 19:41:41 embed-certs-919237 crio[562]: time="2025-10-27T19:41:41.766574379Z" level=info msg="Starting container: 039af7dcecc8a433ded3d11e5ded2256d549ee2d08a3ebb68b26fce310e7bc20" id=0ed0e154-2d70-4eef-9c41-afb3c14df8de name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 19:41:41 embed-certs-919237 crio[562]: time="2025-10-27T19:41:41.769813639Z" level=info msg="Started container" PID=1755 containerID=039af7dcecc8a433ded3d11e5ded2256d549ee2d08a3ebb68b26fce310e7bc20 description=kube-system/storage-provisioner/storage-provisioner id=0ed0e154-2d70-4eef-9c41-afb3c14df8de name=/runtime.v1.RuntimeService/StartContainer sandboxID=4e5e19a9b8e1f5a7f24e4acbb89c648fc78cb8cb1c6415f77ef836545f40a990
	Oct 27 19:41:56 embed-certs-919237 crio[562]: time="2025-10-27T19:41:56.572516552Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=3c84f4fd-b22d-437a-a954-3c0c53bace92 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 19:41:56 embed-certs-919237 crio[562]: time="2025-10-27T19:41:56.573797233Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=d6bca718-82fc-4cae-ba30-2389428a467e name=/runtime.v1.ImageService/ImageStatus
	Oct 27 19:41:56 embed-certs-919237 crio[562]: time="2025-10-27T19:41:56.575036695Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qb5z6/dashboard-metrics-scraper" id=40aa1a56-9beb-45a7-b8e3-ee909c2e390b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 19:41:56 embed-certs-919237 crio[562]: time="2025-10-27T19:41:56.575206032Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:41:56 embed-certs-919237 crio[562]: time="2025-10-27T19:41:56.582199622Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:41:56 embed-certs-919237 crio[562]: time="2025-10-27T19:41:56.582926683Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:41:56 embed-certs-919237 crio[562]: time="2025-10-27T19:41:56.613979854Z" level=info msg="Created container 2796a5fed0754fd4b112fae38588dfe25b86705e56508393208766dc3b088d33: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qb5z6/dashboard-metrics-scraper" id=40aa1a56-9beb-45a7-b8e3-ee909c2e390b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 19:41:56 embed-certs-919237 crio[562]: time="2025-10-27T19:41:56.614809567Z" level=info msg="Starting container: 2796a5fed0754fd4b112fae38588dfe25b86705e56508393208766dc3b088d33" id=808d8052-d27b-4694-b551-0128bb25d4e1 name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 19:41:56 embed-certs-919237 crio[562]: time="2025-10-27T19:41:56.616670453Z" level=info msg="Started container" PID=1788 containerID=2796a5fed0754fd4b112fae38588dfe25b86705e56508393208766dc3b088d33 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qb5z6/dashboard-metrics-scraper id=808d8052-d27b-4694-b551-0128bb25d4e1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1121eaa687445082ad2164e1d4dc89ed12615bcd2dd456384d547490ee0c7b81
	Oct 27 19:41:56 embed-certs-919237 crio[562]: time="2025-10-27T19:41:56.751448703Z" level=info msg="Removing container: f70805b0b88103b08166e7fb24c18ab35ac0ae9d3e987fd54ce24c8fe1b50a8f" id=710e56e9-257a-4d75-acf2-8240a5659b13 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 27 19:41:56 embed-certs-919237 crio[562]: time="2025-10-27T19:41:56.764683393Z" level=info msg="Removed container f70805b0b88103b08166e7fb24c18ab35ac0ae9d3e987fd54ce24c8fe1b50a8f: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qb5z6/dashboard-metrics-scraper" id=710e56e9-257a-4d75-acf2-8240a5659b13 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	2796a5fed0754       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           4 seconds ago       Exited              dashboard-metrics-scraper   3                   1121eaa687445       dashboard-metrics-scraper-6ffb444bf9-qb5z6   kubernetes-dashboard
	039af7dcecc8a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           19 seconds ago      Running             storage-provisioner         1                   4e5e19a9b8e1f       storage-provisioner                          kube-system
	121601c64b1f8       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   40 seconds ago      Running             kubernetes-dashboard        0                   5bca0b94b7119       kubernetes-dashboard-855c9754f9-sctm4        kubernetes-dashboard
	7e47ca072fa91       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           50 seconds ago      Running             coredns                     0                   6b7bb63d45217       coredns-66bc5c9577-9b9tz                     kube-system
	6311ca5e86acb       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           50 seconds ago      Running             busybox                     1                   137559fcc7bae       busybox                                      default
	289d461e95e5c       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           50 seconds ago      Running             kindnet-cni                 0                   fad710d9d64d2       kindnet-6jx4q                                kube-system
	11808765eb85f       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           50 seconds ago      Running             kube-proxy                  0                   c7e57c3fd7398       kube-proxy-rrq2h                             kube-system
	ae6c32d15d0a3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           50 seconds ago      Exited              storage-provisioner         0                   4e5e19a9b8e1f       storage-provisioner                          kube-system
	d5a5c65a74b4b       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           53 seconds ago      Running             etcd                        0                   25c09d0d6cb26       etcd-embed-certs-919237                      kube-system
	f0dcb6f33c4a1       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           53 seconds ago      Running             kube-controller-manager     0                   6de95d026b3ce       kube-controller-manager-embed-certs-919237   kube-system
	d17bd312e4c2b       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           53 seconds ago      Running             kube-scheduler              0                   e0f4890391b83       kube-scheduler-embed-certs-919237            kube-system
	31682e1eceede       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           53 seconds ago      Running             kube-apiserver              0                   8f05230bb8da1       kube-apiserver-embed-certs-919237            kube-system
	
	
	==> coredns [7e47ca072fa9116cec1fe31e6e1e2cc19a4993f2a1a0cb5170d906761e491b77] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:34662 - 15118 "HINFO IN 955905167667149728.6821744566514543240. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.062006688s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
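
The kubernetes-plugin errors above are ordinary connect timeouts to the in-cluster apiserver Service address (10.96.0.1:443), seen while Service routing was not yet in place. A one-file sketch of the same reachability probe, as it would run from inside a pod:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// 10.96.0.1:443 is the default kubernetes Service address from the log.
	conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 5*time.Second)
	if err != nil {
		fmt.Println("unreachable:", err) // surfaces as "i/o timeout", as above
		return
	}
	conn.Close()
	fmt.Println("reachable")
}
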
	
	
	==> describe nodes <==
	Name:               embed-certs-919237
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-919237
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a09c31caecd3126cc8337ca95e52380a56f5a0f
	                    minikube.k8s.io/name=embed-certs-919237
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_27T19_40_14_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Oct 2025 19:40:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-919237
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Oct 2025 19:41:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Oct 2025 19:41:40 +0000   Mon, 27 Oct 2025 19:40:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Oct 2025 19:41:40 +0000   Mon, 27 Oct 2025 19:40:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Oct 2025 19:41:40 +0000   Mon, 27 Oct 2025 19:40:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Oct 2025 19:41:40 +0000   Mon, 27 Oct 2025 19:40:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    embed-certs-919237
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                2eeadcca-8dc6-4ff3-aae9-45c8a87361ee
	  Boot ID:                    811bd29c-e64e-4acc-9427-bab1f7caed93
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 coredns-66bc5c9577-9b9tz                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     102s
	  kube-system                 etcd-embed-certs-919237                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         108s
	  kube-system                 kindnet-6jx4q                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      102s
	  kube-system                 kube-apiserver-embed-certs-919237             250m (3%)     0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-controller-manager-embed-certs-919237    200m (2%)     0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-proxy-rrq2h                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 kube-scheduler-embed-certs-919237             100m (1%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-qb5z6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-sctm4         0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 101s               kube-proxy       
	  Normal  Starting                 50s                kube-proxy       
	  Normal  NodeHasSufficientMemory  108s               kubelet          Node embed-certs-919237 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    108s               kubelet          Node embed-certs-919237 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     108s               kubelet          Node embed-certs-919237 status is now: NodeHasSufficientPID
	  Normal  Starting                 108s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           103s               node-controller  Node embed-certs-919237 event: Registered Node embed-certs-919237 in Controller
	  Normal  NodeReady                91s                kubelet          Node embed-certs-919237 status is now: NodeReady
	  Normal  Starting                 54s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  54s (x8 over 54s)  kubelet          Node embed-certs-919237 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    54s (x8 over 54s)  kubelet          Node embed-certs-919237 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     54s (x8 over 54s)  kubelet          Node embed-certs-919237 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           48s                node-controller  Node embed-certs-919237 event: Registered Node embed-certs-919237 in Controller
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5a 23 52 43 9a ba 08 06
	[  +0.000398] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 12 50 95 0e df 53 08 06
	[Oct27 18:59] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +1.017295] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +1.023893] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +1.023882] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +1.023891] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +1.023851] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +2.047849] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +4.031592] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +8.319143] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[ +16.382183] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[Oct27 19:00] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	
	
	==> etcd [d5a5c65a74b4b0bac782941ddf5cfc5e1c95eb29dbc563a89bc74143a3d75be8] <==
	{"level":"warn","ts":"2025-10-27T19:41:09.408337Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60170","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:41:09.414616Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:41:09.421711Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:41:09.427986Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:41:09.434622Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:41:09.440869Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60270","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:41:09.447427Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:41:09.460694Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:41:09.467006Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:41:09.473982Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:41:09.480851Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:41:09.494018Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:41:09.502213Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:41:09.512788Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:41:09.555193Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60468","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-27T19:41:27.443077Z","caller":"traceutil/trace.go:172","msg":"trace[1490949874] transaction","detail":"{read_only:false; response_revision:593; number_of_response:1; }","duration":"127.606157ms","start":"2025-10-27T19:41:27.315448Z","end":"2025-10-27T19:41:27.443054Z","steps":["trace[1490949874] 'process raft request'  (duration: 127.408327ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-27T19:41:27.736953Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"177.870522ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-9b9tz\" limit:1 ","response":"range_response_count:1 size:5934"}
	{"level":"info","ts":"2025-10-27T19:41:27.737039Z","caller":"traceutil/trace.go:172","msg":"trace[355912980] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-9b9tz; range_end:; response_count:1; response_revision:593; }","duration":"177.988219ms","start":"2025-10-27T19:41:27.559037Z","end":"2025-10-27T19:41:27.737025Z","steps":["trace[355912980] 'range keys from in-memory index tree'  (duration: 177.728439ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T19:41:28.112112Z","caller":"traceutil/trace.go:172","msg":"trace[668865601] linearizableReadLoop","detail":"{readStateIndex:623; appliedIndex:623; }","duration":"181.011244ms","start":"2025-10-27T19:41:27.931067Z","end":"2025-10-27T19:41:28.112078Z","steps":["trace[668865601] 'read index received'  (duration: 180.996694ms)","trace[668865601] 'applied index is now lower than readState.Index'  (duration: 12.78µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-27T19:41:28.112245Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"181.156974ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-27T19:41:28.112299Z","caller":"traceutil/trace.go:172","msg":"trace[1292005042] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:593; }","duration":"181.227992ms","start":"2025-10-27T19:41:27.931055Z","end":"2025-10-27T19:41:28.112283Z","steps":["trace[1292005042] 'agreement among raft nodes before linearized reading'  (duration: 181.108114ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T19:41:28.112382Z","caller":"traceutil/trace.go:172","msg":"trace[1930864654] transaction","detail":"{read_only:false; response_revision:594; number_of_response:1; }","duration":"222.973753ms","start":"2025-10-27T19:41:27.889397Z","end":"2025-10-27T19:41:28.112371Z","steps":["trace[1930864654] 'process raft request'  (duration: 222.783092ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-27T19:41:28.324913Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"114.986421ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-27T19:41:28.325012Z","caller":"traceutil/trace.go:172","msg":"trace[862790909] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:595; }","duration":"115.094657ms","start":"2025-10-27T19:41:28.209899Z","end":"2025-10-27T19:41:28.324993Z","steps":["trace[862790909] 'agreement among raft nodes before linearized reading'  (duration: 84.950967ms)","trace[862790909] 'range keys from in-memory index tree'  (duration: 30.012146ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-27T19:41:28.325072Z","caller":"traceutil/trace.go:172","msg":"trace[420978482] transaction","detail":"{read_only:false; response_revision:596; number_of_response:1; }","duration":"173.831689ms","start":"2025-10-27T19:41:28.151221Z","end":"2025-10-27T19:41:28.325053Z","steps":["trace[420978482] 'process raft request'  (duration: 143.678051ms)","trace[420978482] 'compare'  (duration: 30.036536ms)"],"step_count":2}
	
	
	==> kernel <==
	 19:42:01 up  2:24,  0 user,  load average: 4.75, 3.45, 2.20
	Linux embed-certs-919237 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [289d461e95e5c9245c97d39c39a8fdc2ca0d89a5aaf6adc05990cee406a99fc5] <==
	I1027 19:41:11.156789       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1027 19:41:11.157056       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1027 19:41:11.157297       1 main.go:148] setting mtu 1500 for CNI 
	I1027 19:41:11.157321       1 main.go:178] kindnetd IP family: "ipv4"
	I1027 19:41:11.157356       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-27T19:41:11Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1027 19:41:11.358633       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1027 19:41:11.359863       1 controller.go:381] "Waiting for informer caches to sync"
	I1027 19:41:11.359904       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1027 19:41:11.360017       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1027 19:41:11.814215       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1027 19:41:11.814243       1 metrics.go:72] Registering metrics
	I1027 19:41:11.814330       1 controller.go:711] "Syncing nftables rules"
	I1027 19:41:21.358406       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1027 19:41:21.358473       1 main.go:301] handling current node
	I1027 19:41:31.360276       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1027 19:41:31.360332       1 main.go:301] handling current node
	I1027 19:41:41.359353       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1027 19:41:41.359398       1 main.go:301] handling current node
	I1027 19:41:51.364213       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1027 19:41:51.364263       1 main.go:301] handling current node
	I1027 19:42:01.362248       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1027 19:42:01.362281       1 main.go:301] handling current node
	
	
	==> kube-apiserver [31682e1eceede1979fd31aa2e96a71541d29f7d036de012b0c0a406025482670] <==
	I1027 19:41:10.037965       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1027 19:41:10.038245       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1027 19:41:10.038272       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1027 19:41:10.038371       1 aggregator.go:171] initial CRD sync complete...
	I1027 19:41:10.038380       1 autoregister_controller.go:144] Starting autoregister controller
	I1027 19:41:10.038386       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1027 19:41:10.038392       1 cache.go:39] Caches are synced for autoregister controller
	I1027 19:41:10.044601       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1027 19:41:10.045913       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1027 19:41:10.056368       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1027 19:41:10.056409       1 policy_source.go:240] refreshing policies
	I1027 19:41:10.075919       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1027 19:41:10.089623       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 19:41:10.305951       1 controller.go:667] quota admission added evaluator for: namespaces
	I1027 19:41:10.338077       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1027 19:41:10.360039       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1027 19:41:10.370474       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1027 19:41:10.379070       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1027 19:41:10.414697       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.121.163"}
	I1027 19:41:10.427826       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.242.249"}
	I1027 19:41:10.941920       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1027 19:41:13.828682       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1027 19:41:13.875561       1 controller.go:667] quota admission added evaluator for: endpoints
	I1027 19:41:13.875560       1 controller.go:667] quota admission added evaluator for: endpoints
	I1027 19:41:13.924993       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
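	
	Note: the two alloc.go entries above record the ClusterIPs assigned to the dashboard services (10.100.121.163 and 10.101.242.249). While the test context still exists they can be cross-checked against the live cluster, e.g. (a sketch):
	
	    kubectl --context embed-certs-919237 -n kubernetes-dashboard get svc -o wide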
	
	
	==> kube-controller-manager [f0dcb6f33c4a16c8aabf1c9522c219dfe57ce0438d6eedb8d11b3bbed06bf220] <==
	I1027 19:41:13.359395       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1027 19:41:13.359403       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1027 19:41:13.361512       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1027 19:41:13.364768       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1027 19:41:13.371985       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1027 19:41:13.372015       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1027 19:41:13.372051       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1027 19:41:13.372101       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1027 19:41:13.372126       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1027 19:41:13.372196       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1027 19:41:13.372307       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1027 19:41:13.372398       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1027 19:41:13.372414       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-919237"
	I1027 19:41:13.372468       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1027 19:41:13.372486       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1027 19:41:13.374032       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1027 19:41:13.376295       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1027 19:41:13.376367       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 19:41:13.376382       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1027 19:41:13.376394       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1027 19:41:13.378087       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 19:41:13.378183       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 19:41:13.378396       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1027 19:41:13.395928       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 19:41:13.407997       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	
	
	==> kube-proxy [11808765eb85f990868220937b5849982fa806cf6e9924886c92e66e31f11278] <==
	I1027 19:41:10.970176       1 server_linux.go:53] "Using iptables proxy"
	I1027 19:41:11.041597       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1027 19:41:11.142128       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1027 19:41:11.142175       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1027 19:41:11.142270       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1027 19:41:11.164955       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1027 19:41:11.165035       1 server_linux.go:132] "Using iptables Proxier"
	I1027 19:41:11.171471       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1027 19:41:11.172053       1 server.go:527] "Version info" version="v1.34.1"
	I1027 19:41:11.172115       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 19:41:11.174124       1 config.go:200] "Starting service config controller"
	I1027 19:41:11.174716       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1027 19:41:11.174211       1 config.go:403] "Starting serviceCIDR config controller"
	I1027 19:41:11.174747       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1027 19:41:11.174238       1 config.go:106] "Starting endpoint slice config controller"
	I1027 19:41:11.174770       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1027 19:41:11.174490       1 config.go:309] "Starting node config controller"
	I1027 19:41:11.174780       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1027 19:41:11.174786       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1027 19:41:11.274923       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1027 19:41:11.274942       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1027 19:41:11.274986       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [d17bd312e4c2b6e68ce5e1c0006ad10d3d74b77c3bc3e8570e4526763c6914a9] <==
	I1027 19:41:08.557058       1 serving.go:386] Generated self-signed cert in-memory
	W1027 19:41:09.963464       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1027 19:41:09.963499       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1027 19:41:09.963523       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1027 19:41:09.963534       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1027 19:41:10.005975       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1027 19:41:10.006008       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 19:41:10.015388       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1027 19:41:10.015988       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 19:41:10.016045       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 19:41:10.016096       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1027 19:41:10.019612       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	I1027 19:41:10.116229       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 27 19:41:14 embed-certs-919237 kubelet[720]: I1027 19:41:14.883222     720 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 27 19:41:17 embed-certs-919237 kubelet[720]: I1027 19:41:17.625944     720 scope.go:117] "RemoveContainer" containerID="0a9341ea4c1d6d89534690aa36d40f6987355ccc1e64e5063dca8b719048370c"
	Oct 27 19:41:18 embed-certs-919237 kubelet[720]: I1027 19:41:18.631128     720 scope.go:117] "RemoveContainer" containerID="0a9341ea4c1d6d89534690aa36d40f6987355ccc1e64e5063dca8b719048370c"
	Oct 27 19:41:18 embed-certs-919237 kubelet[720]: I1027 19:41:18.631296     720 scope.go:117] "RemoveContainer" containerID="607816533ca5535179033ea14ae82c8f1c3039cada24e488c97062628661396f"
	Oct 27 19:41:18 embed-certs-919237 kubelet[720]: E1027 19:41:18.631494     720 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qb5z6_kubernetes-dashboard(d40c29c2-2116-4b6c-bb4b-3fceda111717)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qb5z6" podUID="d40c29c2-2116-4b6c-bb4b-3fceda111717"
	Oct 27 19:41:19 embed-certs-919237 kubelet[720]: I1027 19:41:19.636185     720 scope.go:117] "RemoveContainer" containerID="607816533ca5535179033ea14ae82c8f1c3039cada24e488c97062628661396f"
	Oct 27 19:41:19 embed-certs-919237 kubelet[720]: E1027 19:41:19.636386     720 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qb5z6_kubernetes-dashboard(d40c29c2-2116-4b6c-bb4b-3fceda111717)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qb5z6" podUID="d40c29c2-2116-4b6c-bb4b-3fceda111717"
	Oct 27 19:41:21 embed-certs-919237 kubelet[720]: I1027 19:41:21.673860     720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-sctm4" podStartSLOduration=2.09079402 podStartE2EDuration="8.673836702s" podCreationTimestamp="2025-10-27 19:41:13 +0000 UTC" firstStartedPulling="2025-10-27 19:41:14.359582828 +0000 UTC m=+6.892491790" lastFinishedPulling="2025-10-27 19:41:20.942625499 +0000 UTC m=+13.475534472" observedRunningTime="2025-10-27 19:41:21.673503947 +0000 UTC m=+14.206412928" watchObservedRunningTime="2025-10-27 19:41:21.673836702 +0000 UTC m=+14.206745698"
	Oct 27 19:41:22 embed-certs-919237 kubelet[720]: I1027 19:41:22.314436     720 scope.go:117] "RemoveContainer" containerID="607816533ca5535179033ea14ae82c8f1c3039cada24e488c97062628661396f"
	Oct 27 19:41:22 embed-certs-919237 kubelet[720]: E1027 19:41:22.314661     720 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qb5z6_kubernetes-dashboard(d40c29c2-2116-4b6c-bb4b-3fceda111717)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qb5z6" podUID="d40c29c2-2116-4b6c-bb4b-3fceda111717"
	Oct 27 19:41:33 embed-certs-919237 kubelet[720]: I1027 19:41:33.571739     720 scope.go:117] "RemoveContainer" containerID="607816533ca5535179033ea14ae82c8f1c3039cada24e488c97062628661396f"
	Oct 27 19:41:33 embed-certs-919237 kubelet[720]: I1027 19:41:33.679012     720 scope.go:117] "RemoveContainer" containerID="607816533ca5535179033ea14ae82c8f1c3039cada24e488c97062628661396f"
	Oct 27 19:41:33 embed-certs-919237 kubelet[720]: I1027 19:41:33.679317     720 scope.go:117] "RemoveContainer" containerID="f70805b0b88103b08166e7fb24c18ab35ac0ae9d3e987fd54ce24c8fe1b50a8f"
	Oct 27 19:41:33 embed-certs-919237 kubelet[720]: E1027 19:41:33.679533     720 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qb5z6_kubernetes-dashboard(d40c29c2-2116-4b6c-bb4b-3fceda111717)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qb5z6" podUID="d40c29c2-2116-4b6c-bb4b-3fceda111717"
	Oct 27 19:41:41 embed-certs-919237 kubelet[720]: I1027 19:41:41.705504     720 scope.go:117] "RemoveContainer" containerID="ae6c32d15d0a354896e509d903d2913f4e4cb318fee7570b0a381a4da1276a5b"
	Oct 27 19:41:42 embed-certs-919237 kubelet[720]: I1027 19:41:42.315220     720 scope.go:117] "RemoveContainer" containerID="f70805b0b88103b08166e7fb24c18ab35ac0ae9d3e987fd54ce24c8fe1b50a8f"
	Oct 27 19:41:42 embed-certs-919237 kubelet[720]: E1027 19:41:42.315441     720 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qb5z6_kubernetes-dashboard(d40c29c2-2116-4b6c-bb4b-3fceda111717)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qb5z6" podUID="d40c29c2-2116-4b6c-bb4b-3fceda111717"
	Oct 27 19:41:56 embed-certs-919237 kubelet[720]: I1027 19:41:56.571757     720 scope.go:117] "RemoveContainer" containerID="f70805b0b88103b08166e7fb24c18ab35ac0ae9d3e987fd54ce24c8fe1b50a8f"
	Oct 27 19:41:56 embed-certs-919237 kubelet[720]: I1027 19:41:56.749969     720 scope.go:117] "RemoveContainer" containerID="f70805b0b88103b08166e7fb24c18ab35ac0ae9d3e987fd54ce24c8fe1b50a8f"
	Oct 27 19:41:56 embed-certs-919237 kubelet[720]: I1027 19:41:56.750258     720 scope.go:117] "RemoveContainer" containerID="2796a5fed0754fd4b112fae38588dfe25b86705e56508393208766dc3b088d33"
	Oct 27 19:41:56 embed-certs-919237 kubelet[720]: E1027 19:41:56.750495     720 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qb5z6_kubernetes-dashboard(d40c29c2-2116-4b6c-bb4b-3fceda111717)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qb5z6" podUID="d40c29c2-2116-4b6c-bb4b-3fceda111717"
	Oct 27 19:41:58 embed-certs-919237 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 27 19:41:58 embed-certs-919237 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 27 19:41:58 embed-certs-919237 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 27 19:41:58 embed-certs-919237 systemd[1]: kubelet.service: Consumed 1.834s CPU time.
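	
	Note: the kubelet entries show dashboard-metrics-scraper-6ffb444bf9-qb5z6 in CrashLoopBackOff, with the restart back-off doubling 10s -> 20s -> 40s before kubelet is stopped for the pause step. A sketch of the usual follow-up while the cluster is still reachable (pod and context names taken from this log):
	
	    kubectl --context embed-certs-919237 -n kubernetes-dashboard logs dashboard-metrics-scraper-6ffb444bf9-qb5z6 --previous
	    kubectl --context embed-certs-919237 -n kubernetes-dashboard describe pod dashboard-metrics-scraper-6ffb444bf9-qb5z6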
	
	
	==> kubernetes-dashboard [121601c64b1f8275f26411958ad9a6732beea758cb85fefc8db2ea3c291abd87] <==
	2025/10/27 19:41:21 Using namespace: kubernetes-dashboard
	2025/10/27 19:41:21 Using in-cluster config to connect to apiserver
	2025/10/27 19:41:21 Using secret token for csrf signing
	2025/10/27 19:41:21 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/27 19:41:21 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/27 19:41:21 Successful initial request to the apiserver, version: v1.34.1
	2025/10/27 19:41:21 Generating JWE encryption key
	2025/10/27 19:41:21 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/27 19:41:21 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/27 19:41:21 Initializing JWE encryption key from synchronized object
	2025/10/27 19:41:21 Creating in-cluster Sidecar client
	2025/10/27 19:41:21 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/27 19:41:21 Serving insecurely on HTTP port: 9090
	2025/10/27 19:41:51 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/27 19:41:21 Starting overwatch
	
	
	==> storage-provisioner [039af7dcecc8a433ded3d11e5ded2256d549ee2d08a3ebb68b26fce310e7bc20] <==
	I1027 19:41:41.788478       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1027 19:41:41.803885       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1027 19:41:41.803939       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1027 19:41:41.806865       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:41:45.263531       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:41:49.528191       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:41:53.126241       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:41:56.179485       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:41:59.201974       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:41:59.210894       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1027 19:41:59.211044       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1027 19:41:59.211156       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ea57f8f9-31a7-4033-9918-213289abc41f", APIVersion:"v1", ResourceVersion:"630", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-919237_524297ae-b48b-4840-a52f-029d1cfb1769 became leader
	I1027 19:41:59.211253       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-919237_524297ae-b48b-4840-a52f-029d1cfb1769!
	W1027 19:41:59.215584       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:41:59.221190       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1027 19:41:59.311597       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-919237_524297ae-b48b-4840-a52f-029d1cfb1769!
	W1027 19:42:01.224608       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:42:01.232173       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [ae6c32d15d0a354896e509d903d2913f4e4cb318fee7570b0a381a4da1276a5b] <==
	I1027 19:41:10.926489       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1027 19:41:40.932573       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
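	
	Note: this earlier storage-provisioner instance died after its API probe to the in-cluster service VIP (10.96.0.1:443) timed out 30s after startup, consistent with the pod network still converging right after the node restart; the replacement instance above acquired the lease and ran normally. Connectivity to the VIP can be probed from inside the cluster with a sketch like the following (the curlimages/curl image is an assumption, not part of this run):
	
	    kubectl --context embed-certs-919237 run api-probe --rm -i --restart=Never --image=curlimages/curl -- curl -k -m 5 https://10.96.0.1/version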
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-919237 -n embed-certs-919237
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-919237 -n embed-certs-919237: exit status 2 (389.330947ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-919237 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-919237
helpers_test.go:243: (dbg) docker inspect embed-certs-919237:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "37808aa2dc4c4127748e535c42c1ec4333eeed40f14d98040de3f085b9d38b11",
	        "Created": "2025-10-27T19:39:55.06890143Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 595076,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-27T19:41:01.094759341Z",
	            "FinishedAt": "2025-10-27T19:40:59.997815947Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/37808aa2dc4c4127748e535c42c1ec4333eeed40f14d98040de3f085b9d38b11/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/37808aa2dc4c4127748e535c42c1ec4333eeed40f14d98040de3f085b9d38b11/hostname",
	        "HostsPath": "/var/lib/docker/containers/37808aa2dc4c4127748e535c42c1ec4333eeed40f14d98040de3f085b9d38b11/hosts",
	        "LogPath": "/var/lib/docker/containers/37808aa2dc4c4127748e535c42c1ec4333eeed40f14d98040de3f085b9d38b11/37808aa2dc4c4127748e535c42c1ec4333eeed40f14d98040de3f085b9d38b11-json.log",
	        "Name": "/embed-certs-919237",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "embed-certs-919237:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-919237",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "37808aa2dc4c4127748e535c42c1ec4333eeed40f14d98040de3f085b9d38b11",
	                "LowerDir": "/var/lib/docker/overlay2/1a197dc40b03763e74d9e2a466d399c472fd8d02996bb7655be8275cee948408-init/diff:/var/lib/docker/overlay2/71b61ec94610a35f2d924dec358052d4c154c36b3fe219802f60246ca2dc7f45/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1a197dc40b03763e74d9e2a466d399c472fd8d02996bb7655be8275cee948408/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1a197dc40b03763e74d9e2a466d399c472fd8d02996bb7655be8275cee948408/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1a197dc40b03763e74d9e2a466d399c472fd8d02996bb7655be8275cee948408/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-919237",
	                "Source": "/var/lib/docker/volumes/embed-certs-919237/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-919237",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-919237",
	                "name.minikube.sigs.k8s.io": "embed-certs-919237",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "25e7f0ae99fb61ccb55e65b521f4a1429e4fc658c4e3437bc5de7a9bbaa40a2a",
	            "SandboxKey": "/var/run/docker/netns/25e7f0ae99fb",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33445"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33446"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33449"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33447"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33448"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-919237": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "d6:83:26:8b:b3:ca",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "999393307eef706ac69479cce1c654e615bbf1533042b5bf717c2605b3087cda",
	                    "EndpointID": "b08e9f9071cbcc8b4abf81b36718fc0b0c73b18c70ca41a4a70b65f312907880",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-919237",
	                        "37808aa2dc4c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
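Note: the inspect dump confirms the container is running with the apiserver's 8443/tcp published on 127.0.0.1:33448. Single fields can be pulled without the full JSON via docker's Go-template format flag, e.g. (a sketch; container name taken from this report):

    docker inspect -f '{{ (index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort }}' embed-certs-919237
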
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-919237 -n embed-certs-919237
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-919237 -n embed-certs-919237: exit status 2 (343.554069ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-919237 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-919237 logs -n 25: (1.224791606s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p functional-051715 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio                                                                │ functional-051715            │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │                     │
	│ start   │ -p functional-051715 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                          │ functional-051715            │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │                     │
	│ addons  │ functional-051715 addons list                                                                                                                                            │ functional-051715            │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │ 27 Oct 25 19:04 UTC │
	│ addons  │ functional-051715 addons list -o json                                                                                                                                    │ functional-051715            │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │ 27 Oct 25 19:04 UTC │
	│ image   │ functional-051715 image load --daemon kicbase/echo-server:functional-051715 --alsologtostderr                                                                            │ functional-051715            │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │ 27 Oct 25 19:04 UTC │
	│ image   │ functional-051715 image ls                                                                                                                                               │ functional-051715            │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │ 27 Oct 25 19:04 UTC │
	│ image   │ functional-051715 image load --daemon kicbase/echo-server:functional-051715 --alsologtostderr                                                                            │ functional-051715            │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │ 27 Oct 25 19:04 UTC │
	│ image   │ functional-051715 image ls                                                                                                                                               │ functional-051715            │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │ 27 Oct 25 19:04 UTC │
	│ image   │ functional-051715 image load --daemon kicbase/echo-server:functional-051715 --alsologtostderr                                                                            │ functional-051715            │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │ 27 Oct 25 19:04 UTC │
	│ image   │ functional-051715 image ls                                                                                                                                               │ functional-051715            │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │ 27 Oct 25 19:04 UTC │
	│ image   │ functional-051715 image save kicbase/echo-server:functional-051715 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr          │ functional-051715            │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │ 27 Oct 25 19:04 UTC │
	│ image   │ functional-051715 image rm kicbase/echo-server:functional-051715 --alsologtostderr                                                                                       │ functional-051715            │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │ 27 Oct 25 19:04 UTC │
	│ addons  │ enable dashboard -p embed-certs-919237 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                            │ embed-certs-919237           │ jenkins │ v1.37.0 │ 27 Oct 25 19:41 UTC │ 27 Oct 25 19:41 UTC │
	│ start   │ -p embed-certs-919237 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                   │ embed-certs-919237           │ jenkins │ v1.37.0 │ 27 Oct 25 19:41 UTC │ 27 Oct 25 19:41 UTC │
	│ image   │ old-k8s-version-468959 image list --format=json                                                                                                                          │ old-k8s-version-468959       │ jenkins │ v1.37.0 │ 27 Oct 25 19:41 UTC │ 27 Oct 25 19:41 UTC │
	│ pause   │ -p old-k8s-version-468959 --alsologtostderr -v=1                                                                                                                         │ old-k8s-version-468959       │ jenkins │ v1.37.0 │ 27 Oct 25 19:41 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-095885 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                  │ no-preload-095885            │ jenkins │ v1.37.0 │ 27 Oct 25 19:41 UTC │                     │
	│ stop    │ -p no-preload-095885 --alsologtostderr -v=3                                                                                                                              │ no-preload-095885            │ jenkins │ v1.37.0 │ 27 Oct 25 19:41 UTC │ 27 Oct 25 19:41 UTC │
	│ delete  │ -p old-k8s-version-468959                                                                                                                                                │ old-k8s-version-468959       │ jenkins │ v1.37.0 │ 27 Oct 25 19:41 UTC │ 27 Oct 25 19:41 UTC │
	│ delete  │ -p old-k8s-version-468959                                                                                                                                                │ old-k8s-version-468959       │ jenkins │ v1.37.0 │ 27 Oct 25 19:41 UTC │ 27 Oct 25 19:41 UTC │
	│ start   │ -p default-k8s-diff-port-813397 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ default-k8s-diff-port-813397 │ jenkins │ v1.37.0 │ 27 Oct 25 19:41 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-095885 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                             │ no-preload-095885            │ jenkins │ v1.37.0 │ 27 Oct 25 19:41 UTC │ 27 Oct 25 19:41 UTC │
	│ start   │ -p no-preload-095885 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                  │ no-preload-095885            │ jenkins │ v1.37.0 │ 27 Oct 25 19:41 UTC │                     │
	│ image   │ embed-certs-919237 image list --format=json                                                                                                                              │ embed-certs-919237           │ jenkins │ v1.37.0 │ 27 Oct 25 19:41 UTC │ 27 Oct 25 19:41 UTC │
	│ pause   │ -p embed-certs-919237 --alsologtostderr -v=1                                                                                                                             │ embed-certs-919237           │ jenkins │ v1.37.0 │ 27 Oct 25 19:41 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 19:41:33
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 19:41:33.514682  604470 out.go:360] Setting OutFile to fd 1 ...
	I1027 19:41:33.515411  604470 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:41:33.515424  604470 out.go:374] Setting ErrFile to fd 2...
	I1027 19:41:33.515429  604470 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:41:33.515802  604470 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-352833/.minikube/bin
	I1027 19:41:33.516655  604470 out.go:368] Setting JSON to false
	I1027 19:41:33.518426  604470 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":8643,"bootTime":1761585451,"procs":466,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1027 19:41:33.518533  604470 start.go:141] virtualization: kvm guest
	I1027 19:41:33.521798  604470 out.go:179] * [no-preload-095885] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1027 19:41:33.523807  604470 notify.go:220] Checking for updates...
	I1027 19:41:33.523873  604470 out.go:179]   - MINIKUBE_LOCATION=21801
	I1027 19:41:33.525256  604470 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 19:41:33.527429  604470 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21801-352833/kubeconfig
	I1027 19:41:33.529037  604470 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21801-352833/.minikube
	I1027 19:41:33.530518  604470 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1027 19:41:33.531892  604470 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 19:41:33.533881  604470 config.go:182] Loaded profile config "no-preload-095885": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:41:33.534704  604470 driver.go:421] Setting default libvirt URI to qemu:///system
	I1027 19:41:33.565326  604470 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1027 19:41:33.565443  604470 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 19:41:33.642975  604470 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-27 19:41:33.629380203 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 19:41:33.643123  604470 docker.go:318] overlay module found
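	The `docker system info --format "{{json .}}"` probe above is how minikube snapshots the host's Docker state (server version, cgroup driver, CPU and memory) before validating the driver. A minimal sketch of the same probe in Go, decoding only a few of the fields visible in the dump; the field names are taken from the JSON output above:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// dockerInfo holds just the fields we care about here; the
	// `docker system info --format "{{json .}}"` output has many more.
	type dockerInfo struct {
		ServerVersion string
		CgroupDriver  string
		NCPU          int
		MemTotal      int64
	}

	func main() {
		out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
		if err != nil {
			panic(err)
		}
		var info dockerInfo
		if err := json.Unmarshal(out, &info); err != nil {
			panic(err)
		}
		fmt.Printf("docker %s, cgroup driver %s, %d CPUs, %d bytes RAM\n",
			info.ServerVersion, info.CgroupDriver, info.NCPU, info.MemTotal)
	}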
	I1027 19:41:33.645093  604470 out.go:179] * Using the docker driver based on existing profile
	I1027 19:41:33.646962  604470 start.go:305] selected driver: docker
	I1027 19:41:33.646981  604470 start.go:925] validating driver "docker" against &{Name:no-preload-095885 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-095885 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 19:41:33.647102  604470 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 19:41:33.647893  604470 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 19:41:33.721579  604470 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-27 19:41:33.709869722 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 19:41:33.721933  604470 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 19:41:33.721961  604470 cni.go:84] Creating CNI manager for ""
	I1027 19:41:33.722022  604470 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 19:41:33.722069  604470 start.go:349] cluster config:
	{Name:no-preload-095885 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-095885 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
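	The struct dumped above is the profile's cluster config, which minikube persists as JSON under .minikube/profiles/<name>/config.json (the save is logged a few lines below). A hypothetical sketch of that round trip, using only a handful of the fields visible in the dump; the real config type has far more fields:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// Field names below are copied from the dump above; everything else
	// about minikube's real config types is omitted here.
	type KubernetesConfig struct {
		KubernetesVersion string
		ClusterName       string
		ContainerRuntime  string
		ServiceCIDR       string
	}

	type ClusterConfig struct {
		Name             string
		Driver           string
		Memory           int // MB
		CPUs             int
		KubernetesConfig KubernetesConfig
	}

	func main() {
		cfg := ClusterConfig{
			Name:   "no-preload-095885",
			Driver: "docker",
			Memory: 3072,
			CPUs:   2,
			KubernetesConfig: KubernetesConfig{
				KubernetesVersion: "v1.34.1",
				ClusterName:       "no-preload-095885",
				ContainerRuntime:  "crio",
				ServiceCIDR:       "10.96.0.0/12",
			},
		}
		out, _ := json.MarshalIndent(cfg, "", "  ")
		fmt.Println(string(out)) // roughly what lands in config.json
	}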
	I1027 19:41:33.726488  604470 out.go:179] * Starting "no-preload-095885" primary control-plane node in "no-preload-095885" cluster
	I1027 19:41:33.728321  604470 cache.go:123] Beginning downloading kic base image for docker with crio
	I1027 19:41:33.729739  604470 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1027 19:41:33.731046  604470 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 19:41:33.731164  604470 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1027 19:41:33.731217  604470 profile.go:143] Saving config to /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/no-preload-095885/config.json ...
	I1027 19:41:33.731442  604470 cache.go:107] acquiring lock: {Name:mk6cfd97bf118a5d00dc3712cc15a56368d5b133 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 19:41:33.731465  604470 cache.go:107] acquiring lock: {Name:mk849f9e68d9ca24fd7e38d749b2eace2906ff3c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 19:41:33.731506  604470 cache.go:107] acquiring lock: {Name:mk5369f4c071c5263ddc432fb15330ba0423cdfe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 19:41:33.731514  604470 cache.go:107] acquiring lock: {Name:mk55852f2c481df2db7f9a6da7c274b8e85d7edb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 19:41:33.731573  604470 cache.go:115] /home/jenkins/minikube-integration/21801-352833/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1027 19:41:33.731557  604470 cache.go:107] acquiring lock: {Name:mk5cfaf9a7e19dd9a7184f304b6ee85a4979e6eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 19:41:33.731591  604470 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21801-352833/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1" took 145.415µs
	I1027 19:41:33.731600  604470 cache.go:115] /home/jenkins/minikube-integration/21801-352833/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1027 19:41:33.731613  604470 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21801-352833/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1027 19:41:33.731613  604470 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21801-352833/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1" took 115.184µs
	I1027 19:41:33.731579  604470 cache.go:115] /home/jenkins/minikube-integration/21801-352833/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1027 19:41:33.731628  604470 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21801-352833/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1027 19:41:33.731628  604470 cache.go:115] /home/jenkins/minikube-integration/21801-352833/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I1027 19:41:33.731594  604470 cache.go:115] /home/jenkins/minikube-integration/21801-352833/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1027 19:41:33.731442  604470 cache.go:107] acquiring lock: {Name:mk01b17b21d46030a4c787d0bd4e9fe1b72ed247 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 19:41:33.731643  604470 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21801-352833/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 89.177µs
	I1027 19:41:33.731647  604470 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21801-352833/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1" took 142.388µs
	I1027 19:41:33.731636  604470 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21801-352833/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1" took 208.835µs
	I1027 19:41:33.731650  604470 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21801-352833/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1027 19:41:33.731639  604470 cache.go:107] acquiring lock: {Name:mka4e762c0cdf96fdeade218e5825c211c417983 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 19:41:33.731669  604470 cache.go:115] /home/jenkins/minikube-integration/21801-352833/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1027 19:41:33.731656  604470 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21801-352833/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1027 19:41:33.731661  604470 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21801-352833/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1027 19:41:33.731608  604470 cache.go:107] acquiring lock: {Name:mk2ed104f61ec06a04ca37afb2389902cee0a37d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 19:41:33.731682  604470 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21801-352833/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 258.93µs
	I1027 19:41:33.731825  604470 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21801-352833/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1027 19:41:33.731690  604470 cache.go:115] /home/jenkins/minikube-integration/21801-352833/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1027 19:41:33.731840  604470 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21801-352833/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 201.476µs
	I1027 19:41:33.731842  604470 cache.go:115] /home/jenkins/minikube-integration/21801-352833/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1027 19:41:33.731849  604470 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21801-352833/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1027 19:41:33.731856  604470 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21801-352833/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 250.202µs
	I1027 19:41:33.731876  604470 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21801-352833/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1027 19:41:33.731896  604470 cache.go:87] Successfully saved all images to host disk.
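	Each cache check above follows the same pattern: acquire a named lock ({Delay:500ms Timeout:10m0s}), stat the image tarball under .minikube/cache/images, and log how long the check took (hence the microsecond durations, since every tar already exists). A hedged sketch of that check-under-lock pattern; the helper names are illustrative, not minikube's API, and the retry/timeout behavior of the real lock is omitted:

	package main

	import (
		"fmt"
		"os"
		"sync"
		"time"
	)

	var (
		mu    sync.Mutex
		locks = map[string]*sync.Mutex{}
	)

	// lockFor returns a mutex keyed by the cache path.
	func lockFor(path string) *sync.Mutex {
		mu.Lock()
		defer mu.Unlock()
		if l, ok := locks[path]; ok {
			return l
		}
		l := &sync.Mutex{}
		locks[path] = l
		return l
	}

	// cached reports whether image already has a saved tar at path,
	// logging the duration the way the cache.go lines above do.
	func cached(image, path string) bool {
		l := lockFor(path)
		l.Lock()
		defer l.Unlock()

		start := time.Now()
		_, err := os.Stat(path)
		fmt.Printf("cache image %q -> %q took %s\n", image, path, time.Since(start))
		return err == nil
	}

	func main() {
		fmt.Println(cached("registry.k8s.io/pause:3.10.1",
			"/tmp/cache/images/amd64/registry.k8s.io/pause_3.10.1"))
	}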
	I1027 19:41:33.755554  604470 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1027 19:41:33.755575  604470 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1027 19:41:33.755596  604470 cache.go:232] Successfully downloaded all kic artifacts
	I1027 19:41:33.755626  604470 start.go:360] acquireMachinesLock for no-preload-095885: {Name:mk5366014920cd048c3c430c094258bb47a34d04 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 19:41:33.755690  604470 start.go:364] duration metric: took 42.502µs to acquireMachinesLock for "no-preload-095885"
	I1027 19:41:33.755710  604470 start.go:96] Skipping create...Using existing machine configuration
	I1027 19:41:33.755718  604470 fix.go:54] fixHost starting: 
	I1027 19:41:33.755966  604470 cli_runner.go:164] Run: docker container inspect no-preload-095885 --format={{.State.Status}}
	I1027 19:41:33.777426  604470 fix.go:112] recreateIfNeeded on no-preload-095885: state=Stopped err=<nil>
	W1027 19:41:33.777478  604470 fix.go:138] unexpected machine state, will restart: <nil>
	I1027 19:41:33.194337  601731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1027 19:41:33.218799  601731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/default-k8s-diff-port-813397/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1027 19:41:33.242500  601731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/default-k8s-diff-port-813397/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1027 19:41:33.265885  601731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/default-k8s-diff-port-813397/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1027 19:41:33.290549  601731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/default-k8s-diff-port-813397/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1027 19:41:33.314587  601731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/certs/356415.pem --> /usr/share/ca-certificates/356415.pem (1338 bytes)
	I1027 19:41:33.338249  601731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/files/etc/ssl/certs/3564152.pem --> /usr/share/ca-certificates/3564152.pem (1708 bytes)
	I1027 19:41:33.361878  601731 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1027 19:41:33.385457  601731 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1027 19:41:33.402288  601731 ssh_runner.go:195] Run: openssl version
	I1027 19:41:33.409720  601731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3564152.pem && ln -fs /usr/share/ca-certificates/3564152.pem /etc/ssl/certs/3564152.pem"
	I1027 19:41:33.421080  601731 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3564152.pem
	I1027 19:41:33.426177  601731 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 27 19:02 /usr/share/ca-certificates/3564152.pem
	I1027 19:41:33.426242  601731 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3564152.pem
	I1027 19:41:33.470633  601731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3564152.pem /etc/ssl/certs/3ec20f2e.0"
	I1027 19:41:33.481461  601731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1027 19:41:33.493492  601731 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1027 19:41:33.498757  601731 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 27 18:56 /usr/share/ca-certificates/minikubeCA.pem
	I1027 19:41:33.498838  601731 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1027 19:41:33.542807  601731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1027 19:41:33.553991  601731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/356415.pem && ln -fs /usr/share/ca-certificates/356415.pem /etc/ssl/certs/356415.pem"
	I1027 19:41:33.566061  601731 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/356415.pem
	I1027 19:41:33.570984  601731 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 27 19:02 /usr/share/ca-certificates/356415.pem
	I1027 19:41:33.571064  601731 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/356415.pem
	I1027 19:41:33.629950  601731 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/356415.pem /etc/ssl/certs/51391683.0"
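	The `openssl x509 -hash -noout -in <cert>` runs above compute the subject-name hash that OpenSSL uses to look up CA certificates in /etc/ssl/certs, and each following `ln -fs` installs the matching <hash>.0 symlink (e.g. b5213941.0 for minikubeCA.pem). A simplified sketch of one install step, assuming openssl is on PATH and the process may write to /etc/ssl/certs; error handling is trimmed for brevity:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func installCA(cert string) error {
		// Subject-name hash, e.g. "b5213941" for minikubeCA.pem above.
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		os.Remove(link) // mirror `ln -fs`: replace any stale link
		return os.Symlink(cert, link)
	}

	func main() {
		if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}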
	I1027 19:41:33.642594  601731 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1027 19:41:33.647802  601731 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1027 19:41:33.647868  601731 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-813397 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-813397 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 19:41:33.647939  601731 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 19:41:33.647995  601731 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 19:41:33.695399  601731 cri.go:89] found id: ""
	I1027 19:41:33.696577  601731 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1027 19:41:33.708281  601731 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1027 19:41:33.718397  601731 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1027 19:41:33.718470  601731 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1027 19:41:33.728790  601731 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1027 19:41:33.728808  601731 kubeadm.go:157] found existing configuration files:
	
	I1027 19:41:33.728869  601731 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1027 19:41:33.738176  601731 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1027 19:41:33.738253  601731 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1027 19:41:33.747937  601731 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1027 19:41:33.758236  601731 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1027 19:41:33.758298  601731 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1027 19:41:33.767392  601731 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1027 19:41:33.777962  601731 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1027 19:41:33.778033  601731 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1027 19:41:33.788710  601731 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1027 19:41:33.799716  601731 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1027 19:41:33.799778  601731 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
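	The grep/rm pairs above implement a stale-config sweep: each /etc/kubernetes/*.conf is kept only if it already references the expected control-plane endpoint (here https://control-plane.minikube.internal:8444); otherwise it is removed so the `kubeadm init` that follows can regenerate it. A hedged sketch of the same sweep (function names are illustrative):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// sweep removes every config file that is missing or does not
	// mention the expected endpoint, mirroring the grep-then-rm above.
	func sweep(endpoint string, files []string) {
		for _, f := range files {
			data, err := os.ReadFile(f)
			if err != nil || !strings.Contains(string(data), endpoint) {
				os.Remove(f)
				fmt.Printf("removed stale %s\n", f)
			}
		}
	}

	func main() {
		sweep("https://control-plane.minikube.internal:8444", []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		})
	}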
	I1027 19:41:33.809879  601731 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1027 19:41:33.861238  601731 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1027 19:41:33.861332  601731 kubeadm.go:318] [preflight] Running pre-flight checks
	I1027 19:41:33.907926  601731 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1027 19:41:33.908017  601731 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1042-gcp
	I1027 19:41:33.908061  601731 kubeadm.go:318] OS: Linux
	I1027 19:41:33.908119  601731 kubeadm.go:318] CGROUPS_CPU: enabled
	I1027 19:41:33.908222  601731 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1027 19:41:33.908299  601731 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1027 19:41:33.908409  601731 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1027 19:41:33.908489  601731 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1027 19:41:33.908553  601731 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1027 19:41:33.908641  601731 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1027 19:41:33.908719  601731 kubeadm.go:318] CGROUPS_IO: enabled
	I1027 19:41:33.984763  601731 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1027 19:41:33.984961  601731 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1027 19:41:33.985176  601731 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1027 19:41:33.993580  601731 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1027 19:41:31.563826  594803 pod_ready.go:104] pod "coredns-66bc5c9577-9b9tz" is not "Ready", error: <nil>
	W1027 19:41:33.564360  594803 pod_ready.go:104] pod "coredns-66bc5c9577-9b9tz" is not "Ready", error: <nil>
	I1027 19:41:33.140207  565798 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1027 19:41:33.140728  565798 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
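	The healthz probe above is a plain HTTPS GET against the apiserver; a transport-level failure such as `connection refused` is reported as "stopped", which is what triggers the log-gathering pass that follows. A minimal sketch of such a probe; certificate verification is skipped here because the apiserver cert chains to minikubeCA, which the probing host does not trust by default:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func healthz(url string) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get(url)
		if err != nil {
			// e.g. "dial tcp ...: connect: connection refused"
			return fmt.Errorf("stopped: %w", err)
		}
		defer resp.Body.Close()
		return nil
	}

	func main() {
		fmt.Println(healthz("https://192.168.103.2:8443/healthz"))
	}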
	I1027 19:41:33.140789  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 19:41:33.140851  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 19:41:33.175830  565798 cri.go:89] found id: "f34fc0e2944cfceb17dc29d80d9459a84377ffb9e427b9efe6e57ccc03a358c8"
	I1027 19:41:33.175856  565798 cri.go:89] found id: ""
	I1027 19:41:33.175867  565798 logs.go:282] 1 containers: [f34fc0e2944cfceb17dc29d80d9459a84377ffb9e427b9efe6e57ccc03a358c8]
	I1027 19:41:33.175931  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:41:33.180762  565798 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 19:41:33.180837  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 19:41:33.221850  565798 cri.go:89] found id: ""
	I1027 19:41:33.221877  565798 logs.go:282] 0 containers: []
	W1027 19:41:33.221885  565798 logs.go:284] No container was found matching "etcd"
	I1027 19:41:33.221891  565798 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 19:41:33.221938  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 19:41:33.258954  565798 cri.go:89] found id: ""
	I1027 19:41:33.258985  565798 logs.go:282] 0 containers: []
	W1027 19:41:33.258997  565798 logs.go:284] No container was found matching "coredns"
	I1027 19:41:33.259005  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 19:41:33.259063  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 19:41:33.291276  565798 cri.go:89] found id: "15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8"
	I1027 19:41:33.291297  565798 cri.go:89] found id: ""
	I1027 19:41:33.291307  565798 logs.go:282] 1 containers: [15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8]
	I1027 19:41:33.291378  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:41:33.295942  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 19:41:33.296011  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 19:41:33.327200  565798 cri.go:89] found id: ""
	I1027 19:41:33.327230  565798 logs.go:282] 0 containers: []
	W1027 19:41:33.327241  565798 logs.go:284] No container was found matching "kube-proxy"
	I1027 19:41:33.327250  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 19:41:33.327332  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 19:41:33.360699  565798 cri.go:89] found id: "38a297db55d8c45b09d97e05e26cfe590841ea63fa7ee874cf818f8ae5fcff77"
	I1027 19:41:33.360724  565798 cri.go:89] found id: ""
	I1027 19:41:33.360735  565798 logs.go:282] 1 containers: [38a297db55d8c45b09d97e05e26cfe590841ea63fa7ee874cf818f8ae5fcff77]
	I1027 19:41:33.360801  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:41:33.366056  565798 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 19:41:33.366187  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 19:41:33.397708  565798 cri.go:89] found id: ""
	I1027 19:41:33.397739  565798 logs.go:282] 0 containers: []
	W1027 19:41:33.397758  565798 logs.go:284] No container was found matching "kindnet"
	I1027 19:41:33.397767  565798 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 19:41:33.397834  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 19:41:33.431241  565798 cri.go:89] found id: ""
	I1027 19:41:33.431280  565798 logs.go:282] 0 containers: []
	W1027 19:41:33.431291  565798 logs.go:284] No container was found matching "storage-provisioner"
	I1027 19:41:33.431305  565798 logs.go:123] Gathering logs for dmesg ...
	I1027 19:41:33.431324  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 19:41:33.457468  565798 logs.go:123] Gathering logs for describe nodes ...
	I1027 19:41:33.457511  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 19:41:33.527661  565798 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 19:41:33.527677  565798 logs.go:123] Gathering logs for kube-apiserver [f34fc0e2944cfceb17dc29d80d9459a84377ffb9e427b9efe6e57ccc03a358c8] ...
	I1027 19:41:33.527691  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f34fc0e2944cfceb17dc29d80d9459a84377ffb9e427b9efe6e57ccc03a358c8"
	I1027 19:41:33.571916  565798 logs.go:123] Gathering logs for kube-scheduler [15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8] ...
	I1027 19:41:33.572034  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8"
	I1027 19:41:33.653029  565798 logs.go:123] Gathering logs for kube-controller-manager [38a297db55d8c45b09d97e05e26cfe590841ea63fa7ee874cf818f8ae5fcff77] ...
	I1027 19:41:33.653063  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 38a297db55d8c45b09d97e05e26cfe590841ea63fa7ee874cf818f8ae5fcff77"
	I1027 19:41:33.694773  565798 logs.go:123] Gathering logs for CRI-O ...
	I1027 19:41:33.694814  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 19:41:33.753557  565798 logs.go:123] Gathering logs for container status ...
	I1027 19:41:33.753603  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 19:41:33.792559  565798 logs.go:123] Gathering logs for kubelet ...
	I1027 19:41:33.792601  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
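	The "container status" gather a few lines above uses a fallback chain, sudo `which crictl || echo crictl` ps -a || sudo docker ps -a: it tries crictl first and only falls back to docker if that fails. Roughly the same logic in Go:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// containerStatus lists containers via crictl, falling back to
	// docker when crictl is missing or errors out.
	func containerStatus() (string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
		if err == nil {
			return string(out), nil
		}
		out, err = exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
		return string(out), err
	}

	func main() {
		out, err := containerStatus()
		fmt.Println(out, err)
	}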
	I1027 19:41:36.406613  565798 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1027 19:41:36.407062  565798 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1027 19:41:36.407124  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 19:41:36.407210  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 19:41:36.437491  565798 cri.go:89] found id: "f34fc0e2944cfceb17dc29d80d9459a84377ffb9e427b9efe6e57ccc03a358c8"
	I1027 19:41:36.437514  565798 cri.go:89] found id: ""
	I1027 19:41:36.437525  565798 logs.go:282] 1 containers: [f34fc0e2944cfceb17dc29d80d9459a84377ffb9e427b9efe6e57ccc03a358c8]
	I1027 19:41:36.437589  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:41:36.442000  565798 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 19:41:36.442074  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 19:41:36.469993  565798 cri.go:89] found id: ""
	I1027 19:41:36.470025  565798 logs.go:282] 0 containers: []
	W1027 19:41:36.470034  565798 logs.go:284] No container was found matching "etcd"
	I1027 19:41:36.470043  565798 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 19:41:36.470125  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 19:41:36.498575  565798 cri.go:89] found id: ""
	I1027 19:41:36.498617  565798 logs.go:282] 0 containers: []
	W1027 19:41:36.498629  565798 logs.go:284] No container was found matching "coredns"
	I1027 19:41:36.498638  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 19:41:36.498692  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 19:41:36.528423  565798 cri.go:89] found id: "15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8"
	I1027 19:41:36.528443  565798 cri.go:89] found id: ""
	I1027 19:41:36.528452  565798 logs.go:282] 1 containers: [15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8]
	I1027 19:41:36.528501  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:41:36.532552  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 19:41:36.532614  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 19:41:33.996937  601731 out.go:252]   - Generating certificates and keys ...
	I1027 19:41:33.997063  601731 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1027 19:41:33.997199  601731 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1027 19:41:34.054826  601731 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1027 19:41:34.221369  601731 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1027 19:41:34.781385  601731 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1027 19:41:35.318555  601731 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1027 19:41:35.767616  601731 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1027 19:41:35.767790  601731 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-813397 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1027 19:41:36.405347  601731 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1027 19:41:36.405616  601731 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-813397 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1027 19:41:36.791820  601731 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1027 19:41:37.058751  601731 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1027 19:41:37.258786  601731 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1027 19:41:37.258878  601731 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1027 19:41:37.352340  601731 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1027 19:41:37.607719  601731 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1027 19:41:37.743836  601731 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1027 19:41:38.112562  601731 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1027 19:41:38.293385  601731 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1027 19:41:38.294242  601731 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1027 19:41:38.298814  601731 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1027 19:41:33.779666  604470 out.go:252] * Restarting existing docker container for "no-preload-095885" ...
	I1027 19:41:33.779763  604470 cli_runner.go:164] Run: docker start no-preload-095885
	I1027 19:41:34.076071  604470 cli_runner.go:164] Run: docker container inspect no-preload-095885 --format={{.State.Status}}
	I1027 19:41:34.094890  604470 kic.go:430] container "no-preload-095885" state is running.
	I1027 19:41:34.095320  604470 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-095885
	I1027 19:41:34.114967  604470 profile.go:143] Saving config to /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/no-preload-095885/config.json ...
	I1027 19:41:34.115303  604470 machine.go:93] provisionDockerMachine start ...
	I1027 19:41:34.115382  604470 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-095885
	I1027 19:41:34.136967  604470 main.go:141] libmachine: Using SSH client type: native
	I1027 19:41:34.137304  604470 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33455 <nil> <nil>}
	I1027 19:41:34.137322  604470 main.go:141] libmachine: About to run SSH command:
	hostname
	I1027 19:41:34.137898  604470 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48310->127.0.0.1:33455: read: connection reset by peer
	I1027 19:41:37.281569  604470 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-095885
	
	I1027 19:41:37.281596  604470 ubuntu.go:182] provisioning hostname "no-preload-095885"
	I1027 19:41:37.281656  604470 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-095885
	I1027 19:41:37.301392  604470 main.go:141] libmachine: Using SSH client type: native
	I1027 19:41:37.301645  604470 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33455 <nil> <nil>}
	I1027 19:41:37.301664  604470 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-095885 && echo "no-preload-095885" | sudo tee /etc/hostname
	I1027 19:41:37.455099  604470 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-095885
	
	I1027 19:41:37.455202  604470 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-095885
	I1027 19:41:37.475318  604470 main.go:141] libmachine: Using SSH client type: native
	I1027 19:41:37.475622  604470 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33455 <nil> <nil>}
	I1027 19:41:37.475644  604470 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-095885' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-095885/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-095885' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1027 19:41:37.621398  604470 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1027 19:41:37.621435  604470 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21801-352833/.minikube CaCertPath:/home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21801-352833/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21801-352833/.minikube}
	I1027 19:41:37.621491  604470 ubuntu.go:190] setting up certificates
	I1027 19:41:37.621510  604470 provision.go:84] configureAuth start
	I1027 19:41:37.621595  604470 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-095885
	I1027 19:41:37.641096  604470 provision.go:143] copyHostCerts
	I1027 19:41:37.641197  604470 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-352833/.minikube/ca.pem, removing ...
	I1027 19:41:37.641215  604470 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-352833/.minikube/ca.pem
	I1027 19:41:37.641290  604470 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21801-352833/.minikube/ca.pem (1078 bytes)
	I1027 19:41:37.641404  604470 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-352833/.minikube/cert.pem, removing ...
	I1027 19:41:37.641413  604470 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-352833/.minikube/cert.pem
	I1027 19:41:37.641451  604470 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21801-352833/.minikube/cert.pem (1123 bytes)
	I1027 19:41:37.641526  604470 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-352833/.minikube/key.pem, removing ...
	I1027 19:41:37.641534  604470 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-352833/.minikube/key.pem
	I1027 19:41:37.641561  604470 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21801-352833/.minikube/key.pem (1679 bytes)
	I1027 19:41:37.641631  604470 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21801-352833/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca-key.pem org=jenkins.no-preload-095885 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-095885]
	I1027 19:41:37.972712  604470 provision.go:177] copyRemoteCerts
	I1027 19:41:37.972793  604470 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1027 19:41:37.972845  604470 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-095885
	I1027 19:41:37.992046  604470 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33455 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/no-preload-095885/id_rsa Username:docker}
	I1027 19:41:38.095494  604470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1027 19:41:38.115591  604470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1027 19:41:38.137819  604470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1027 19:41:38.158115  604470 provision.go:87] duration metric: took 536.582587ms to configureAuth
	I1027 19:41:38.158163  604470 ubuntu.go:206] setting minikube options for container-runtime
	I1027 19:41:38.158375  604470 config.go:182] Loaded profile config "no-preload-095885": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:41:38.158491  604470 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-095885
	I1027 19:41:38.179245  604470 main.go:141] libmachine: Using SSH client type: native
	I1027 19:41:38.179483  604470 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33455 <nil> <nil>}
	I1027 19:41:38.179503  604470 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1027 19:41:38.522710  604470 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1027 19:41:38.522739  604470 machine.go:96] duration metric: took 4.407414728s to provisionDockerMachine
	I1027 19:41:38.522754  604470 start.go:293] postStartSetup for "no-preload-095885" (driver="docker")
	I1027 19:41:38.522769  604470 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1027 19:41:38.522844  604470 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1027 19:41:38.522904  604470 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-095885
	I1027 19:41:38.545315  604470 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33455 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/no-preload-095885/id_rsa Username:docker}
	I1027 19:41:38.649488  604470 ssh_runner.go:195] Run: cat /etc/os-release
	I1027 19:41:38.653619  604470 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1027 19:41:38.653659  604470 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1027 19:41:38.653672  604470 filesync.go:126] Scanning /home/jenkins/minikube-integration/21801-352833/.minikube/addons for local assets ...
	I1027 19:41:38.653730  604470 filesync.go:126] Scanning /home/jenkins/minikube-integration/21801-352833/.minikube/files for local assets ...
	I1027 19:41:38.653828  604470 filesync.go:149] local asset: /home/jenkins/minikube-integration/21801-352833/.minikube/files/etc/ssl/certs/3564152.pem -> 3564152.pem in /etc/ssl/certs
	I1027 19:41:38.653958  604470 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1027 19:41:38.662910  604470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/files/etc/ssl/certs/3564152.pem --> /etc/ssl/certs/3564152.pem (1708 bytes)
	I1027 19:41:38.683366  604470 start.go:296] duration metric: took 160.591003ms for postStartSetup
	I1027 19:41:38.683460  604470 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 19:41:38.683508  604470 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-095885
	I1027 19:41:38.702733  604470 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33455 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/no-preload-095885/id_rsa Username:docker}
	I1027 19:41:38.804002  604470 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1027 19:41:38.809096  604470 fix.go:56] duration metric: took 5.05336892s for fixHost
	I1027 19:41:38.809130  604470 start.go:83] releasing machines lock for "no-preload-095885", held for 5.053425647s
	I1027 19:41:38.809225  604470 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-095885
	I1027 19:41:38.827272  604470 ssh_runner.go:195] Run: cat /version.json
	I1027 19:41:38.827356  604470 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1027 19:41:38.827387  604470 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-095885
	I1027 19:41:38.827418  604470 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-095885
	I1027 19:41:38.847513  604470 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33455 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/no-preload-095885/id_rsa Username:docker}
	I1027 19:41:38.847921  604470 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33455 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/no-preload-095885/id_rsa Username:docker}
	I1027 19:41:39.000830  604470 ssh_runner.go:195] Run: systemctl --version
	I1027 19:41:39.008003  604470 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1027 19:41:39.044407  604470 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1027 19:41:39.049507  604470 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1027 19:41:39.049581  604470 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1027 19:41:39.058452  604470 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1027 19:41:39.058481  604470 start.go:495] detecting cgroup driver to use...
	I1027 19:41:39.058522  604470 detect.go:190] detected "systemd" cgroup driver on host os
	I1027 19:41:39.058578  604470 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1027 19:41:39.075128  604470 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1027 19:41:39.089607  604470 docker.go:218] disabling cri-docker service (if available) ...
	I1027 19:41:39.089705  604470 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1027 19:41:39.106103  604470 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1027 19:41:39.120124  604470 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1027 19:41:39.207086  604470 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1027 19:41:39.309063  604470 docker.go:234] disabling docker service ...
	I1027 19:41:39.309129  604470 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1027 19:41:39.330558  604470 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1027 19:41:39.352231  604470 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1027 19:41:39.447280  604470 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1027 19:41:39.539870  604470 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1027 19:41:39.554998  604470 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1027 19:41:39.574582  604470 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1027 19:41:39.574652  604470 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:41:39.586162  604470 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1027 19:41:39.586238  604470 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:41:39.596423  604470 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:41:39.606735  604470 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:41:39.617112  604470 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1027 19:41:39.627091  604470 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:41:39.637722  604470 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:41:39.647475  604470 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
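Together, these sed edits rewrite the CRI-O drop-in to pin the pause image, run conmon and containers under the systemd cgroup manager, and let unprivileged pod processes bind low ports. Assuming a stock drop-in, the affected keys of /etc/crio/crio.conf.d/02-crio.conf would read roughly as below (the TOML section headers are the usual CRI-O layout, not shown in the log):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]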
	I1027 19:41:39.657461  604470 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1027 19:41:39.665620  604470 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
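Pod and service traffic has to be routed between interfaces, so after probing the bridge-nf-call-iptables sysctl minikube enables IPv4 forwarding by writing the procfs knob directly. The same write in Go is a one-liner (requires root, like the echo above):

	package main

	import "os"

	func main() {
		// Equivalent to: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
		if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644); err != nil {
			panic(err)
		}
	}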
	I1027 19:41:39.673923  604470 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 19:41:39.785097  604470 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1027 19:41:39.913123  604470 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1027 19:41:39.913197  604470 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1027 19:41:39.918027  604470 start.go:563] Will wait 60s for crictl version
	I1027 19:41:39.918097  604470 ssh_runner.go:195] Run: which crictl
	I1027 19:41:39.922727  604470 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1027 19:41:39.953577  604470 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1027 19:41:39.953737  604470 ssh_runner.go:195] Run: crio --version
	I1027 19:41:39.995993  604470 ssh_runner.go:195] Run: crio --version
	I1027 19:41:40.036496  604470 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1027 19:41:36.063623  594803 pod_ready.go:104] pod "coredns-66bc5c9577-9b9tz" is not "Ready", error: <nil>
	W1027 19:41:38.562890  594803 pod_ready.go:104] pod "coredns-66bc5c9577-9b9tz" is not "Ready", error: <nil>
	W1027 19:41:40.565556  594803 pod_ready.go:104] pod "coredns-66bc5c9577-9b9tz" is not "Ready", error: <nil>
	I1027 19:41:36.560462  565798 cri.go:89] found id: ""
	I1027 19:41:36.560492  565798 logs.go:282] 0 containers: []
	W1027 19:41:36.560504  565798 logs.go:284] No container was found matching "kube-proxy"
	I1027 19:41:36.560512  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 19:41:36.560572  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 19:41:36.590892  565798 cri.go:89] found id: "38a297db55d8c45b09d97e05e26cfe590841ea63fa7ee874cf818f8ae5fcff77"
	I1027 19:41:36.590915  565798 cri.go:89] found id: ""
	I1027 19:41:36.590925  565798 logs.go:282] 1 containers: [38a297db55d8c45b09d97e05e26cfe590841ea63fa7ee874cf818f8ae5fcff77]
	I1027 19:41:36.590990  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:41:36.595427  565798 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 19:41:36.595508  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 19:41:36.625282  565798 cri.go:89] found id: ""
	I1027 19:41:36.625317  565798 logs.go:282] 0 containers: []
	W1027 19:41:36.625329  565798 logs.go:284] No container was found matching "kindnet"
	I1027 19:41:36.625337  565798 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 19:41:36.625387  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 19:41:36.654526  565798 cri.go:89] found id: ""
	I1027 19:41:36.654551  565798 logs.go:282] 0 containers: []
	W1027 19:41:36.654559  565798 logs.go:284] No container was found matching "storage-provisioner"
	I1027 19:41:36.654570  565798 logs.go:123] Gathering logs for container status ...
	I1027 19:41:36.654585  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 19:41:36.686830  565798 logs.go:123] Gathering logs for kubelet ...
	I1027 19:41:36.686863  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 19:41:36.773949  565798 logs.go:123] Gathering logs for dmesg ...
	I1027 19:41:36.773992  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 19:41:36.795686  565798 logs.go:123] Gathering logs for describe nodes ...
	I1027 19:41:36.795715  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 19:41:36.869593  565798 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 19:41:36.869626  565798 logs.go:123] Gathering logs for kube-apiserver [f34fc0e2944cfceb17dc29d80d9459a84377ffb9e427b9efe6e57ccc03a358c8] ...
	I1027 19:41:36.869642  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f34fc0e2944cfceb17dc29d80d9459a84377ffb9e427b9efe6e57ccc03a358c8"
	I1027 19:41:36.904315  565798 logs.go:123] Gathering logs for kube-scheduler [15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8] ...
	I1027 19:41:36.904350  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8"
	I1027 19:41:36.955277  565798 logs.go:123] Gathering logs for kube-controller-manager [38a297db55d8c45b09d97e05e26cfe590841ea63fa7ee874cf818f8ae5fcff77] ...
	I1027 19:41:36.955316  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 38a297db55d8c45b09d97e05e26cfe590841ea63fa7ee874cf818f8ae5fcff77"
	I1027 19:41:36.989612  565798 logs.go:123] Gathering logs for CRI-O ...
	I1027 19:41:36.989642  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 19:41:39.538232  565798 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1027 19:41:40.038022  604470 cli_runner.go:164] Run: docker network inspect no-preload-095885 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 19:41:40.060438  604470 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1027 19:41:40.066124  604470 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
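The bash one-liner above is an idempotent hosts-file update: filter out any stale host.minikube.internal line, append the current mapping, and copy the temp file back over /etc/hosts (cp rather than mv, so the file keeps its inode, which matters when /etc/hosts is bind-mounted into a container). The same pattern runs again below for control-plane.minikube.internal. A minimal Go sketch of the idea, with the path and entry hard-coded for illustration (it writes the file back directly instead of staging a temp copy):

	package main

	import (
		"os"
		"strings"
	)

	func main() {
		const hostsPath = "/etc/hosts"
		const entry = "192.168.76.1\thost.minikube.internal"

		data, err := os.ReadFile(hostsPath)
		if err != nil {
			panic(err)
		}
		kept := []string{}
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			// Drop any previous mapping, whatever IP it pointed at.
			if strings.HasSuffix(line, "\thost.minikube.internal") {
				continue
			}
			kept = append(kept, line)
		}
		kept = append(kept, entry)
		if err := os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
			panic(err)
		}
	}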
	I1027 19:41:40.082925  604470 kubeadm.go:883] updating cluster {Name:no-preload-095885 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-095885 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1027 19:41:40.083064  604470 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 19:41:40.083105  604470 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 19:41:40.128492  604470 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 19:41:40.128525  604470 cache_images.go:85] Images are preloaded, skipping loading
	I1027 19:41:40.128535  604470 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1027 19:41:40.128679  604470 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-095885 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-095885 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
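The empty ExecStart= in the generated unit is deliberate: systemd Exec directives accumulate across a unit and its drop-ins, so a blank assignment clears whatever the base kubelet.service defined before the fully flag-qualified command line is installed. The general shape of the idiom (paths here are placeholders):

	[Service]
	ExecStart=
	ExecStart=/path/to/kubelet --flag=value ...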
	I1027 19:41:40.128786  604470 ssh_runner.go:195] Run: crio config
	I1027 19:41:40.190906  604470 cni.go:84] Creating CNI manager for ""
	I1027 19:41:40.190946  604470 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 19:41:40.190977  604470 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1027 19:41:40.191009  604470 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-095885 NodeName:no-preload-095885 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1027 19:41:40.191306  604470 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-095885"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
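The kubeadm config dumped above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration), which is scp'd to /var/tmp/minikube/kubeadm.yaml.new a few lines below. A small Go sketch that walks such a stream and prints each document's apiVersion and kind, using gopkg.in/yaml.v3 (the filename is illustrative):

	package main

	import (
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		f, err := os.Open("kubeadm.yaml")
		if err != nil {
			panic(err)
		}
		defer f.Close()

		dec := yaml.NewDecoder(f)
		for {
			var doc struct {
				APIVersion string `yaml:"apiVersion"`
				Kind       string `yaml:"kind"`
			}
			err := dec.Decode(&doc)
			if err == io.EOF {
				break // end of the multi-document stream
			}
			if err != nil {
				panic(err)
			}
			fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
		}
	}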
	I1027 19:41:40.191421  604470 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1027 19:41:40.203956  604470 binaries.go:44] Found k8s binaries, skipping transfer
	I1027 19:41:40.204041  604470 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1027 19:41:40.215343  604470 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1027 19:41:40.233720  604470 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1027 19:41:40.252697  604470 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1027 19:41:40.272821  604470 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1027 19:41:40.278144  604470 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 19:41:40.291130  604470 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 19:41:40.409925  604470 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 19:41:40.442023  604470 certs.go:69] Setting up /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/no-preload-095885 for IP: 192.168.76.2
	I1027 19:41:40.442046  604470 certs.go:195] generating shared ca certs ...
	I1027 19:41:40.442068  604470 certs.go:227] acquiring lock for ca certs: {Name:mk4bdbca32068f6f817fc35fdc496e961dc3e0d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:41:40.442266  604470 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21801-352833/.minikube/ca.key
	I1027 19:41:40.442349  604470 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21801-352833/.minikube/proxy-client-ca.key
	I1027 19:41:40.442366  604470 certs.go:257] generating profile certs ...
	I1027 19:41:40.442471  604470 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/no-preload-095885/client.key
	I1027 19:41:40.442549  604470 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/no-preload-095885/apiserver.key.e3f5f1b4
	I1027 19:41:40.442592  604470 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/no-preload-095885/proxy-client.key
	I1027 19:41:40.442739  604470 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/356415.pem (1338 bytes)
	W1027 19:41:40.442783  604470 certs.go:480] ignoring /home/jenkins/minikube-integration/21801-352833/.minikube/certs/356415_empty.pem, impossibly tiny 0 bytes
	I1027 19:41:40.442797  604470 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca-key.pem (1675 bytes)
	I1027 19:41:40.442829  604470 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca.pem (1078 bytes)
	I1027 19:41:40.442860  604470 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/cert.pem (1123 bytes)
	I1027 19:41:40.442893  604470 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/key.pem (1679 bytes)
	I1027 19:41:40.442943  604470 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-352833/.minikube/files/etc/ssl/certs/3564152.pem (1708 bytes)
	I1027 19:41:40.443783  604470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1027 19:41:40.472100  604470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1027 19:41:40.499393  604470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1027 19:41:40.537262  604470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1027 19:41:40.579913  604470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/no-preload-095885/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1027 19:41:40.608811  604470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/no-preload-095885/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1027 19:41:40.632386  604470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/no-preload-095885/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1027 19:41:40.656260  604470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/no-preload-095885/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1027 19:41:40.680265  604470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1027 19:41:40.705607  604470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/certs/356415.pem --> /usr/share/ca-certificates/356415.pem (1338 bytes)
	I1027 19:41:40.736685  604470 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/files/etc/ssl/certs/3564152.pem --> /usr/share/ca-certificates/3564152.pem (1708 bytes)
	I1027 19:41:40.757100  604470 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1027 19:41:40.772409  604470 ssh_runner.go:195] Run: openssl version
	I1027 19:41:40.780365  604470 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3564152.pem && ln -fs /usr/share/ca-certificates/3564152.pem /etc/ssl/certs/3564152.pem"
	I1027 19:41:40.793422  604470 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3564152.pem
	I1027 19:41:40.799200  604470 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 27 19:02 /usr/share/ca-certificates/3564152.pem
	I1027 19:41:40.799308  604470 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3564152.pem
	I1027 19:41:40.858798  604470 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3564152.pem /etc/ssl/certs/3ec20f2e.0"
	I1027 19:41:40.868961  604470 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1027 19:41:40.880504  604470 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1027 19:41:40.885790  604470 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 27 18:56 /usr/share/ca-certificates/minikubeCA.pem
	I1027 19:41:40.885859  604470 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1027 19:41:40.924743  604470 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1027 19:41:40.934706  604470 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/356415.pem && ln -fs /usr/share/ca-certificates/356415.pem /etc/ssl/certs/356415.pem"
	I1027 19:41:40.946199  604470 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/356415.pem
	I1027 19:41:40.950939  604470 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 27 19:02 /usr/share/ca-certificates/356415.pem
	I1027 19:41:40.951005  604470 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/356415.pem
	I1027 19:41:41.000533  604470 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/356415.pem /etc/ssl/certs/51391683.0"
	I1027 19:41:41.014358  604470 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1027 19:41:41.021053  604470 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1027 19:41:41.084013  604470 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1027 19:41:41.148741  604470 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1027 19:41:41.218562  604470 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1027 19:41:41.288867  604470 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1027 19:41:41.353211  604470 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
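Two conventions are at work in this certificate block. The openssl x509 -hash calls print each cert's subject-name hash, and the <hash>.0 symlinks created in /etc/ssl/certs (3ec20f2e.0, b5213941.0, 51391683.0) are how OpenSSL locates a CA by name during verification. The -checkend 86400 runs then assert that each control-plane certificate remains valid for at least another 24 hours (86400 seconds). That expiry check is easy to reproduce in Go, assuming a PEM-encoded certificate at one of the paths above:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// Mirror `openssl x509 -checkend 86400`: fail if the certificate
		// expires within the next 24 hours.
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			fmt.Println("certificate expires within 24h")
			os.Exit(1)
		}
		fmt.Println("certificate valid for at least 24h")
	}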
	I1027 19:41:41.427442  604470 kubeadm.go:400] StartCluster: {Name:no-preload-095885 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-095885 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 19:41:41.427559  604470 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 19:41:41.427629  604470 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 19:41:41.468277  604470 cri.go:89] found id: "5cea35874d5acf206b55e45b05f38d78ea9509d27b883c670c280fce93719392"
	I1027 19:41:41.468307  604470 cri.go:89] found id: "6027c707b2e6435987becfbc61cef802217623f703bccb12bb5716bc98c873a9"
	I1027 19:41:41.468328  604470 cri.go:89] found id: "b35fe833b6d5250c5b516a89c49b8f3808e23967fa3f1a0150b2cd20ac6d55ea"
	I1027 19:41:41.468332  604470 cri.go:89] found id: "781c3a34fe9cc4350ebd3342ca9b66e12ce9f3e6795ee22c7d4ed1e31f9fcd7c"
	I1027 19:41:41.468337  604470 cri.go:89] found id: ""
	I1027 19:41:41.468385  604470 ssh_runner.go:195] Run: sudo runc list -f json
	W1027 19:41:41.496192  604470 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T19:41:41Z" level=error msg="open /run/runc: no such file or directory"
	I1027 19:41:41.496273  604470 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1027 19:41:41.519529  604470 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1027 19:41:41.519555  604470 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1027 19:41:41.519608  604470 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1027 19:41:41.538281  604470 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1027 19:41:41.539465  604470 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-095885" does not appear in /home/jenkins/minikube-integration/21801-352833/kubeconfig
	I1027 19:41:41.540311  604470 kubeconfig.go:62] /home/jenkins/minikube-integration/21801-352833/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-095885" cluster setting kubeconfig missing "no-preload-095885" context setting]
	I1027 19:41:41.541309  604470 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-352833/kubeconfig: {Name:mk24cbe512a6907c874f3fb7a85390a8f9fd2b31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:41:41.543816  604470 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1027 19:41:41.556019  604470 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.76.2
	I1027 19:41:41.556065  604470 kubeadm.go:601] duration metric: took 36.50344ms to restartPrimaryControlPlane
	I1027 19:41:41.556079  604470 kubeadm.go:402] duration metric: took 128.653659ms to StartCluster
	I1027 19:41:41.556104  604470 settings.go:142] acquiring lock: {Name:mk8304c2106bf310642e0949fc0266ccb50f2f95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:41:41.556210  604470 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21801-352833/kubeconfig
	I1027 19:41:41.558163  604470 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-352833/kubeconfig: {Name:mk24cbe512a6907c874f3fb7a85390a8f9fd2b31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:41:41.558563  604470 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 19:41:41.558751  604470 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1027 19:41:41.558843  604470 config.go:182] Loaded profile config "no-preload-095885": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:41:41.558856  604470 addons.go:69] Setting storage-provisioner=true in profile "no-preload-095885"
	I1027 19:41:41.558874  604470 addons.go:238] Setting addon storage-provisioner=true in "no-preload-095885"
	W1027 19:41:41.558881  604470 addons.go:247] addon storage-provisioner should already be in state true
	I1027 19:41:41.558897  604470 addons.go:69] Setting default-storageclass=true in profile "no-preload-095885"
	I1027 19:41:41.558899  604470 addons.go:69] Setting dashboard=true in profile "no-preload-095885"
	I1027 19:41:41.558910  604470 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-095885"
	I1027 19:41:41.558913  604470 host.go:66] Checking if "no-preload-095885" exists ...
	I1027 19:41:41.558923  604470 addons.go:238] Setting addon dashboard=true in "no-preload-095885"
	W1027 19:41:41.558933  604470 addons.go:247] addon dashboard should already be in state true
	I1027 19:41:41.558968  604470 host.go:66] Checking if "no-preload-095885" exists ...
	I1027 19:41:41.559246  604470 cli_runner.go:164] Run: docker container inspect no-preload-095885 --format={{.State.Status}}
	I1027 19:41:41.559442  604470 cli_runner.go:164] Run: docker container inspect no-preload-095885 --format={{.State.Status}}
	I1027 19:41:41.559447  604470 cli_runner.go:164] Run: docker container inspect no-preload-095885 --format={{.State.Status}}
	I1027 19:41:41.560596  604470 out.go:179] * Verifying Kubernetes components...
	I1027 19:41:41.562119  604470 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 19:41:41.595253  604470 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 19:41:41.596954  604470 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 19:41:41.596978  604470 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1027 19:41:41.596977  604470 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1027 19:41:41.597040  604470 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-095885
	I1027 19:41:41.599089  604470 addons.go:238] Setting addon default-storageclass=true in "no-preload-095885"
	W1027 19:41:41.599564  604470 addons.go:247] addon default-storageclass should already be in state true
	I1027 19:41:41.599660  604470 host.go:66] Checking if "no-preload-095885" exists ...
	I1027 19:41:41.600184  604470 cli_runner.go:164] Run: docker container inspect no-preload-095885 --format={{.State.Status}}
	I1027 19:41:41.603401  604470 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1027 19:41:38.300298  601731 out.go:252]   - Booting up control plane ...
	I1027 19:41:38.300417  601731 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1027 19:41:38.300494  601731 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1027 19:41:38.301269  601731 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1027 19:41:38.317615  601731 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1027 19:41:38.317796  601731 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1027 19:41:38.325393  601731 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1027 19:41:38.325697  601731 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1027 19:41:38.325760  601731 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1027 19:41:38.436333  601731 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1027 19:41:38.436537  601731 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1027 19:41:38.938161  601731 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 502.02341ms
	I1027 19:41:38.941231  601731 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1027 19:41:38.941369  601731 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8444/livez
	I1027 19:41:38.941482  601731 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1027 19:41:38.941555  601731 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1027 19:41:41.544505  601731 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.602250744s
	I1027 19:41:41.585376  601731 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.643972656s
	I1027 19:41:43.443119  601731 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.501736668s
	I1027 19:41:43.458876  601731 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1027 19:41:43.475075  601731 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1027 19:41:43.490047  601731 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1027 19:41:43.490469  601731 kubeadm.go:318] [mark-control-plane] Marking the node default-k8s-diff-port-813397 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1027 19:41:43.505486  601731 kubeadm.go:318] [bootstrap-token] Using token: krqx3o.862otuv3ceo9vh3t
	I1027 19:41:41.604616  604470 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1027 19:41:41.604640  604470 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1027 19:41:41.604722  604470 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-095885
	I1027 19:41:41.642076  604470 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1027 19:41:41.642102  604470 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1027 19:41:41.642128  604470 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33455 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/no-preload-095885/id_rsa Username:docker}
	I1027 19:41:41.642178  604470 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-095885
	I1027 19:41:41.649571  604470 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33455 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/no-preload-095885/id_rsa Username:docker}
	I1027 19:41:41.669489  604470 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33455 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/no-preload-095885/id_rsa Username:docker}
	I1027 19:41:41.769731  604470 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 19:41:41.800064  604470 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 19:41:41.803061  604470 node_ready.go:35] waiting up to 6m0s for node "no-preload-095885" to be "Ready" ...
	I1027 19:41:41.815648  604470 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1027 19:41:41.815682  604470 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1027 19:41:41.833776  604470 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1027 19:41:41.833808  604470 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1027 19:41:41.838523  604470 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1027 19:41:41.852112  604470 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1027 19:41:41.852172  604470 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1027 19:41:41.868978  604470 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1027 19:41:41.869012  604470 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1027 19:41:41.886677  604470 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1027 19:41:41.886718  604470 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1027 19:41:41.902981  604470 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1027 19:41:41.903014  604470 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1027 19:41:41.919453  604470 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1027 19:41:41.919481  604470 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1027 19:41:41.934909  604470 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1027 19:41:41.934944  604470 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1027 19:41:41.950310  604470 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1027 19:41:41.950339  604470 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1027 19:41:41.965316  604470 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1027 19:41:43.194555  604470 node_ready.go:49] node "no-preload-095885" is "Ready"
	I1027 19:41:43.194602  604470 node_ready.go:38] duration metric: took 1.391504473s for node "no-preload-095885" to be "Ready" ...
	I1027 19:41:43.194623  604470 api_server.go:52] waiting for apiserver process to appear ...
	I1027 19:41:43.194689  604470 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
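The pgrep invocation above is how minikube decides the apiserver process exists before probing its healthz endpoint: -f matches the pattern against the full command line, -x requires the whole line to match, and -n keeps only the newest such process. Wrapped in Go, the same check is a single exec call (a sketch, not minikube's code):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// pgrep exits non-zero when nothing matches, which Go surfaces as err.
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err != nil {
			fmt.Println("no kube-apiserver process found")
			return
		}
		fmt.Println("kube-apiserver pid:", strings.TrimSpace(string(out)))
	}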
	I1027 19:41:43.814690  604470 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.014578847s)
	I1027 19:41:43.814719  604470 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.976163054s)
	I1027 19:41:43.814864  604470 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.849510648s)
	I1027 19:41:43.814904  604470 api_server.go:72] duration metric: took 2.256298641s to wait for apiserver process to appear ...
	I1027 19:41:43.814916  604470 api_server.go:88] waiting for apiserver healthz status ...
	I1027 19:41:43.814944  604470 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1027 19:41:43.816784  604470 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-095885 addons enable metrics-server
	
	I1027 19:41:43.819948  604470 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1027 19:41:43.819980  604470 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
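A 500 from /healthz with only the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks failing is the normal transient state shortly after apiserver start, so minikube simply keeps polling until every hook reports ok. A bare-bones poller in the same spirit, with the endpoint taken from the log (the TLS verification skip and fixed retry budget are illustrative shortcuts, not minikube's actual client):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The apiserver's cert is signed by the cluster CA; skipping
			// verification keeps this probe self-contained.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		for i := 0; i < 120; i++ {
			resp, err := client.Get("https://192.168.76.2:8443/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("healthz ok")
					return
				}
				fmt.Printf("healthz %d:\n%s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("apiserver never became healthy")
	}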
	I1027 19:41:43.822914  604470 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1027 19:41:43.507088  601731 out.go:252]   - Configuring RBAC rules ...
	I1027 19:41:43.507275  601731 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1027 19:41:43.512036  601731 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1027 19:41:43.521909  601731 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1027 19:41:43.525338  601731 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1027 19:41:43.530291  601731 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1027 19:41:43.534973  601731 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1027 19:41:43.852323  601731 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1027 19:41:44.278701  601731 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1027 19:41:44.852103  601731 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1027 19:41:44.852155  601731 kubeadm.go:318] 
	I1027 19:41:44.852218  601731 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1027 19:41:44.852228  601731 kubeadm.go:318] 
	I1027 19:41:44.852323  601731 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1027 19:41:44.852334  601731 kubeadm.go:318] 
	I1027 19:41:44.852367  601731 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1027 19:41:44.852488  601731 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1027 19:41:44.852606  601731 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1027 19:41:44.852620  601731 kubeadm.go:318] 
	I1027 19:41:44.852745  601731 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1027 19:41:44.852764  601731 kubeadm.go:318] 
	I1027 19:41:44.852840  601731 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1027 19:41:44.852846  601731 kubeadm.go:318] 
	I1027 19:41:44.852918  601731 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1027 19:41:44.853072  601731 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1027 19:41:44.853189  601731 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1027 19:41:44.853199  601731 kubeadm.go:318] 
	I1027 19:41:44.853305  601731 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1027 19:41:44.853397  601731 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1027 19:41:44.853405  601731 kubeadm.go:318] 
	I1027 19:41:44.853501  601731 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8444 --token krqx3o.862otuv3ceo9vh3t \
	I1027 19:41:44.853623  601731 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:ab29e81999671591f366788f5ae9ffb132789ebc71f7c0efdaecd38575a5ab6a \
	I1027 19:41:44.853652  601731 kubeadm.go:318] 	--control-plane 
	I1027 19:41:44.853660  601731 kubeadm.go:318] 
	I1027 19:41:44.853756  601731 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1027 19:41:44.853763  601731 kubeadm.go:318] 
	I1027 19:41:44.853857  601731 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8444 --token krqx3o.862otuv3ceo9vh3t \
	I1027 19:41:44.853980  601731 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:ab29e81999671591f366788f5ae9ffb132789ebc71f7c0efdaecd38575a5ab6a 
	I1027 19:41:44.858070  601731 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1042-gcp\n", err: exit status 1
	I1027 19:41:44.858260  601731 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1027 19:41:44.858298  601731 cni.go:84] Creating CNI manager for ""
	I1027 19:41:44.858315  601731 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 19:41:44.860093  601731 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1027 19:41:43.072036  594803 pod_ready.go:104] pod "coredns-66bc5c9577-9b9tz" is not "Ready", error: <nil>
	I1027 19:41:45.064331  594803 pod_ready.go:94] pod "coredns-66bc5c9577-9b9tz" is "Ready"
	I1027 19:41:45.064367  594803 pod_ready.go:86] duration metric: took 33.507313991s for pod "coredns-66bc5c9577-9b9tz" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:41:45.067917  594803 pod_ready.go:83] waiting for pod "etcd-embed-certs-919237" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:41:45.073097  594803 pod_ready.go:94] pod "etcd-embed-certs-919237" is "Ready"
	I1027 19:41:45.073166  594803 pod_ready.go:86] duration metric: took 5.183663ms for pod "etcd-embed-certs-919237" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:41:45.076002  594803 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-919237" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:41:45.081205  594803 pod_ready.go:94] pod "kube-apiserver-embed-certs-919237" is "Ready"
	I1027 19:41:45.081236  594803 pod_ready.go:86] duration metric: took 5.199151ms for pod "kube-apiserver-embed-certs-919237" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:41:45.083862  594803 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-919237" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:41:45.261326  594803 pod_ready.go:94] pod "kube-controller-manager-embed-certs-919237" is "Ready"
	I1027 19:41:45.261363  594803 pod_ready.go:86] duration metric: took 177.47609ms for pod "kube-controller-manager-embed-certs-919237" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:41:45.460944  594803 pod_ready.go:83] waiting for pod "kube-proxy-rrq2h" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:41:45.861056  594803 pod_ready.go:94] pod "kube-proxy-rrq2h" is "Ready"
	I1027 19:41:45.861085  594803 pod_ready.go:86] duration metric: took 400.103982ms for pod "kube-proxy-rrq2h" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:41:46.060781  594803 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-919237" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:41:46.460405  594803 pod_ready.go:94] pod "kube-scheduler-embed-certs-919237" is "Ready"
	I1027 19:41:46.460440  594803 pod_ready.go:86] duration metric: took 399.626731ms for pod "kube-scheduler-embed-certs-919237" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:41:46.460457  594803 pod_ready.go:40] duration metric: took 34.907882675s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 19:41:46.509120  594803 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1027 19:41:46.510744  594803 out.go:179] * Done! kubectl is now configured to use "embed-certs-919237" cluster and "default" namespace by default
	I1027 19:41:44.538618  565798 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1027 19:41:44.538685  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 19:41:44.538754  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 19:41:44.570896  565798 cri.go:89] found id: "ca67cda12e0adb415e229ae9e136a15743c92bb79ef8987bb33523c43775a99e"
	I1027 19:41:44.570916  565798 cri.go:89] found id: "f34fc0e2944cfceb17dc29d80d9459a84377ffb9e427b9efe6e57ccc03a358c8"
	I1027 19:41:44.570920  565798 cri.go:89] found id: ""
	I1027 19:41:44.570928  565798 logs.go:282] 2 containers: [ca67cda12e0adb415e229ae9e136a15743c92bb79ef8987bb33523c43775a99e f34fc0e2944cfceb17dc29d80d9459a84377ffb9e427b9efe6e57ccc03a358c8]
	I1027 19:41:44.570991  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:41:44.575567  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:41:44.580108  565798 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 19:41:44.580192  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 19:41:44.610459  565798 cri.go:89] found id: ""
	I1027 19:41:44.610487  565798 logs.go:282] 0 containers: []
	W1027 19:41:44.610495  565798 logs.go:284] No container was found matching "etcd"
	I1027 19:41:44.610501  565798 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 19:41:44.610551  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 19:41:44.645683  565798 cri.go:89] found id: ""
	I1027 19:41:44.645709  565798 logs.go:282] 0 containers: []
	W1027 19:41:44.645718  565798 logs.go:284] No container was found matching "coredns"
	I1027 19:41:44.645724  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 19:41:44.645789  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 19:41:44.682434  565798 cri.go:89] found id: "15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8"
	I1027 19:41:44.682460  565798 cri.go:89] found id: ""
	I1027 19:41:44.682470  565798 logs.go:282] 1 containers: [15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8]
	I1027 19:41:44.682555  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:41:44.687961  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 19:41:44.688032  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 19:41:44.721808  565798 cri.go:89] found id: ""
	I1027 19:41:44.721840  565798 logs.go:282] 0 containers: []
	W1027 19:41:44.721853  565798 logs.go:284] No container was found matching "kube-proxy"
	I1027 19:41:44.721862  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 19:41:44.721927  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 19:41:44.756857  565798 cri.go:89] found id: "4b0186426a494845ce9fa7af7755d0c2f9549f935b11a34bd738219dd3bfd4f5"
	I1027 19:41:44.756883  565798 cri.go:89] found id: "38a297db55d8c45b09d97e05e26cfe590841ea63fa7ee874cf818f8ae5fcff77"
	I1027 19:41:44.756904  565798 cri.go:89] found id: ""
	I1027 19:41:44.756916  565798 logs.go:282] 2 containers: [4b0186426a494845ce9fa7af7755d0c2f9549f935b11a34bd738219dd3bfd4f5 38a297db55d8c45b09d97e05e26cfe590841ea63fa7ee874cf818f8ae5fcff77]
	I1027 19:41:44.756983  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:41:44.761788  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:41:44.766868  565798 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 19:41:44.766946  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 19:41:44.800248  565798 cri.go:89] found id: ""
	I1027 19:41:44.800279  565798 logs.go:282] 0 containers: []
	W1027 19:41:44.800315  565798 logs.go:284] No container was found matching "kindnet"
	I1027 19:41:44.800324  565798 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 19:41:44.800395  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 19:41:44.835666  565798 cri.go:89] found id: ""
	I1027 19:41:44.835706  565798 logs.go:282] 0 containers: []
	W1027 19:41:44.835717  565798 logs.go:284] No container was found matching "storage-provisioner"
	I1027 19:41:44.835734  565798 logs.go:123] Gathering logs for describe nodes ...
	I1027 19:41:44.835749  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
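
Each cri.go block above follows the same pattern: run `crictl ps -a --quiet --name=<name>` over SSH and treat every non-empty output line as a container ID, with zero lines producing the "No container was found matching ..." warnings. A hedged sketch of that loop (listCRIContainers is a hypothetical helper):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listCRIContainers shells out like the ssh_runner lines above:
// `sudo crictl ps -a --quiet --name=<re>` prints one container ID per line.
func listCRIContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(string(out), "\n") {
		if line = strings.TrimSpace(line); line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	ids, err := listCRIContainers("kube-apiserver")
	fmt.Printf("%d containers: %v (err=%v)\n", len(ids), ids, err)
}
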
	I1027 19:41:44.861324  601731 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1027 19:41:44.866168  601731 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1027 19:41:44.866192  601731 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1027 19:41:44.881418  601731 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1027 19:41:45.150403  601731 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1027 19:41:45.150477  601731 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:41:45.150519  601731 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-813397 minikube.k8s.io/updated_at=2025_10_27T19_41_45_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6a09c31caecd3126cc8337ca95e52380a56f5a0f minikube.k8s.io/name=default-k8s-diff-port-813397 minikube.k8s.io/primary=true
	I1027 19:41:45.161982  601731 ops.go:34] apiserver oom_adj: -16
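
The ops.go line above (`apiserver oom_adj: -16`) comes from the `cat /proc/$(pgrep kube-apiserver)/oom_adj` command logged just before it. A small illustrative equivalent of that probe, under the assumption that one match from pgrep is enough:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Locate the kube-apiserver PID, as `pgrep kube-apiserver` does above.
	out, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		fmt.Println("kube-apiserver not running:", err)
		return
	}
	pid := strings.Fields(string(out))[0] // first match suffices for the sketch
	// Read the OOM adjustment; the run above reports -16, meaning the kernel
	// is strongly discouraged from OOM-killing the API server.
	data, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		fmt.Println("read failed:", err)
		return
	}
	fmt.Printf("apiserver oom_adj: %s\n", strings.TrimSpace(string(data)))
}
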
	I1027 19:41:45.254791  601731 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:41:45.754999  601731 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:41:46.255609  601731 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:41:46.755522  601731 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:41:47.255889  601731 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:41:47.755557  601731 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:41:43.824287  604470 addons.go:514] duration metric: took 2.26556207s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1027 19:41:44.315425  604470 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1027 19:41:44.320715  604470 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1027 19:41:44.320752  604470 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1027 19:41:44.815353  604470 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1027 19:41:44.820262  604470 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1027 19:41:44.821434  604470 api_server.go:141] control plane version: v1.34.1
	I1027 19:41:44.821467  604470 api_server.go:131] duration metric: took 1.006539225s to wait for apiserver health ...
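
The api_server.go exchange above is a plain HTTPS GET against /healthz: a 500 body enumerates every poststarthook check, with [-] marking the failing one (here rbac/bootstrap-roles), and a bare 200 "ok" ends the wait. A minimal probe along those lines; checkHealthz is a hypothetical helper, and TLS verification is skipped only for brevity where a real client would trust the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func checkHealthz(url string) (int, string, error) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Demo only: a production check would verify the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return 0, "", err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return resp.StatusCode, string(body), nil
}

func main() {
	code, body, err := checkHealthz("https://192.168.76.2:8443/healthz")
	if err != nil {
		fmt.Println("healthz unreachable:", err)
		return
	}
	fmt.Println(code) // 500 lists each poststarthook check; 200 returns "ok"
	fmt.Println(body)
}
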
	I1027 19:41:44.821478  604470 system_pods.go:43] waiting for kube-system pods to appear ...
	I1027 19:41:44.825589  604470 system_pods.go:59] 8 kube-system pods found
	I1027 19:41:44.825638  604470 system_pods.go:61] "coredns-66bc5c9577-gwqvg" [3bcd75c1-f42f-4252-b1fc-2bdab3c8373e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 19:41:44.825647  604470 system_pods.go:61] "etcd-no-preload-095885" [398272ac-d5cc-44d6-bf2a-3469d316b417] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1027 19:41:44.825653  604470 system_pods.go:61] "kindnet-8lbz5" [42b05fb3-87d3-412f-ac73-cb73a737aab1] Running
	I1027 19:41:44.825660  604470 system_pods.go:61] "kube-apiserver-no-preload-095885" [d609db88-4097-43b5-b881-a445344edf64] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1027 19:41:44.825669  604470 system_pods.go:61] "kube-controller-manager-no-preload-095885" [b1bfd486-ed1f-4f8b-a08b-de7739f1dd9c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1027 19:41:44.825678  604470 system_pods.go:61] "kube-proxy-wz64m" [339cb07c-5319-4d8b-ab61-a6d377c2bc61] Running
	I1027 19:41:44.825686  604470 system_pods.go:61] "kube-scheduler-no-preload-095885" [7ba1709a-c913-40f3-833b-bee63057ce6e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1027 19:41:44.825698  604470 system_pods.go:61] "storage-provisioner" [e8283562-be98-444b-b591-a0239860e729] Running
	I1027 19:41:44.825709  604470 system_pods.go:74] duration metric: took 4.221591ms to wait for pod list to return data ...
	I1027 19:41:44.825723  604470 default_sa.go:34] waiting for default service account to be created ...
	I1027 19:41:44.828240  604470 default_sa.go:45] found service account: "default"
	I1027 19:41:44.828270  604470 default_sa.go:55] duration metric: took 2.538409ms for default service account to be created ...
	I1027 19:41:44.828282  604470 system_pods.go:116] waiting for k8s-apps to be running ...
	I1027 19:41:44.926381  604470 system_pods.go:86] 8 kube-system pods found
	I1027 19:41:44.926413  604470 system_pods.go:89] "coredns-66bc5c9577-gwqvg" [3bcd75c1-f42f-4252-b1fc-2bdab3c8373e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 19:41:44.926422  604470 system_pods.go:89] "etcd-no-preload-095885" [398272ac-d5cc-44d6-bf2a-3469d316b417] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1027 19:41:44.926428  604470 system_pods.go:89] "kindnet-8lbz5" [42b05fb3-87d3-412f-ac73-cb73a737aab1] Running
	I1027 19:41:44.926434  604470 system_pods.go:89] "kube-apiserver-no-preload-095885" [d609db88-4097-43b5-b881-a445344edf64] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1027 19:41:44.926439  604470 system_pods.go:89] "kube-controller-manager-no-preload-095885" [b1bfd486-ed1f-4f8b-a08b-de7739f1dd9c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1027 19:41:44.926451  604470 system_pods.go:89] "kube-proxy-wz64m" [339cb07c-5319-4d8b-ab61-a6d377c2bc61] Running
	I1027 19:41:44.926456  604470 system_pods.go:89] "kube-scheduler-no-preload-095885" [7ba1709a-c913-40f3-833b-bee63057ce6e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1027 19:41:44.926460  604470 system_pods.go:89] "storage-provisioner" [e8283562-be98-444b-b591-a0239860e729] Running
	I1027 19:41:44.926469  604470 system_pods.go:126] duration metric: took 98.179751ms to wait for k8s-apps to be running ...
	I1027 19:41:44.926480  604470 system_svc.go:44] waiting for kubelet service to be running ....
	I1027 19:41:44.926529  604470 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 19:41:44.941077  604470 system_svc.go:56] duration metric: took 14.581965ms WaitForService to wait for kubelet
	I1027 19:41:44.941113  604470 kubeadm.go:586] duration metric: took 3.382507903s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 19:41:44.941151  604470 node_conditions.go:102] verifying NodePressure condition ...
	I1027 19:41:44.946437  604470 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1027 19:41:44.946470  604470 node_conditions.go:123] node cpu capacity is 8
	I1027 19:41:44.946483  604470 node_conditions.go:105] duration metric: took 5.326508ms to run NodePressure ...
	I1027 19:41:44.946497  604470 start.go:241] waiting for startup goroutines ...
	I1027 19:41:44.946504  604470 start.go:246] waiting for cluster config update ...
	I1027 19:41:44.946514  604470 start.go:255] writing updated cluster config ...
	I1027 19:41:44.946761  604470 ssh_runner.go:195] Run: rm -f paused
	I1027 19:41:44.952271  604470 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 19:41:44.957117  604470 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-gwqvg" in "kube-system" namespace to be "Ready" or be gone ...
	W1027 19:41:46.963263  604470 pod_ready.go:104] pod "coredns-66bc5c9577-gwqvg" is not "Ready", error: <nil>
	I1027 19:41:48.255289  601731 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:41:48.755892  601731 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:41:49.255340  601731 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:41:49.333754  601731 kubeadm.go:1113] duration metric: took 4.183323316s to wait for elevateKubeSystemPrivileges
	I1027 19:41:49.333798  601731 kubeadm.go:402] duration metric: took 15.685937442s to StartCluster
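
The kubeadm.go line above times the elevateKubeSystemPrivileges step: the repeated `kubectl get sa default` runs poll until the default ServiceAccount exists, alongside the `create clusterrolebinding minikube-rbac` command logged at 19:41:45. A rough, deliberately sequential sketch of that flow (the real run issues the binding concurrently; the 500ms interval and 2m cap are assumptions):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.34.1/kubectl"
	kubeconfig := "--kubeconfig=/var/lib/minikube/kubeconfig"

	// Poll until the default ServiceAccount exists, like the repeated
	// `kubectl get sa default` runs above.
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		if exec.Command("sudo", kubectl, "get", "sa", "default", kubeconfig).Run() == nil {
			break
		}
		time.Sleep(500 * time.Millisecond)
	}

	// Then grant cluster-admin, matching the minikube-rbac binding above.
	out, err := exec.Command("sudo", kubectl,
		"create", "clusterrolebinding", "minikube-rbac",
		"--clusterrole=cluster-admin",
		"--serviceaccount=kube-system:default", kubeconfig).CombinedOutput()
	fmt.Println(string(out), err)
}
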
	I1027 19:41:49.333821  601731 settings.go:142] acquiring lock: {Name:mk8304c2106bf310642e0949fc0266ccb50f2f95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:41:49.333908  601731 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21801-352833/kubeconfig
	I1027 19:41:49.336376  601731 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-352833/kubeconfig: {Name:mk24cbe512a6907c874f3fb7a85390a8f9fd2b31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:41:49.336733  601731 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 19:41:49.336753  601731 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1027 19:41:49.336768  601731 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1027 19:41:49.336883  601731 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-813397"
	I1027 19:41:49.336906  601731 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-813397"
	I1027 19:41:49.336944  601731 host.go:66] Checking if "default-k8s-diff-port-813397" exists ...
	I1027 19:41:49.336961  601731 config.go:182] Loaded profile config "default-k8s-diff-port-813397": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:41:49.337020  601731 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-813397"
	I1027 19:41:49.337077  601731 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-813397"
	I1027 19:41:49.337585  601731 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-813397 --format={{.State.Status}}
	I1027 19:41:49.337601  601731 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-813397 --format={{.State.Status}}
	I1027 19:41:49.338721  601731 out.go:179] * Verifying Kubernetes components...
	I1027 19:41:49.340417  601731 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 19:41:49.366581  601731 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 19:41:49.368484  601731 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 19:41:49.368512  601731 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1027 19:41:49.368577  601731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-813397
	I1027 19:41:49.368993  601731 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-813397"
	I1027 19:41:49.369042  601731 host.go:66] Checking if "default-k8s-diff-port-813397" exists ...
	I1027 19:41:49.369588  601731 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-813397 --format={{.State.Status}}
	I1027 19:41:49.403359  601731 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1027 19:41:49.403384  601731 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1027 19:41:49.403449  601731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-813397
	I1027 19:41:49.404410  601731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33450 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/default-k8s-diff-port-813397/id_rsa Username:docker}
	I1027 19:41:49.428863  601731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33450 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/default-k8s-diff-port-813397/id_rsa Username:docker}
	I1027 19:41:49.444289  601731 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1027 19:41:49.509786  601731 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 19:41:49.543593  601731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 19:41:49.558735  601731 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1027 19:41:49.669901  601731 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-813397" to be "Ready" ...
	I1027 19:41:49.670465  601731 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1027 19:41:49.910815  601731 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1027 19:41:49.911962  601731 addons.go:514] duration metric: took 575.181626ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1027 19:41:50.176449  601731 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-813397" context rescaled to 1 replicas
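
The long sed pipeline logged at 19:41:49.444289 rewrites the coredns ConfigMap before piping it to `kubectl replace -f -`: it inserts a hosts block ahead of the `forward . /etc/resolv.conf` line and a `log` directive ahead of `errors`. Reconstructed from that sed expression (not pulled from the live ConfigMap), the injected Corefile fragment is:

        hosts {
           192.168.85.1 host.minikube.internal
           fallthrough
        }

so pods resolve host.minikube.internal to the gateway address 192.168.85.1, which is what the `{"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap` line above confirms.
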
	W1027 19:41:51.673949  601731 node_ready.go:57] node "default-k8s-diff-port-813397" has "Ready":"False" status (will retry)
	W1027 19:41:48.963530  604470 pod_ready.go:104] pod "coredns-66bc5c9577-gwqvg" is not "Ready", error: <nil>
	W1027 19:41:50.963609  604470 pod_ready.go:104] pod "coredns-66bc5c9577-gwqvg" is not "Ready", error: <nil>
	W1027 19:41:52.963991  604470 pod_ready.go:104] pod "coredns-66bc5c9577-gwqvg" is not "Ready", error: <nil>
	I1027 19:41:54.914575  565798 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.078798379s)
	W1027 19:41:54.914611  565798 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1027 19:41:54.914619  565798 logs.go:123] Gathering logs for kube-apiserver [ca67cda12e0adb415e229ae9e136a15743c92bb79ef8987bb33523c43775a99e] ...
	I1027 19:41:54.914633  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ca67cda12e0adb415e229ae9e136a15743c92bb79ef8987bb33523c43775a99e"
	I1027 19:41:54.948527  565798 logs.go:123] Gathering logs for kube-apiserver [f34fc0e2944cfceb17dc29d80d9459a84377ffb9e427b9efe6e57ccc03a358c8] ...
	I1027 19:41:54.948570  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f34fc0e2944cfceb17dc29d80d9459a84377ffb9e427b9efe6e57ccc03a358c8"
	I1027 19:41:54.984187  565798 logs.go:123] Gathering logs for kube-controller-manager [38a297db55d8c45b09d97e05e26cfe590841ea63fa7ee874cf818f8ae5fcff77] ...
	I1027 19:41:54.984223  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 38a297db55d8c45b09d97e05e26cfe590841ea63fa7ee874cf818f8ae5fcff77"
	I1027 19:41:55.013391  565798 logs.go:123] Gathering logs for CRI-O ...
	I1027 19:41:55.013427  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 19:41:55.066061  565798 logs.go:123] Gathering logs for container status ...
	I1027 19:41:55.066107  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 19:41:55.099106  565798 logs.go:123] Gathering logs for kubelet ...
	I1027 19:41:55.099154  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 19:41:55.196824  565798 logs.go:123] Gathering logs for dmesg ...
	I1027 19:41:55.196863  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 19:41:55.217221  565798 logs.go:123] Gathering logs for kube-scheduler [15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8] ...
	I1027 19:41:55.217262  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8"
	I1027 19:41:55.269370  565798 logs.go:123] Gathering logs for kube-controller-manager [4b0186426a494845ce9fa7af7755d0c2f9549f935b11a34bd738219dd3bfd4f5] ...
	I1027 19:41:55.269416  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4b0186426a494845ce9fa7af7755d0c2f9549f935b11a34bd738219dd3bfd4f5"
	W1027 19:41:53.674287  601731 node_ready.go:57] node "default-k8s-diff-port-813397" has "Ready":"False" status (will retry)
	W1027 19:41:56.173594  601731 node_ready.go:57] node "default-k8s-diff-port-813397" has "Ready":"False" status (will retry)
	W1027 19:41:55.462229  604470 pod_ready.go:104] pod "coredns-66bc5c9577-gwqvg" is not "Ready", error: <nil>
	W1027 19:41:57.464007  604470 pod_ready.go:104] pod "coredns-66bc5c9577-gwqvg" is not "Ready", error: <nil>
	I1027 19:41:57.800286  565798 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1027 19:41:58.495611  565798 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": read tcp 192.168.103.1:33390->192.168.103.2:8443: read: connection reset by peer
	I1027 19:41:58.495681  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 19:41:58.495740  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 19:41:58.529775  565798 cri.go:89] found id: "ca67cda12e0adb415e229ae9e136a15743c92bb79ef8987bb33523c43775a99e"
	I1027 19:41:58.529797  565798 cri.go:89] found id: "f34fc0e2944cfceb17dc29d80d9459a84377ffb9e427b9efe6e57ccc03a358c8"
	I1027 19:41:58.529800  565798 cri.go:89] found id: ""
	I1027 19:41:58.529809  565798 logs.go:282] 2 containers: [ca67cda12e0adb415e229ae9e136a15743c92bb79ef8987bb33523c43775a99e f34fc0e2944cfceb17dc29d80d9459a84377ffb9e427b9efe6e57ccc03a358c8]
	I1027 19:41:58.529860  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:41:58.534513  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:41:58.539198  565798 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 19:41:58.539272  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 19:41:58.576850  565798 cri.go:89] found id: ""
	I1027 19:41:58.576878  565798 logs.go:282] 0 containers: []
	W1027 19:41:58.576888  565798 logs.go:284] No container was found matching "etcd"
	I1027 19:41:58.576894  565798 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 19:41:58.576967  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 19:41:58.608730  565798 cri.go:89] found id: ""
	I1027 19:41:58.608758  565798 logs.go:282] 0 containers: []
	W1027 19:41:58.608767  565798 logs.go:284] No container was found matching "coredns"
	I1027 19:41:58.608774  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 19:41:58.608835  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 19:41:58.642647  565798 cri.go:89] found id: "15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8"
	I1027 19:41:58.642669  565798 cri.go:89] found id: ""
	I1027 19:41:58.642685  565798 logs.go:282] 1 containers: [15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8]
	I1027 19:41:58.642745  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:41:58.647403  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 19:41:58.647540  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 19:41:58.678247  565798 cri.go:89] found id: ""
	I1027 19:41:58.678281  565798 logs.go:282] 0 containers: []
	W1027 19:41:58.678293  565798 logs.go:284] No container was found matching "kube-proxy"
	I1027 19:41:58.678302  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 19:41:58.678362  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 19:41:58.710808  565798 cri.go:89] found id: "4b0186426a494845ce9fa7af7755d0c2f9549f935b11a34bd738219dd3bfd4f5"
	I1027 19:41:58.710833  565798 cri.go:89] found id: "38a297db55d8c45b09d97e05e26cfe590841ea63fa7ee874cf818f8ae5fcff77"
	I1027 19:41:58.710840  565798 cri.go:89] found id: ""
	I1027 19:41:58.710851  565798 logs.go:282] 2 containers: [4b0186426a494845ce9fa7af7755d0c2f9549f935b11a34bd738219dd3bfd4f5 38a297db55d8c45b09d97e05e26cfe590841ea63fa7ee874cf818f8ae5fcff77]
	I1027 19:41:58.710907  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:41:58.715281  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:41:58.719949  565798 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 19:41:58.720041  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 19:41:58.752782  565798 cri.go:89] found id: ""
	I1027 19:41:58.752815  565798 logs.go:282] 0 containers: []
	W1027 19:41:58.752829  565798 logs.go:284] No container was found matching "kindnet"
	I1027 19:41:58.752837  565798 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 19:41:58.752904  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 19:41:58.787214  565798 cri.go:89] found id: ""
	I1027 19:41:58.787247  565798 logs.go:282] 0 containers: []
	W1027 19:41:58.787266  565798 logs.go:284] No container was found matching "storage-provisioner"
	I1027 19:41:58.787293  565798 logs.go:123] Gathering logs for kube-scheduler [15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8] ...
	I1027 19:41:58.787313  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8"
	I1027 19:41:58.847441  565798 logs.go:123] Gathering logs for kube-controller-manager [4b0186426a494845ce9fa7af7755d0c2f9549f935b11a34bd738219dd3bfd4f5] ...
	I1027 19:41:58.847482  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4b0186426a494845ce9fa7af7755d0c2f9549f935b11a34bd738219dd3bfd4f5"
	I1027 19:41:58.877227  565798 logs.go:123] Gathering logs for kube-controller-manager [38a297db55d8c45b09d97e05e26cfe590841ea63fa7ee874cf818f8ae5fcff77] ...
	I1027 19:41:58.877255  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 38a297db55d8c45b09d97e05e26cfe590841ea63fa7ee874cf818f8ae5fcff77"
	I1027 19:41:58.906676  565798 logs.go:123] Gathering logs for CRI-O ...
	I1027 19:41:58.906709  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 19:41:58.956472  565798 logs.go:123] Gathering logs for container status ...
	I1027 19:41:58.956506  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 19:41:58.992084  565798 logs.go:123] Gathering logs for kubelet ...
	I1027 19:41:58.992112  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 19:41:59.103261  565798 logs.go:123] Gathering logs for describe nodes ...
	I1027 19:41:59.103300  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 19:41:59.167726  565798 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 19:41:59.167749  565798 logs.go:123] Gathering logs for kube-apiserver [f34fc0e2944cfceb17dc29d80d9459a84377ffb9e427b9efe6e57ccc03a358c8] ...
	I1027 19:41:59.167774  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 f34fc0e2944cfceb17dc29d80d9459a84377ffb9e427b9efe6e57ccc03a358c8"
	I1027 19:41:59.207077  565798 logs.go:123] Gathering logs for dmesg ...
	I1027 19:41:59.207120  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 19:41:59.231280  565798 logs.go:123] Gathering logs for kube-apiserver [ca67cda12e0adb415e229ae9e136a15743c92bb79ef8987bb33523c43775a99e] ...
	I1027 19:41:59.231321  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ca67cda12e0adb415e229ae9e136a15743c92bb79ef8987bb33523c43775a99e"
	W1027 19:41:58.674644  601731 node_ready.go:57] node "default-k8s-diff-port-813397" has "Ready":"False" status (will retry)
	I1027 19:42:00.673457  601731 node_ready.go:49] node "default-k8s-diff-port-813397" is "Ready"
	I1027 19:42:00.673497  601731 node_ready.go:38] duration metric: took 11.003552707s for node "default-k8s-diff-port-813397" to be "Ready" ...
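
The node_ready.go wait above retries until the node's NodeReady condition flips to True (about 11 seconds here). A compact client-go sketch of that check; nodeReady is a hypothetical helper and the KUBECONFIG wiring is an assumption:

package main

import (
	"context"
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the node's NodeReady condition is True,
// the same status the node_ready.go lines above keep polling.
func nodeReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ok, err := nodeReady(context.Background(), cs, "default-k8s-diff-port-813397")
	fmt.Println("Ready:", ok, "err:", err)
}
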
	I1027 19:42:00.673549  601731 api_server.go:52] waiting for apiserver process to appear ...
	I1027 19:42:00.673617  601731 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 19:42:00.686489  601731 api_server.go:72] duration metric: took 11.349700894s to wait for apiserver process to appear ...
	I1027 19:42:00.686525  601731 api_server.go:88] waiting for apiserver healthz status ...
	I1027 19:42:00.686549  601731 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1027 19:42:00.692401  601731 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1027 19:42:00.695549  601731 api_server.go:141] control plane version: v1.34.1
	I1027 19:42:00.695588  601731 api_server.go:131] duration metric: took 9.0538ms to wait for apiserver health ...
	I1027 19:42:00.695599  601731 system_pods.go:43] waiting for kube-system pods to appear ...
	I1027 19:42:00.698944  601731 system_pods.go:59] 8 kube-system pods found
	I1027 19:42:00.698990  601731 system_pods.go:61] "coredns-66bc5c9577-d2trp" [5445ece0-9eae-47b4-8082-3f79d585e065] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 19:42:00.698999  601731 system_pods.go:61] "etcd-default-k8s-diff-port-813397" [90566cce-4c3c-4e16-a0d3-955d91942b09] Running
	I1027 19:42:00.699007  601731 system_pods.go:61] "kindnet-hhddd" [1c4e40c1-8157-41f3-9ff0-7c2dcfa3f154] Running
	I1027 19:42:00.699012  601731 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-813397" [57074487-e665-431e-b5ac-3c1d9758ef25] Running
	I1027 19:42:00.699016  601731 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-813397" [f3233db0-178b-40d5-8cc5-80c9497a9755] Running
	I1027 19:42:00.699020  601731 system_pods.go:61] "kube-proxy-bldc8" [ed0e06ee-d1dd-4efb-8ec1-979cc70b7b23] Running
	I1027 19:42:00.699024  601731 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-813397" [259ef782-4973-4d4b-8154-0142e7a68ec6] Running
	I1027 19:42:00.699029  601731 system_pods.go:61] "storage-provisioner" [9e91fe3a-fd72-4ccb-b553-e13874944e3b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 19:42:00.699041  601731 system_pods.go:74] duration metric: took 3.435313ms to wait for pod list to return data ...
	I1027 19:42:00.699054  601731 default_sa.go:34] waiting for default service account to be created ...
	I1027 19:42:00.701207  601731 default_sa.go:45] found service account: "default"
	I1027 19:42:00.701240  601731 default_sa.go:55] duration metric: took 2.169207ms for default service account to be created ...
	I1027 19:42:00.701253  601731 system_pods.go:116] waiting for k8s-apps to be running ...
	I1027 19:42:00.704538  601731 system_pods.go:86] 8 kube-system pods found
	I1027 19:42:00.704572  601731 system_pods.go:89] "coredns-66bc5c9577-d2trp" [5445ece0-9eae-47b4-8082-3f79d585e065] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 19:42:00.704579  601731 system_pods.go:89] "etcd-default-k8s-diff-port-813397" [90566cce-4c3c-4e16-a0d3-955d91942b09] Running
	I1027 19:42:00.704585  601731 system_pods.go:89] "kindnet-hhddd" [1c4e40c1-8157-41f3-9ff0-7c2dcfa3f154] Running
	I1027 19:42:00.704589  601731 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-813397" [57074487-e665-431e-b5ac-3c1d9758ef25] Running
	I1027 19:42:00.704600  601731 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-813397" [f3233db0-178b-40d5-8cc5-80c9497a9755] Running
	I1027 19:42:00.704605  601731 system_pods.go:89] "kube-proxy-bldc8" [ed0e06ee-d1dd-4efb-8ec1-979cc70b7b23] Running
	I1027 19:42:00.704610  601731 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-813397" [259ef782-4973-4d4b-8154-0142e7a68ec6] Running
	I1027 19:42:00.704617  601731 system_pods.go:89] "storage-provisioner" [9e91fe3a-fd72-4ccb-b553-e13874944e3b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 19:42:00.704647  601731 retry.go:31] will retry after 309.472381ms: missing components: kube-dns
	I1027 19:42:01.019440  601731 system_pods.go:86] 8 kube-system pods found
	I1027 19:42:01.019484  601731 system_pods.go:89] "coredns-66bc5c9577-d2trp" [5445ece0-9eae-47b4-8082-3f79d585e065] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1027 19:42:01.019506  601731 system_pods.go:89] "etcd-default-k8s-diff-port-813397" [90566cce-4c3c-4e16-a0d3-955d91942b09] Running
	I1027 19:42:01.019516  601731 system_pods.go:89] "kindnet-hhddd" [1c4e40c1-8157-41f3-9ff0-7c2dcfa3f154] Running
	I1027 19:42:01.019522  601731 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-813397" [57074487-e665-431e-b5ac-3c1d9758ef25] Running
	I1027 19:42:01.019529  601731 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-813397" [f3233db0-178b-40d5-8cc5-80c9497a9755] Running
	I1027 19:42:01.019535  601731 system_pods.go:89] "kube-proxy-bldc8" [ed0e06ee-d1dd-4efb-8ec1-979cc70b7b23] Running
	I1027 19:42:01.019542  601731 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-813397" [259ef782-4973-4d4b-8154-0142e7a68ec6] Running
	I1027 19:42:01.019554  601731 system_pods.go:89] "storage-provisioner" [9e91fe3a-fd72-4ccb-b553-e13874944e3b] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1027 19:42:01.019577  601731 retry.go:31] will retry after 354.536397ms: missing components: kube-dns
	I1027 19:42:01.378723  601731 system_pods.go:86] 8 kube-system pods found
	I1027 19:42:01.378759  601731 system_pods.go:89] "coredns-66bc5c9577-d2trp" [5445ece0-9eae-47b4-8082-3f79d585e065] Running
	I1027 19:42:01.378769  601731 system_pods.go:89] "etcd-default-k8s-diff-port-813397" [90566cce-4c3c-4e16-a0d3-955d91942b09] Running
	I1027 19:42:01.378776  601731 system_pods.go:89] "kindnet-hhddd" [1c4e40c1-8157-41f3-9ff0-7c2dcfa3f154] Running
	I1027 19:42:01.378782  601731 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-813397" [57074487-e665-431e-b5ac-3c1d9758ef25] Running
	I1027 19:42:01.378787  601731 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-813397" [f3233db0-178b-40d5-8cc5-80c9497a9755] Running
	I1027 19:42:01.378793  601731 system_pods.go:89] "kube-proxy-bldc8" [ed0e06ee-d1dd-4efb-8ec1-979cc70b7b23] Running
	I1027 19:42:01.378798  601731 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-813397" [259ef782-4973-4d4b-8154-0142e7a68ec6] Running
	I1027 19:42:01.378807  601731 system_pods.go:89] "storage-provisioner" [9e91fe3a-fd72-4ccb-b553-e13874944e3b] Running
	I1027 19:42:01.378817  601731 system_pods.go:126] duration metric: took 677.556843ms to wait for k8s-apps to be running ...
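
The retry.go lines above ("will retry after 309.472381ms/354.536397ms: missing components: kube-dns") re-check the pod list with a short randomized wait between attempts. A generic sketch of that pattern, loosely mirroring the logged intervals; retryUntil and the 250ms base are assumptions, not minikube's actual backoff parameters:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryUntil re-runs check with a jittered wait between attempts until it
// succeeds or the overall timeout elapses.
func retryUntil(timeout time.Duration, check func() error) error {
	base := 250 * time.Millisecond
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		err := check()
		if err == nil {
			return nil
		}
		wait := base + time.Duration(rand.Int63n(int64(base))) // jitter
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
	}
	return errors.New("timed out")
}

func main() {
	attempts := 0
	_ = retryUntil(5*time.Second, func() error {
		if attempts++; attempts < 3 {
			return errors.New("missing components: kube-dns")
		}
		return nil // all components running, as in the final poll above
	})
}
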
	I1027 19:42:01.378829  601731 system_svc.go:44] waiting for kubelet service to be running ....
	I1027 19:42:01.378880  601731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 19:42:01.394427  601731 system_svc.go:56] duration metric: took 15.585393ms WaitForService to wait for kubelet
	I1027 19:42:01.394466  601731 kubeadm.go:586] duration metric: took 12.057686791s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 19:42:01.394493  601731 node_conditions.go:102] verifying NodePressure condition ...
	I1027 19:42:01.397886  601731 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1027 19:42:01.397913  601731 node_conditions.go:123] node cpu capacity is 8
	I1027 19:42:01.397928  601731 node_conditions.go:105] duration metric: took 3.430192ms to run NodePressure ...
	I1027 19:42:01.397942  601731 start.go:241] waiting for startup goroutines ...
	I1027 19:42:01.397949  601731 start.go:246] waiting for cluster config update ...
	I1027 19:42:01.397958  601731 start.go:255] writing updated cluster config ...
	I1027 19:42:01.398251  601731 ssh_runner.go:195] Run: rm -f paused
	I1027 19:42:01.402604  601731 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 19:42:01.406666  601731 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-d2trp" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:42:01.411993  601731 pod_ready.go:94] pod "coredns-66bc5c9577-d2trp" is "Ready"
	I1027 19:42:01.412025  601731 pod_ready.go:86] duration metric: took 5.33041ms for pod "coredns-66bc5c9577-d2trp" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:42:01.414327  601731 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-813397" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:42:01.419485  601731 pod_ready.go:94] pod "etcd-default-k8s-diff-port-813397" is "Ready"
	I1027 19:42:01.419523  601731 pod_ready.go:86] duration metric: took 5.165543ms for pod "etcd-default-k8s-diff-port-813397" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:42:01.422243  601731 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-813397" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:42:01.426933  601731 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-813397" is "Ready"
	I1027 19:42:01.426965  601731 pod_ready.go:86] duration metric: took 4.696991ms for pod "kube-apiserver-default-k8s-diff-port-813397" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:42:01.429475  601731 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-813397" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:42:01.807880  601731 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-813397" is "Ready"
	I1027 19:42:01.807917  601731 pod_ready.go:86] duration metric: took 378.410766ms for pod "kube-controller-manager-default-k8s-diff-port-813397" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:42:02.007707  601731 pod_ready.go:83] waiting for pod "kube-proxy-bldc8" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:42:02.407848  601731 pod_ready.go:94] pod "kube-proxy-bldc8" is "Ready"
	I1027 19:42:02.407879  601731 pod_ready.go:86] duration metric: took 400.141946ms for pod "kube-proxy-bldc8" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:42:02.608029  601731 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-813397" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:42:03.007641  601731 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-813397" is "Ready"
	I1027 19:42:03.007678  601731 pod_ready.go:86] duration metric: took 399.609776ms for pod "kube-scheduler-default-k8s-diff-port-813397" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:42:03.007690  601731 pod_ready.go:40] duration metric: took 1.605050143s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 19:42:03.056467  601731 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1027 19:42:03.059178  601731 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-813397" cluster and "default" namespace by default
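
The start.go line above reports "kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)"; kubectl is officially supported within one minor version of the API server, so a skew of 0 needs no warning. A minimal sketch of that comparison (minorSkew is a hypothetical helper assuming well-formed "maj.min.patch" strings):

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorSkew extracts the minor component of each version string and returns
// the absolute difference, i.e. the "(minor skew: 0)" figure above.
func minorSkew(client, server string) int {
	minor := func(v string) int {
		parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
		m, _ := strconv.Atoi(parts[1])
		return m
	}
	d := minor(client) - minor(server)
	if d < 0 {
		d = -d
	}
	return d
}

func main() {
	fmt.Println(minorSkew("1.34.1", "1.34.1")) // 0: client and cluster match
}
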
	
	
	==> CRI-O <==
	Oct 27 19:41:33 embed-certs-919237 crio[562]: time="2025-10-27T19:41:33.633898461Z" level=info msg="Started container" PID=1741 containerID=f70805b0b88103b08166e7fb24c18ab35ac0ae9d3e987fd54ce24c8fe1b50a8f description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qb5z6/dashboard-metrics-scraper id=b1d90286-0d4c-47b8-b35e-e3af644f7cf7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1121eaa687445082ad2164e1d4dc89ed12615bcd2dd456384d547490ee0c7b81
	Oct 27 19:41:33 embed-certs-919237 crio[562]: time="2025-10-27T19:41:33.68140667Z" level=info msg="Removing container: 607816533ca5535179033ea14ae82c8f1c3039cada24e488c97062628661396f" id=70a0873f-da21-4f05-a522-539b9cb28127 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 27 19:41:33 embed-certs-919237 crio[562]: time="2025-10-27T19:41:33.696725047Z" level=info msg="Removed container 607816533ca5535179033ea14ae82c8f1c3039cada24e488c97062628661396f: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qb5z6/dashboard-metrics-scraper" id=70a0873f-da21-4f05-a522-539b9cb28127 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 27 19:41:41 embed-certs-919237 crio[562]: time="2025-10-27T19:41:41.706583157Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=7edb807b-576a-46af-839b-32a167546bea name=/runtime.v1.ImageService/ImageStatus
	Oct 27 19:41:41 embed-certs-919237 crio[562]: time="2025-10-27T19:41:41.708119221Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=fed2c443-d141-4132-b8e6-e09560cb6b80 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 19:41:41 embed-certs-919237 crio[562]: time="2025-10-27T19:41:41.71015571Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=b10b0aec-5893-4c36-b496-e7cdcea0e1df name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 19:41:41 embed-certs-919237 crio[562]: time="2025-10-27T19:41:41.710379932Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:41:41 embed-certs-919237 crio[562]: time="2025-10-27T19:41:41.719496525Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:41:41 embed-certs-919237 crio[562]: time="2025-10-27T19:41:41.719738577Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/5ff25519adabaf3f994071fdc4fd8066ef7900d1fb52a28fcf21a8fd6089bc16/merged/etc/passwd: no such file or directory"
	Oct 27 19:41:41 embed-certs-919237 crio[562]: time="2025-10-27T19:41:41.719785172Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/5ff25519adabaf3f994071fdc4fd8066ef7900d1fb52a28fcf21a8fd6089bc16/merged/etc/group: no such file or directory"
	Oct 27 19:41:41 embed-certs-919237 crio[562]: time="2025-10-27T19:41:41.720120631Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:41:41 embed-certs-919237 crio[562]: time="2025-10-27T19:41:41.764562884Z" level=info msg="Created container 039af7dcecc8a433ded3d11e5ded2256d549ee2d08a3ebb68b26fce310e7bc20: kube-system/storage-provisioner/storage-provisioner" id=b10b0aec-5893-4c36-b496-e7cdcea0e1df name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 19:41:41 embed-certs-919237 crio[562]: time="2025-10-27T19:41:41.766574379Z" level=info msg="Starting container: 039af7dcecc8a433ded3d11e5ded2256d549ee2d08a3ebb68b26fce310e7bc20" id=0ed0e154-2d70-4eef-9c41-afb3c14df8de name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 19:41:41 embed-certs-919237 crio[562]: time="2025-10-27T19:41:41.769813639Z" level=info msg="Started container" PID=1755 containerID=039af7dcecc8a433ded3d11e5ded2256d549ee2d08a3ebb68b26fce310e7bc20 description=kube-system/storage-provisioner/storage-provisioner id=0ed0e154-2d70-4eef-9c41-afb3c14df8de name=/runtime.v1.RuntimeService/StartContainer sandboxID=4e5e19a9b8e1f5a7f24e4acbb89c648fc78cb8cb1c6415f77ef836545f40a990
	Oct 27 19:41:56 embed-certs-919237 crio[562]: time="2025-10-27T19:41:56.572516552Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=3c84f4fd-b22d-437a-a954-3c0c53bace92 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 19:41:56 embed-certs-919237 crio[562]: time="2025-10-27T19:41:56.573797233Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=d6bca718-82fc-4cae-ba30-2389428a467e name=/runtime.v1.ImageService/ImageStatus
	Oct 27 19:41:56 embed-certs-919237 crio[562]: time="2025-10-27T19:41:56.575036695Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qb5z6/dashboard-metrics-scraper" id=40aa1a56-9beb-45a7-b8e3-ee909c2e390b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 19:41:56 embed-certs-919237 crio[562]: time="2025-10-27T19:41:56.575206032Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:41:56 embed-certs-919237 crio[562]: time="2025-10-27T19:41:56.582199622Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:41:56 embed-certs-919237 crio[562]: time="2025-10-27T19:41:56.582926683Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:41:56 embed-certs-919237 crio[562]: time="2025-10-27T19:41:56.613979854Z" level=info msg="Created container 2796a5fed0754fd4b112fae38588dfe25b86705e56508393208766dc3b088d33: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qb5z6/dashboard-metrics-scraper" id=40aa1a56-9beb-45a7-b8e3-ee909c2e390b name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 19:41:56 embed-certs-919237 crio[562]: time="2025-10-27T19:41:56.614809567Z" level=info msg="Starting container: 2796a5fed0754fd4b112fae38588dfe25b86705e56508393208766dc3b088d33" id=808d8052-d27b-4694-b551-0128bb25d4e1 name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 19:41:56 embed-certs-919237 crio[562]: time="2025-10-27T19:41:56.616670453Z" level=info msg="Started container" PID=1788 containerID=2796a5fed0754fd4b112fae38588dfe25b86705e56508393208766dc3b088d33 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qb5z6/dashboard-metrics-scraper id=808d8052-d27b-4694-b551-0128bb25d4e1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1121eaa687445082ad2164e1d4dc89ed12615bcd2dd456384d547490ee0c7b81
	Oct 27 19:41:56 embed-certs-919237 crio[562]: time="2025-10-27T19:41:56.751448703Z" level=info msg="Removing container: f70805b0b88103b08166e7fb24c18ab35ac0ae9d3e987fd54ce24c8fe1b50a8f" id=710e56e9-257a-4d75-acf2-8240a5659b13 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 27 19:41:56 embed-certs-919237 crio[562]: time="2025-10-27T19:41:56.764683393Z" level=info msg="Removed container f70805b0b88103b08166e7fb24c18ab35ac0ae9d3e987fd54ce24c8fe1b50a8f: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qb5z6/dashboard-metrics-scraper" id=710e56e9-257a-4d75-acf2-8240a5659b13 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	2796a5fed0754       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           7 seconds ago       Exited              dashboard-metrics-scraper   3                   1121eaa687445       dashboard-metrics-scraper-6ffb444bf9-qb5z6   kubernetes-dashboard
	039af7dcecc8a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           21 seconds ago      Running             storage-provisioner         1                   4e5e19a9b8e1f       storage-provisioner                          kube-system
	121601c64b1f8       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   42 seconds ago      Running             kubernetes-dashboard        0                   5bca0b94b7119       kubernetes-dashboard-855c9754f9-sctm4        kubernetes-dashboard
	7e47ca072fa91       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           52 seconds ago      Running             coredns                     0                   6b7bb63d45217       coredns-66bc5c9577-9b9tz                     kube-system
	6311ca5e86acb       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           52 seconds ago      Running             busybox                     1                   137559fcc7bae       busybox                                      default
	289d461e95e5c       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           52 seconds ago      Running             kindnet-cni                 0                   fad710d9d64d2       kindnet-6jx4q                                kube-system
	11808765eb85f       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           52 seconds ago      Running             kube-proxy                  0                   c7e57c3fd7398       kube-proxy-rrq2h                             kube-system
	ae6c32d15d0a3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           52 seconds ago      Exited              storage-provisioner         0                   4e5e19a9b8e1f       storage-provisioner                          kube-system
	d5a5c65a74b4b       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           55 seconds ago      Running             etcd                        0                   25c09d0d6cb26       etcd-embed-certs-919237                      kube-system
	f0dcb6f33c4a1       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           55 seconds ago      Running             kube-controller-manager     0                   6de95d026b3ce       kube-controller-manager-embed-certs-919237   kube-system
	d17bd312e4c2b       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           55 seconds ago      Running             kube-scheduler              0                   e0f4890391b83       kube-scheduler-embed-certs-919237            kube-system
	31682e1eceede       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           55 seconds ago      Running             kube-apiserver              0                   8f05230bb8da1       kube-apiserver-embed-certs-919237            kube-system
	
	
	==> coredns [7e47ca072fa9116cec1fe31e6e1e2cc19a4993f2a1a0cb5170d906761e491b77] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:34662 - 15118 "HINFO IN 955905167667149728.6821744566514543240. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.062006688s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-919237
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-919237
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a09c31caecd3126cc8337ca95e52380a56f5a0f
	                    minikube.k8s.io/name=embed-certs-919237
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_27T19_40_14_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Oct 2025 19:40:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-919237
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Oct 2025 19:41:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Oct 2025 19:41:40 +0000   Mon, 27 Oct 2025 19:40:09 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Oct 2025 19:41:40 +0000   Mon, 27 Oct 2025 19:40:09 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Oct 2025 19:41:40 +0000   Mon, 27 Oct 2025 19:40:09 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Oct 2025 19:41:40 +0000   Mon, 27 Oct 2025 19:40:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    embed-certs-919237
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                2eeadcca-8dc6-4ff3-aae9-45c8a87361ee
	  Boot ID:                    811bd29c-e64e-4acc-9427-bab1f7caed93
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 coredns-66bc5c9577-9b9tz                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     104s
	  kube-system                 etcd-embed-certs-919237                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         110s
	  kube-system                 kindnet-6jx4q                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      104s
	  kube-system                 kube-apiserver-embed-certs-919237             250m (3%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-controller-manager-embed-certs-919237    200m (2%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-proxy-rrq2h                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-scheduler-embed-certs-919237             100m (1%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-qb5z6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-sctm4         0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 103s               kube-proxy       
	  Normal  Starting                 52s                kube-proxy       
	  Normal  NodeHasSufficientMemory  110s               kubelet          Node embed-certs-919237 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    110s               kubelet          Node embed-certs-919237 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     110s               kubelet          Node embed-certs-919237 status is now: NodeHasSufficientPID
	  Normal  Starting                 110s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           105s               node-controller  Node embed-certs-919237 event: Registered Node embed-certs-919237 in Controller
	  Normal  NodeReady                93s                kubelet          Node embed-certs-919237 status is now: NodeReady
	  Normal  Starting                 56s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  56s (x8 over 56s)  kubelet          Node embed-certs-919237 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    56s (x8 over 56s)  kubelet          Node embed-certs-919237 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     56s (x8 over 56s)  kubelet          Node embed-certs-919237 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           50s                node-controller  Node embed-certs-919237 event: Registered Node embed-certs-919237 in Controller
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5a 23 52 43 9a ba 08 06
	[  +0.000398] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 12 50 95 0e df 53 08 06
	[Oct27 18:59] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +1.017295] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +1.023893] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +1.023882] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +1.023891] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +1.023851] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +2.047849] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +4.031592] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +8.319143] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[ +16.382183] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[Oct27 19:00] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	
	
	==> etcd [d5a5c65a74b4b0bac782941ddf5cfc5e1c95eb29dbc563a89bc74143a3d75be8] <==
	{"level":"warn","ts":"2025-10-27T19:41:09.408337Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60170","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:41:09.414616Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:41:09.421711Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:41:09.427986Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:41:09.434622Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:41:09.440869Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60270","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:41:09.447427Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:41:09.460694Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:41:09.467006Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:41:09.473982Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:41:09.480851Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:41:09.494018Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:41:09.502213Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:41:09.512788Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:41:09.555193Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60468","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-27T19:41:27.443077Z","caller":"traceutil/trace.go:172","msg":"trace[1490949874] transaction","detail":"{read_only:false; response_revision:593; number_of_response:1; }","duration":"127.606157ms","start":"2025-10-27T19:41:27.315448Z","end":"2025-10-27T19:41:27.443054Z","steps":["trace[1490949874] 'process raft request'  (duration: 127.408327ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-27T19:41:27.736953Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"177.870522ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-9b9tz\" limit:1 ","response":"range_response_count:1 size:5934"}
	{"level":"info","ts":"2025-10-27T19:41:27.737039Z","caller":"traceutil/trace.go:172","msg":"trace[355912980] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-9b9tz; range_end:; response_count:1; response_revision:593; }","duration":"177.988219ms","start":"2025-10-27T19:41:27.559037Z","end":"2025-10-27T19:41:27.737025Z","steps":["trace[355912980] 'range keys from in-memory index tree'  (duration: 177.728439ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T19:41:28.112112Z","caller":"traceutil/trace.go:172","msg":"trace[668865601] linearizableReadLoop","detail":"{readStateIndex:623; appliedIndex:623; }","duration":"181.011244ms","start":"2025-10-27T19:41:27.931067Z","end":"2025-10-27T19:41:28.112078Z","steps":["trace[668865601] 'read index received'  (duration: 180.996694ms)","trace[668865601] 'applied index is now lower than readState.Index'  (duration: 12.78µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-27T19:41:28.112245Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"181.156974ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-27T19:41:28.112299Z","caller":"traceutil/trace.go:172","msg":"trace[1292005042] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:593; }","duration":"181.227992ms","start":"2025-10-27T19:41:27.931055Z","end":"2025-10-27T19:41:28.112283Z","steps":["trace[1292005042] 'agreement among raft nodes before linearized reading'  (duration: 181.108114ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T19:41:28.112382Z","caller":"traceutil/trace.go:172","msg":"trace[1930864654] transaction","detail":"{read_only:false; response_revision:594; number_of_response:1; }","duration":"222.973753ms","start":"2025-10-27T19:41:27.889397Z","end":"2025-10-27T19:41:28.112371Z","steps":["trace[1930864654] 'process raft request'  (duration: 222.783092ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-27T19:41:28.324913Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"114.986421ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-27T19:41:28.325012Z","caller":"traceutil/trace.go:172","msg":"trace[862790909] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:595; }","duration":"115.094657ms","start":"2025-10-27T19:41:28.209899Z","end":"2025-10-27T19:41:28.324993Z","steps":["trace[862790909] 'agreement among raft nodes before linearized reading'  (duration: 84.950967ms)","trace[862790909] 'range keys from in-memory index tree'  (duration: 30.012146ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-27T19:41:28.325072Z","caller":"traceutil/trace.go:172","msg":"trace[420978482] transaction","detail":"{read_only:false; response_revision:596; number_of_response:1; }","duration":"173.831689ms","start":"2025-10-27T19:41:28.151221Z","end":"2025-10-27T19:41:28.325053Z","steps":["trace[420978482] 'process raft request'  (duration: 143.678051ms)","trace[420978482] 'compare'  (duration: 30.036536ms)"],"step_count":2}
	
	
	==> kernel <==
	 19:42:03 up  2:24,  0 user,  load average: 4.53, 3.43, 2.19
	Linux embed-certs-919237 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [289d461e95e5c9245c97d39c39a8fdc2ca0d89a5aaf6adc05990cee406a99fc5] <==
	I1027 19:41:11.156789       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1027 19:41:11.157056       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1027 19:41:11.157297       1 main.go:148] setting mtu 1500 for CNI 
	I1027 19:41:11.157321       1 main.go:178] kindnetd IP family: "ipv4"
	I1027 19:41:11.157356       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-27T19:41:11Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1027 19:41:11.358633       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1027 19:41:11.359863       1 controller.go:381] "Waiting for informer caches to sync"
	I1027 19:41:11.359904       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1027 19:41:11.360017       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1027 19:41:11.814215       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1027 19:41:11.814243       1 metrics.go:72] Registering metrics
	I1027 19:41:11.814330       1 controller.go:711] "Syncing nftables rules"
	I1027 19:41:21.358406       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1027 19:41:21.358473       1 main.go:301] handling current node
	I1027 19:41:31.360276       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1027 19:41:31.360332       1 main.go:301] handling current node
	I1027 19:41:41.359353       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1027 19:41:41.359398       1 main.go:301] handling current node
	I1027 19:41:51.364213       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1027 19:41:51.364263       1 main.go:301] handling current node
	I1027 19:42:01.362248       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1027 19:42:01.362281       1 main.go:301] handling current node
	
	
	==> kube-apiserver [31682e1eceede1979fd31aa2e96a71541d29f7d036de012b0c0a406025482670] <==
	I1027 19:41:10.037965       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1027 19:41:10.038245       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1027 19:41:10.038272       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1027 19:41:10.038371       1 aggregator.go:171] initial CRD sync complete...
	I1027 19:41:10.038380       1 autoregister_controller.go:144] Starting autoregister controller
	I1027 19:41:10.038386       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1027 19:41:10.038392       1 cache.go:39] Caches are synced for autoregister controller
	I1027 19:41:10.044601       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1027 19:41:10.045913       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1027 19:41:10.056368       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1027 19:41:10.056409       1 policy_source.go:240] refreshing policies
	I1027 19:41:10.075919       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1027 19:41:10.089623       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 19:41:10.305951       1 controller.go:667] quota admission added evaluator for: namespaces
	I1027 19:41:10.338077       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1027 19:41:10.360039       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1027 19:41:10.370474       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1027 19:41:10.379070       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1027 19:41:10.414697       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.121.163"}
	I1027 19:41:10.427826       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.242.249"}
	I1027 19:41:10.941920       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1027 19:41:13.828682       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1027 19:41:13.875561       1 controller.go:667] quota admission added evaluator for: endpoints
	I1027 19:41:13.875560       1 controller.go:667] quota admission added evaluator for: endpoints
	I1027 19:41:13.924993       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [f0dcb6f33c4a16c8aabf1c9522c219dfe57ce0438d6eedb8d11b3bbed06bf220] <==
	I1027 19:41:13.359395       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1027 19:41:13.359403       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1027 19:41:13.361512       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1027 19:41:13.364768       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1027 19:41:13.371985       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1027 19:41:13.372015       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1027 19:41:13.372051       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1027 19:41:13.372101       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1027 19:41:13.372126       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1027 19:41:13.372196       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1027 19:41:13.372307       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1027 19:41:13.372398       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1027 19:41:13.372414       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-919237"
	I1027 19:41:13.372468       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1027 19:41:13.372486       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1027 19:41:13.374032       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1027 19:41:13.376295       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1027 19:41:13.376367       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 19:41:13.376382       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1027 19:41:13.376394       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1027 19:41:13.378087       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 19:41:13.378183       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 19:41:13.378396       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1027 19:41:13.395928       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 19:41:13.407997       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	
	
	==> kube-proxy [11808765eb85f990868220937b5849982fa806cf6e9924886c92e66e31f11278] <==
	I1027 19:41:10.970176       1 server_linux.go:53] "Using iptables proxy"
	I1027 19:41:11.041597       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1027 19:41:11.142128       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1027 19:41:11.142175       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1027 19:41:11.142270       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1027 19:41:11.164955       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1027 19:41:11.165035       1 server_linux.go:132] "Using iptables Proxier"
	I1027 19:41:11.171471       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1027 19:41:11.172053       1 server.go:527] "Version info" version="v1.34.1"
	I1027 19:41:11.172115       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 19:41:11.174124       1 config.go:200] "Starting service config controller"
	I1027 19:41:11.174716       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1027 19:41:11.174211       1 config.go:403] "Starting serviceCIDR config controller"
	I1027 19:41:11.174747       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1027 19:41:11.174238       1 config.go:106] "Starting endpoint slice config controller"
	I1027 19:41:11.174770       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1027 19:41:11.174490       1 config.go:309] "Starting node config controller"
	I1027 19:41:11.174780       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1027 19:41:11.174786       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1027 19:41:11.274923       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1027 19:41:11.274942       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1027 19:41:11.274986       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [d17bd312e4c2b6e68ce5e1c0006ad10d3d74b77c3bc3e8570e4526763c6914a9] <==
	I1027 19:41:08.557058       1 serving.go:386] Generated self-signed cert in-memory
	W1027 19:41:09.963464       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1027 19:41:09.963499       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1027 19:41:09.963523       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1027 19:41:09.963534       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1027 19:41:10.005975       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1027 19:41:10.006008       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 19:41:10.015388       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1027 19:41:10.015988       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 19:41:10.016045       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 19:41:10.016096       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1027 19:41:10.019612       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	I1027 19:41:10.116229       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 27 19:41:14 embed-certs-919237 kubelet[720]: I1027 19:41:14.883222     720 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 27 19:41:17 embed-certs-919237 kubelet[720]: I1027 19:41:17.625944     720 scope.go:117] "RemoveContainer" containerID="0a9341ea4c1d6d89534690aa36d40f6987355ccc1e64e5063dca8b719048370c"
	Oct 27 19:41:18 embed-certs-919237 kubelet[720]: I1027 19:41:18.631128     720 scope.go:117] "RemoveContainer" containerID="0a9341ea4c1d6d89534690aa36d40f6987355ccc1e64e5063dca8b719048370c"
	Oct 27 19:41:18 embed-certs-919237 kubelet[720]: I1027 19:41:18.631296     720 scope.go:117] "RemoveContainer" containerID="607816533ca5535179033ea14ae82c8f1c3039cada24e488c97062628661396f"
	Oct 27 19:41:18 embed-certs-919237 kubelet[720]: E1027 19:41:18.631494     720 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qb5z6_kubernetes-dashboard(d40c29c2-2116-4b6c-bb4b-3fceda111717)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qb5z6" podUID="d40c29c2-2116-4b6c-bb4b-3fceda111717"
	Oct 27 19:41:19 embed-certs-919237 kubelet[720]: I1027 19:41:19.636185     720 scope.go:117] "RemoveContainer" containerID="607816533ca5535179033ea14ae82c8f1c3039cada24e488c97062628661396f"
	Oct 27 19:41:19 embed-certs-919237 kubelet[720]: E1027 19:41:19.636386     720 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qb5z6_kubernetes-dashboard(d40c29c2-2116-4b6c-bb4b-3fceda111717)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qb5z6" podUID="d40c29c2-2116-4b6c-bb4b-3fceda111717"
	Oct 27 19:41:21 embed-certs-919237 kubelet[720]: I1027 19:41:21.673860     720 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-sctm4" podStartSLOduration=2.09079402 podStartE2EDuration="8.673836702s" podCreationTimestamp="2025-10-27 19:41:13 +0000 UTC" firstStartedPulling="2025-10-27 19:41:14.359582828 +0000 UTC m=+6.892491790" lastFinishedPulling="2025-10-27 19:41:20.942625499 +0000 UTC m=+13.475534472" observedRunningTime="2025-10-27 19:41:21.673503947 +0000 UTC m=+14.206412928" watchObservedRunningTime="2025-10-27 19:41:21.673836702 +0000 UTC m=+14.206745698"
	Oct 27 19:41:22 embed-certs-919237 kubelet[720]: I1027 19:41:22.314436     720 scope.go:117] "RemoveContainer" containerID="607816533ca5535179033ea14ae82c8f1c3039cada24e488c97062628661396f"
	Oct 27 19:41:22 embed-certs-919237 kubelet[720]: E1027 19:41:22.314661     720 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qb5z6_kubernetes-dashboard(d40c29c2-2116-4b6c-bb4b-3fceda111717)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qb5z6" podUID="d40c29c2-2116-4b6c-bb4b-3fceda111717"
	Oct 27 19:41:33 embed-certs-919237 kubelet[720]: I1027 19:41:33.571739     720 scope.go:117] "RemoveContainer" containerID="607816533ca5535179033ea14ae82c8f1c3039cada24e488c97062628661396f"
	Oct 27 19:41:33 embed-certs-919237 kubelet[720]: I1027 19:41:33.679012     720 scope.go:117] "RemoveContainer" containerID="607816533ca5535179033ea14ae82c8f1c3039cada24e488c97062628661396f"
	Oct 27 19:41:33 embed-certs-919237 kubelet[720]: I1027 19:41:33.679317     720 scope.go:117] "RemoveContainer" containerID="f70805b0b88103b08166e7fb24c18ab35ac0ae9d3e987fd54ce24c8fe1b50a8f"
	Oct 27 19:41:33 embed-certs-919237 kubelet[720]: E1027 19:41:33.679533     720 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qb5z6_kubernetes-dashboard(d40c29c2-2116-4b6c-bb4b-3fceda111717)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qb5z6" podUID="d40c29c2-2116-4b6c-bb4b-3fceda111717"
	Oct 27 19:41:41 embed-certs-919237 kubelet[720]: I1027 19:41:41.705504     720 scope.go:117] "RemoveContainer" containerID="ae6c32d15d0a354896e509d903d2913f4e4cb318fee7570b0a381a4da1276a5b"
	Oct 27 19:41:42 embed-certs-919237 kubelet[720]: I1027 19:41:42.315220     720 scope.go:117] "RemoveContainer" containerID="f70805b0b88103b08166e7fb24c18ab35ac0ae9d3e987fd54ce24c8fe1b50a8f"
	Oct 27 19:41:42 embed-certs-919237 kubelet[720]: E1027 19:41:42.315441     720 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qb5z6_kubernetes-dashboard(d40c29c2-2116-4b6c-bb4b-3fceda111717)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qb5z6" podUID="d40c29c2-2116-4b6c-bb4b-3fceda111717"
	Oct 27 19:41:56 embed-certs-919237 kubelet[720]: I1027 19:41:56.571757     720 scope.go:117] "RemoveContainer" containerID="f70805b0b88103b08166e7fb24c18ab35ac0ae9d3e987fd54ce24c8fe1b50a8f"
	Oct 27 19:41:56 embed-certs-919237 kubelet[720]: I1027 19:41:56.749969     720 scope.go:117] "RemoveContainer" containerID="f70805b0b88103b08166e7fb24c18ab35ac0ae9d3e987fd54ce24c8fe1b50a8f"
	Oct 27 19:41:56 embed-certs-919237 kubelet[720]: I1027 19:41:56.750258     720 scope.go:117] "RemoveContainer" containerID="2796a5fed0754fd4b112fae38588dfe25b86705e56508393208766dc3b088d33"
	Oct 27 19:41:56 embed-certs-919237 kubelet[720]: E1027 19:41:56.750495     720 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qb5z6_kubernetes-dashboard(d40c29c2-2116-4b6c-bb4b-3fceda111717)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qb5z6" podUID="d40c29c2-2116-4b6c-bb4b-3fceda111717"
	Oct 27 19:41:58 embed-certs-919237 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 27 19:41:58 embed-certs-919237 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 27 19:41:58 embed-certs-919237 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 27 19:41:58 embed-certs-919237 systemd[1]: kubelet.service: Consumed 1.834s CPU time.
	
	
	==> kubernetes-dashboard [121601c64b1f8275f26411958ad9a6732beea758cb85fefc8db2ea3c291abd87] <==
	2025/10/27 19:41:21 Using namespace: kubernetes-dashboard
	2025/10/27 19:41:21 Using in-cluster config to connect to apiserver
	2025/10/27 19:41:21 Using secret token for csrf signing
	2025/10/27 19:41:21 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/27 19:41:21 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/27 19:41:21 Successful initial request to the apiserver, version: v1.34.1
	2025/10/27 19:41:21 Generating JWE encryption key
	2025/10/27 19:41:21 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/27 19:41:21 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/27 19:41:21 Initializing JWE encryption key from synchronized object
	2025/10/27 19:41:21 Creating in-cluster Sidecar client
	2025/10/27 19:41:21 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/27 19:41:21 Serving insecurely on HTTP port: 9090
	2025/10/27 19:41:51 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/27 19:41:21 Starting overwatch
	
	
	==> storage-provisioner [039af7dcecc8a433ded3d11e5ded2256d549ee2d08a3ebb68b26fce310e7bc20] <==
	I1027 19:41:41.788478       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1027 19:41:41.803885       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1027 19:41:41.803939       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1027 19:41:41.806865       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:41:45.263531       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:41:49.528191       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:41:53.126241       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:41:56.179485       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:41:59.201974       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:41:59.210894       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1027 19:41:59.211044       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1027 19:41:59.211156       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ea57f8f9-31a7-4033-9918-213289abc41f", APIVersion:"v1", ResourceVersion:"630", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-919237_524297ae-b48b-4840-a52f-029d1cfb1769 became leader
	I1027 19:41:59.211253       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-919237_524297ae-b48b-4840-a52f-029d1cfb1769!
	W1027 19:41:59.215584       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:41:59.221190       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1027 19:41:59.311597       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-919237_524297ae-b48b-4840-a52f-029d1cfb1769!
	W1027 19:42:01.224608       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:42:01.232173       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:42:03.235662       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:42:03.240474       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [ae6c32d15d0a354896e509d903d2913f4e4cb318fee7570b0a381a4da1276a5b] <==
	I1027 19:41:10.926489       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1027 19:41:40.932573       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
-- /stdout --
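
The coredns and storage-provisioner logs in the dump above both fail with "dial tcp 10.96.0.1:443: i/o timeout", i.e. the in-cluster apiserver service VIP was unreachable until kube-proxy finished resyncing. Below is a minimal Go sketch of the same reachability probe that the provisioner's fatal line (main.go:39) reports; only the URL and timeout come from the log, the real provisioner goes through client-go, and InsecureSkipVerify is used here purely to keep the sketch self-contained:

	// version_probe.go - a sketch of the apiserver probe that the
	// storage-provisioner log above shows failing at main.go:39.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 32 * time.Second,
			// The real in-cluster client trusts the service-account CA;
			// skipping verification keeps this sketch self-contained.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://10.96.0.1:443/version?timeout=32s")
		if err != nil {
			fmt.Println("error getting server version:", err) // the fatal line in the log
			return
		}
		defer resp.Body.Close()
		fmt.Println("apiserver reachable:", resp.Status)
	}
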
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-919237 -n embed-certs-919237
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-919237 -n embed-certs-919237: exit status 2 (360.364834ms)
-- stdout --
	Running
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
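
A minimal sketch of this status probe, with the binary path and profile name taken from the run above rather than being fixed values; it shows why a non-zero exit "may be ok": the component state still arrives on stdout (here "Running" alongside exit status 2):

	// status_probe.go - a sketch of the post-mortem probe above; the binary
	// path and profile name are taken from this run, not hard rules.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("out/minikube-linux-amd64", "status",
			"--format={{.APIServer}}", "-p", "embed-certs-919237",
			"-n", "embed-certs-919237").Output()
		// The component state arrives on stdout even when the exit code is
		// non-zero (here: "Running" with exit status 2).
		fmt.Printf("APIServer=%q exitErr=%v\n", strings.TrimSpace(string(out)), err)
	}
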
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-919237 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (6.47s)
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (3.31s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-813397 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-813397 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (768.672055ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T19:42:11Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-813397 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
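
The "check paused" step that fails here shells out to runc, as the stderr above shows. Below is a minimal sketch of that class of check; the struct fields assume the JSON that "runc list -f json" prints (id, status), and the real logic lives inside minikube's runtime packages rather than in this form:

	// paused_check.go - a sketch of the failing "check paused" step; the
	// struct fields assume the JSON that "runc list -f json" prints.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type runcState struct {
		ID     string `json:"id"`
		Status string `json:"status"`
	}

	func main() {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
		if err != nil {
			// On this node /run/runc does not exist, so runc exits 1 and
			// the addon enable aborts with MK_ADDON_ENABLE_PAUSED.
			fmt.Println("list paused:", err)
			return
		}
		var containers []runcState
		if err := json.Unmarshal(out, &containers); err != nil {
			fmt.Println("decode:", err)
			return
		}
		for _, c := range containers {
			if c.Status == "paused" {
				fmt.Println("paused container:", c.ID)
			}
		}
	}
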
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-813397 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-813397 describe deploy/metrics-server -n kube-system: exit status 1 (60.109934ms)
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-813397 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
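
The assertion at start_stop_delete_test.go:219 expects the --registries flag to have rewritten the metrics-server image to "fake.domain/registry.k8s.io/echoserver:1.4". A hypothetical jsonpath variant of that check is sketched below; the harness itself inspects the "kubectl describe" output shown above instead:

	// image_check.go - a hypothetical jsonpath variant of the assertion at
	// start_stop_delete_test.go:219; the harness parses "kubectl describe".
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("kubectl", "--context", "default-k8s-diff-port-813397",
			"-n", "kube-system", "get", "deploy/metrics-server",
			"-o", "jsonpath={.spec.template.spec.containers[*].image}").Output()
		if err != nil {
			// Matches the NotFound above: the deployment never got created
			// because the enable step had already failed.
			fmt.Println("get deploy:", err)
			return
		}
		want := "fake.domain/registry.k8s.io/echoserver:1.4"
		fmt.Println("image rewritten:", strings.Contains(string(out), want))
	}
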
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-813397
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-813397:
-- stdout --
	[
	    {
	        "Id": "5e2892d7a5b76c311108e309f0c5e79b46c633c41881cd99a81040580e9d6de8",
	        "Created": "2025-10-27T19:41:28.530867062Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 602634,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-27T19:41:28.570671395Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/5e2892d7a5b76c311108e309f0c5e79b46c633c41881cd99a81040580e9d6de8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5e2892d7a5b76c311108e309f0c5e79b46c633c41881cd99a81040580e9d6de8/hostname",
	        "HostsPath": "/var/lib/docker/containers/5e2892d7a5b76c311108e309f0c5e79b46c633c41881cd99a81040580e9d6de8/hosts",
	        "LogPath": "/var/lib/docker/containers/5e2892d7a5b76c311108e309f0c5e79b46c633c41881cd99a81040580e9d6de8/5e2892d7a5b76c311108e309f0c5e79b46c633c41881cd99a81040580e9d6de8-json.log",
	        "Name": "/default-k8s-diff-port-813397",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-813397:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-813397",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5e2892d7a5b76c311108e309f0c5e79b46c633c41881cd99a81040580e9d6de8",
	                "LowerDir": "/var/lib/docker/overlay2/9c29b2ca181e37783386969900349b6f8ee825583f284e5f7ca2046e8e79ccce-init/diff:/var/lib/docker/overlay2/71b61ec94610a35f2d924dec358052d4c154c36b3fe219802f60246ca2dc7f45/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9c29b2ca181e37783386969900349b6f8ee825583f284e5f7ca2046e8e79ccce/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9c29b2ca181e37783386969900349b6f8ee825583f284e5f7ca2046e8e79ccce/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9c29b2ca181e37783386969900349b6f8ee825583f284e5f7ca2046e8e79ccce/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-813397",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-813397/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-813397",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-813397",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-813397",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "bfea639083b893f328799e4aafa28ea31cf6c4a4afbaea26ef7080e91fdc84f7",
	            "SandboxKey": "/var/run/docker/netns/bfea639083b8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33450"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33451"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33454"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33452"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33453"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-813397": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ee:25:20:81:ba:6c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e5c60f1f40aedba9b9761254cb4dc4ea11830e317d7c1ef05baf77a39a5733c7",
	                    "EndpointID": "cfd0cdfc3ddb7a6ff5df260ef9578d5a571654004ea2f5370e7b890929e711bc",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-813397",
	                        "5e2892d7a5b7"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
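For reference, the host-port bindings buried in the inspect JSON above can be read out directly with docker inspect's Go-template support rather than scanning the full dump. A minimal sketch, with the container name taken from the output above; the template indexes the standard NetworkSettings.Ports map:

	# Print the host port bound to the apiserver port 8444/tcp.
	# Given the NetworkSettings above, this should print 33453.
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}' default-k8s-diff-port-813397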
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-813397 -n default-k8s-diff-port-813397
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-813397 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-813397 logs -n 25: (1.674884673s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-051715 image load --daemon kicbase/echo-server:functional-051715 --alsologtostderr                                                                                                                                                 │ functional-051715            │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │ 27 Oct 25 19:04 UTC │
	│ image   │ functional-051715 image ls                                                                                                                                                                                                                    │ functional-051715            │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │ 27 Oct 25 19:04 UTC │
	│ image   │ functional-051715 image load --daemon kicbase/echo-server:functional-051715 --alsologtostderr                                                                                                                                                 │ functional-051715            │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │ 27 Oct 25 19:04 UTC │
	│ image   │ functional-051715 image ls                                                                                                                                                                                                                    │ functional-051715            │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │ 27 Oct 25 19:04 UTC │
	│ image   │ functional-051715 image load --daemon kicbase/echo-server:functional-051715 --alsologtostderr                                                                                                                                                 │ functional-051715            │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │ 27 Oct 25 19:04 UTC │
	│ image   │ functional-051715 image ls                                                                                                                                                                                                                    │ functional-051715            │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │ 27 Oct 25 19:04 UTC │
	│ image   │ functional-051715 image save kicbase/echo-server:functional-051715 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr                                                                               │ functional-051715            │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │ 27 Oct 25 19:04 UTC │
	│ image   │ functional-051715 image rm kicbase/echo-server:functional-051715 --alsologtostderr                                                                                                                                                            │ functional-051715            │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │ 27 Oct 25 19:04 UTC │
	│ addons  │ enable dashboard -p embed-certs-919237 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-919237           │ jenkins │ v1.37.0 │ 27 Oct 25 19:41 UTC │ 27 Oct 25 19:41 UTC │
	│ start   │ -p embed-certs-919237 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-919237           │ jenkins │ v1.37.0 │ 27 Oct 25 19:41 UTC │ 27 Oct 25 19:41 UTC │
	│ image   │ old-k8s-version-468959 image list --format=json                                                                                                                                                                                               │ old-k8s-version-468959       │ jenkins │ v1.37.0 │ 27 Oct 25 19:41 UTC │ 27 Oct 25 19:41 UTC │
	│ pause   │ -p old-k8s-version-468959 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-468959       │ jenkins │ v1.37.0 │ 27 Oct 25 19:41 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-095885 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-095885            │ jenkins │ v1.37.0 │ 27 Oct 25 19:41 UTC │                     │
	│ stop    │ -p no-preload-095885 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-095885            │ jenkins │ v1.37.0 │ 27 Oct 25 19:41 UTC │ 27 Oct 25 19:41 UTC │
	│ delete  │ -p old-k8s-version-468959                                                                                                                                                                                                                     │ old-k8s-version-468959       │ jenkins │ v1.37.0 │ 27 Oct 25 19:41 UTC │ 27 Oct 25 19:41 UTC │
	│ delete  │ -p old-k8s-version-468959                                                                                                                                                                                                                     │ old-k8s-version-468959       │ jenkins │ v1.37.0 │ 27 Oct 25 19:41 UTC │ 27 Oct 25 19:41 UTC │
	│ start   │ -p default-k8s-diff-port-813397 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-813397 │ jenkins │ v1.37.0 │ 27 Oct 25 19:41 UTC │ 27 Oct 25 19:42 UTC │
	│ addons  │ enable dashboard -p no-preload-095885 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-095885            │ jenkins │ v1.37.0 │ 27 Oct 25 19:41 UTC │ 27 Oct 25 19:41 UTC │
	│ start   │ -p no-preload-095885 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-095885            │ jenkins │ v1.37.0 │ 27 Oct 25 19:41 UTC │                     │
	│ image   │ embed-certs-919237 image list --format=json                                                                                                                                                                                                   │ embed-certs-919237           │ jenkins │ v1.37.0 │ 27 Oct 25 19:41 UTC │ 27 Oct 25 19:41 UTC │
	│ pause   │ -p embed-certs-919237 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-919237           │ jenkins │ v1.37.0 │ 27 Oct 25 19:41 UTC │                     │
	│ delete  │ -p embed-certs-919237                                                                                                                                                                                                                         │ embed-certs-919237           │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │ 27 Oct 25 19:42 UTC │
	│ delete  │ -p embed-certs-919237                                                                                                                                                                                                                         │ embed-certs-919237           │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │ 27 Oct 25 19:42 UTC │
	│ start   │ -p newest-cni-677710 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-677710            │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-813397 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-813397 │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
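The failing step of this test corresponds to the final audit entry above (the addons enable call with no recorded end time); a minimal repro sketch with the profile name and flag values copied verbatim from that row:

	# Re-run the addon enable that hung in this test
	out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-813397 \
	  --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
	  --registries=MetricsServer=fake.domain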
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 19:42:07
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 19:42:07.812041  611121 out.go:360] Setting OutFile to fd 1 ...
	I1027 19:42:07.812363  611121 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:42:07.812374  611121 out.go:374] Setting ErrFile to fd 2...
	I1027 19:42:07.812378  611121 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:42:07.812573  611121 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-352833/.minikube/bin
	I1027 19:42:07.813083  611121 out.go:368] Setting JSON to false
	I1027 19:42:07.814471  611121 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":8677,"bootTime":1761585451,"procs":462,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1027 19:42:07.814590  611121 start.go:141] virtualization: kvm guest
	I1027 19:42:07.816886  611121 out.go:179] * [newest-cni-677710] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1027 19:42:07.818436  611121 out.go:179]   - MINIKUBE_LOCATION=21801
	I1027 19:42:07.818478  611121 notify.go:220] Checking for updates...
	I1027 19:42:07.821372  611121 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 19:42:07.822749  611121 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21801-352833/kubeconfig
	I1027 19:42:07.824171  611121 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21801-352833/.minikube
	I1027 19:42:07.825729  611121 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1027 19:42:07.827437  611121 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 19:42:07.829438  611121 config.go:182] Loaded profile config "default-k8s-diff-port-813397": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:42:07.829559  611121 config.go:182] Loaded profile config "kubernetes-upgrade-360986": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:42:07.829673  611121 config.go:182] Loaded profile config "no-preload-095885": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:42:07.829766  611121 driver.go:421] Setting default libvirt URI to qemu:///system
	I1027 19:42:07.855031  611121 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1027 19:42:07.855142  611121 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 19:42:07.915010  611121 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-27 19:42:07.903974558 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 19:42:07.915130  611121 docker.go:318] overlay module found
	I1027 19:42:07.917097  611121 out.go:179] * Using the docker driver based on user configuration
	I1027 19:42:07.918388  611121 start.go:305] selected driver: docker
	I1027 19:42:07.918409  611121 start.go:925] validating driver "docker" against <nil>
	I1027 19:42:07.918426  611121 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 19:42:07.919108  611121 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 19:42:07.979974  611121 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-27 19:42:07.969677013 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 19:42:07.980199  611121 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1027 19:42:07.980230  611121 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1027 19:42:07.980557  611121 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1027 19:42:07.982785  611121 out.go:179] * Using Docker driver with root privileges
	I1027 19:42:07.984180  611121 cni.go:84] Creating CNI manager for ""
	I1027 19:42:07.984271  611121 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 19:42:07.984285  611121 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1027 19:42:07.984364  611121 start.go:349] cluster config:
	{Name:newest-cni-677710 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-677710 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 19:42:07.985596  611121 out.go:179] * Starting "newest-cni-677710" primary control-plane node in "newest-cni-677710" cluster
	I1027 19:42:07.986462  611121 cache.go:123] Beginning downloading kic base image for docker with crio
	I1027 19:42:07.987895  611121 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1027 19:42:07.989193  611121 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 19:42:07.989239  611121 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1027 19:42:07.989248  611121 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21801-352833/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1027 19:42:07.989262  611121 cache.go:58] Caching tarball of preloaded images
	I1027 19:42:07.989394  611121 preload.go:233] Found /home/jenkins/minikube-integration/21801-352833/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1027 19:42:07.989411  611121 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1027 19:42:07.989543  611121 profile.go:143] Saving config to /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/newest-cni-677710/config.json ...
	I1027 19:42:07.989571  611121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/newest-cni-677710/config.json: {Name:mke0a31bd491b4ea973b47072e29b7c5a4305b79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:42:08.011963  611121 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1027 19:42:08.011986  611121 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1027 19:42:08.012003  611121 cache.go:232] Successfully downloaded all kic artifacts
	I1027 19:42:08.012036  611121 start.go:360] acquireMachinesLock for newest-cni-677710: {Name:mkabafb366b06336ebd07468b9408fda45e62385 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 19:42:08.012155  611121 start.go:364] duration metric: took 100.054µs to acquireMachinesLock for "newest-cni-677710"
	I1027 19:42:08.012186  611121 start.go:93] Provisioning new machine with config: &{Name:newest-cni-677710 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-677710 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 19:42:08.012261  611121 start.go:125] createHost starting for "" (driver="docker")
	W1027 19:42:04.462262  604470 pod_ready.go:104] pod "coredns-66bc5c9577-gwqvg" is not "Ready", error: <nil>
	W1027 19:42:06.463721  604470 pod_ready.go:104] pod "coredns-66bc5c9577-gwqvg" is not "Ready", error: <nil>
	W1027 19:42:08.465275  604470 pod_ready.go:104] pod "coredns-66bc5c9577-gwqvg" is not "Ready", error: <nil>
	I1027 19:42:08.161222  565798 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1027 19:42:08.161707  565798 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1027 19:42:08.161775  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 19:42:08.161834  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 19:42:08.193990  565798 cri.go:89] found id: "ca67cda12e0adb415e229ae9e136a15743c92bb79ef8987bb33523c43775a99e"
	I1027 19:42:08.194013  565798 cri.go:89] found id: ""
	I1027 19:42:08.194022  565798 logs.go:282] 1 containers: [ca67cda12e0adb415e229ae9e136a15743c92bb79ef8987bb33523c43775a99e]
	I1027 19:42:08.194088  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:42:08.198625  565798 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 19:42:08.198706  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 19:42:08.233013  565798 cri.go:89] found id: ""
	I1027 19:42:08.233041  565798 logs.go:282] 0 containers: []
	W1027 19:42:08.233053  565798 logs.go:284] No container was found matching "etcd"
	I1027 19:42:08.233062  565798 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 19:42:08.233119  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 19:42:08.264367  565798 cri.go:89] found id: ""
	I1027 19:42:08.264398  565798 logs.go:282] 0 containers: []
	W1027 19:42:08.264411  565798 logs.go:284] No container was found matching "coredns"
	I1027 19:42:08.264419  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 19:42:08.264480  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 19:42:08.296914  565798 cri.go:89] found id: "15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8"
	I1027 19:42:08.296938  565798 cri.go:89] found id: ""
	I1027 19:42:08.296947  565798 logs.go:282] 1 containers: [15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8]
	I1027 19:42:08.297018  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:42:08.303511  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 19:42:08.303583  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 19:42:08.338204  565798 cri.go:89] found id: ""
	I1027 19:42:08.338235  565798 logs.go:282] 0 containers: []
	W1027 19:42:08.338247  565798 logs.go:284] No container was found matching "kube-proxy"
	I1027 19:42:08.338255  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 19:42:08.338316  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 19:42:08.371077  565798 cri.go:89] found id: "4b0186426a494845ce9fa7af7755d0c2f9549f935b11a34bd738219dd3bfd4f5"
	I1027 19:42:08.371103  565798 cri.go:89] found id: ""
	I1027 19:42:08.371113  565798 logs.go:282] 1 containers: [4b0186426a494845ce9fa7af7755d0c2f9549f935b11a34bd738219dd3bfd4f5]
	I1027 19:42:08.371203  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:42:08.376089  565798 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 19:42:08.376195  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 19:42:08.409569  565798 cri.go:89] found id: ""
	I1027 19:42:08.409599  565798 logs.go:282] 0 containers: []
	W1027 19:42:08.409610  565798 logs.go:284] No container was found matching "kindnet"
	I1027 19:42:08.409617  565798 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 19:42:08.409668  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 19:42:08.441340  565798 cri.go:89] found id: ""
	I1027 19:42:08.441374  565798 logs.go:282] 0 containers: []
	W1027 19:42:08.441386  565798 logs.go:284] No container was found matching "storage-provisioner"
	I1027 19:42:08.441397  565798 logs.go:123] Gathering logs for container status ...
	I1027 19:42:08.441417  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 19:42:08.482685  565798 logs.go:123] Gathering logs for kubelet ...
	I1027 19:42:08.482719  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 19:42:08.593929  565798 logs.go:123] Gathering logs for dmesg ...
	I1027 19:42:08.593966  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 19:42:08.616246  565798 logs.go:123] Gathering logs for describe nodes ...
	I1027 19:42:08.616290  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 19:42:08.683961  565798 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1027 19:42:08.683989  565798 logs.go:123] Gathering logs for kube-apiserver [ca67cda12e0adb415e229ae9e136a15743c92bb79ef8987bb33523c43775a99e] ...
	I1027 19:42:08.684005  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ca67cda12e0adb415e229ae9e136a15743c92bb79ef8987bb33523c43775a99e"
	I1027 19:42:08.721240  565798 logs.go:123] Gathering logs for kube-scheduler [15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8] ...
	I1027 19:42:08.721284  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8"
	I1027 19:42:08.786486  565798 logs.go:123] Gathering logs for kube-controller-manager [4b0186426a494845ce9fa7af7755d0c2f9549f935b11a34bd738219dd3bfd4f5] ...
	I1027 19:42:08.786530  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4b0186426a494845ce9fa7af7755d0c2f9549f935b11a34bd738219dd3bfd4f5"
	I1027 19:42:08.817723  565798 logs.go:123] Gathering logs for CRI-O ...
	I1027 19:42:08.817753  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 19:42:11.378832  565798 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1027 19:42:11.379368  565798 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1027 19:42:11.379458  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 19:42:11.379537  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 19:42:11.410968  565798 cri.go:89] found id: "ca67cda12e0adb415e229ae9e136a15743c92bb79ef8987bb33523c43775a99e"
	I1027 19:42:11.410992  565798 cri.go:89] found id: ""
	I1027 19:42:11.411002  565798 logs.go:282] 1 containers: [ca67cda12e0adb415e229ae9e136a15743c92bb79ef8987bb33523c43775a99e]
	I1027 19:42:11.411061  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:42:11.415452  565798 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 19:42:11.415535  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 19:42:11.449095  565798 cri.go:89] found id: ""
	I1027 19:42:11.449125  565798 logs.go:282] 0 containers: []
	W1027 19:42:11.449149  565798 logs.go:284] No container was found matching "etcd"
	I1027 19:42:11.449157  565798 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 19:42:11.449216  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 19:42:11.480669  565798 cri.go:89] found id: ""
	I1027 19:42:11.480696  565798 logs.go:282] 0 containers: []
	W1027 19:42:11.480706  565798 logs.go:284] No container was found matching "coredns"
	I1027 19:42:11.480712  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 19:42:11.480774  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 19:42:11.513844  565798 cri.go:89] found id: "15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8"
	I1027 19:42:11.513872  565798 cri.go:89] found id: ""
	I1027 19:42:11.513885  565798 logs.go:282] 1 containers: [15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8]
	I1027 19:42:11.513966  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:42:11.518228  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 19:42:11.518300  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 19:42:08.014435  611121 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1027 19:42:08.014654  611121 start.go:159] libmachine.API.Create for "newest-cni-677710" (driver="docker")
	I1027 19:42:08.014685  611121 client.go:168] LocalClient.Create starting
	I1027 19:42:08.014756  611121 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca.pem
	I1027 19:42:08.014790  611121 main.go:141] libmachine: Decoding PEM data...
	I1027 19:42:08.014807  611121 main.go:141] libmachine: Parsing certificate...
	I1027 19:42:08.014863  611121 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21801-352833/.minikube/certs/cert.pem
	I1027 19:42:08.014884  611121 main.go:141] libmachine: Decoding PEM data...
	I1027 19:42:08.014891  611121 main.go:141] libmachine: Parsing certificate...
	I1027 19:42:08.015260  611121 cli_runner.go:164] Run: docker network inspect newest-cni-677710 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1027 19:42:08.034268  611121 cli_runner.go:211] docker network inspect newest-cni-677710 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1027 19:42:08.034340  611121 network_create.go:284] running [docker network inspect newest-cni-677710] to gather additional debugging logs...
	I1027 19:42:08.034360  611121 cli_runner.go:164] Run: docker network inspect newest-cni-677710
	W1027 19:42:08.052498  611121 cli_runner.go:211] docker network inspect newest-cni-677710 returned with exit code 1
	I1027 19:42:08.052529  611121 network_create.go:287] error running [docker network inspect newest-cni-677710]: docker network inspect newest-cni-677710: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-677710 not found
	I1027 19:42:08.052556  611121 network_create.go:289] output of [docker network inspect newest-cni-677710]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-677710 not found
	
	** /stderr **
	I1027 19:42:08.052670  611121 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 19:42:08.072607  611121 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-04e197bde7e8 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6a:8c:cb:7c:68:31} reservation:<nil>}
	I1027 19:42:08.073420  611121 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-e37fd2b092bc IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:26:98:e3:c0:d9:8a} reservation:<nil>}
	I1027 19:42:08.073893  611121 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-bbd9ae70d20d IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ea:7f:4f:eb:e4:a1} reservation:<nil>}
	I1027 19:42:08.074598  611121 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-0e1134f19412 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:3e:a7:be:0f:1e:4e} reservation:<nil>}
	I1027 19:42:08.075330  611121 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-e5c60f1f40ae IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:6a:1e:24:48:2b:2f} reservation:<nil>}
	I1027 19:42:08.076248  611121 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f3b4f0}
	I1027 19:42:08.076275  611121 network_create.go:124] attempt to create docker network newest-cni-677710 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1027 19:42:08.076353  611121 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-677710 newest-cni-677710
	I1027 19:42:08.140992  611121 network_create.go:108] docker network newest-cni-677710 192.168.94.0/24 created
	I1027 19:42:08.141026  611121 kic.go:121] calculated static IP "192.168.94.2" for the "newest-cni-677710" container
	I1027 19:42:08.141162  611121 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1027 19:42:08.161375  611121 cli_runner.go:164] Run: docker volume create newest-cni-677710 --label name.minikube.sigs.k8s.io=newest-cni-677710 --label created_by.minikube.sigs.k8s.io=true
	I1027 19:42:08.183421  611121 oci.go:103] Successfully created a docker volume newest-cni-677710
	I1027 19:42:08.183530  611121 cli_runner.go:164] Run: docker run --rm --name newest-cni-677710-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-677710 --entrypoint /usr/bin/test -v newest-cni-677710:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1027 19:42:08.610681  611121 oci.go:107] Successfully prepared a docker volume newest-cni-677710
	I1027 19:42:08.610737  611121 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 19:42:08.610763  611121 kic.go:194] Starting extracting preloaded images to volume ...
	I1027 19:42:08.610843  611121 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21801-352833/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-677710:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
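The preload.go lines above show minikube reusing its cached CRI-O preload instead of downloading: it verifies the tarball in MINIKUBE_HOME, then extracts it into the new node's /var volume via a throwaway tar container. A quick sketch to confirm the cache state on the agent, assuming the same MINIKUBE_HOME as the log (path copied from the preload.go lines):

	# The tarball minikube reports finding in its cache
	ls -lh /home/jenkins/minikube-integration/21801-352833/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4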
	
	
	==> CRI-O <==
	Oct 27 19:42:00 default-k8s-diff-port-813397 crio[778]: time="2025-10-27T19:42:00.848128947Z" level=info msg="Starting container: 10ab5635469a89812cad5c4291881944330ce3e0cc623b9c7d83851a89a0898c" id=9490c3ae-9622-4eed-b830-bbbfdab54a17 name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 19:42:00 default-k8s-diff-port-813397 crio[778]: time="2025-10-27T19:42:00.850119582Z" level=info msg="Started container" PID=1845 containerID=10ab5635469a89812cad5c4291881944330ce3e0cc623b9c7d83851a89a0898c description=kube-system/coredns-66bc5c9577-d2trp/coredns id=9490c3ae-9622-4eed-b830-bbbfdab54a17 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3b5fbedb9d2fed30eaeb82669bace7aa188bc815b70cb6e499aa6406c0a9783c
	Oct 27 19:42:03 default-k8s-diff-port-813397 crio[778]: time="2025-10-27T19:42:03.532428136Z" level=info msg="Running pod sandbox: default/busybox/POD" id=ef7310ab-c924-43d6-b168-e0059656e805 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 27 19:42:03 default-k8s-diff-port-813397 crio[778]: time="2025-10-27T19:42:03.532517867Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:42:03 default-k8s-diff-port-813397 crio[778]: time="2025-10-27T19:42:03.537901122Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:168e51b295c09d5f7230505e93241f0be92925cbfd1129f7fa70d32285450027 UID:9332b27c-18d7-4f62-aa20-359e62f7d9b4 NetNS:/var/run/netns/515a343c-6799-4b16-bf89-ecb9fc16212a Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0006944b0}] Aliases:map[]}"
	Oct 27 19:42:03 default-k8s-diff-port-813397 crio[778]: time="2025-10-27T19:42:03.537951232Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 27 19:42:03 default-k8s-diff-port-813397 crio[778]: time="2025-10-27T19:42:03.549895789Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:168e51b295c09d5f7230505e93241f0be92925cbfd1129f7fa70d32285450027 UID:9332b27c-18d7-4f62-aa20-359e62f7d9b4 NetNS:/var/run/netns/515a343c-6799-4b16-bf89-ecb9fc16212a Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0006944b0}] Aliases:map[]}"
	Oct 27 19:42:03 default-k8s-diff-port-813397 crio[778]: time="2025-10-27T19:42:03.550083968Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 27 19:42:03 default-k8s-diff-port-813397 crio[778]: time="2025-10-27T19:42:03.551289498Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 27 19:42:03 default-k8s-diff-port-813397 crio[778]: time="2025-10-27T19:42:03.552490873Z" level=info msg="Ran pod sandbox 168e51b295c09d5f7230505e93241f0be92925cbfd1129f7fa70d32285450027 with infra container: default/busybox/POD" id=ef7310ab-c924-43d6-b168-e0059656e805 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 27 19:42:03 default-k8s-diff-port-813397 crio[778]: time="2025-10-27T19:42:03.553894091Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=0ba410d1-6ac2-4813-a969-027f62ab9ef1 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 19:42:03 default-k8s-diff-port-813397 crio[778]: time="2025-10-27T19:42:03.554077614Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=0ba410d1-6ac2-4813-a969-027f62ab9ef1 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 19:42:03 default-k8s-diff-port-813397 crio[778]: time="2025-10-27T19:42:03.554114244Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=0ba410d1-6ac2-4813-a969-027f62ab9ef1 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 19:42:03 default-k8s-diff-port-813397 crio[778]: time="2025-10-27T19:42:03.554958732Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=a8aba65b-fa6b-41f0-8b62-95b8508744b1 name=/runtime.v1.ImageService/PullImage
	Oct 27 19:42:03 default-k8s-diff-port-813397 crio[778]: time="2025-10-27T19:42:03.559365324Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 27 19:42:04 default-k8s-diff-port-813397 crio[778]: time="2025-10-27T19:42:04.345000419Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=a8aba65b-fa6b-41f0-8b62-95b8508744b1 name=/runtime.v1.ImageService/PullImage
	Oct 27 19:42:04 default-k8s-diff-port-813397 crio[778]: time="2025-10-27T19:42:04.345892762Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=141d685d-7e3e-4cbc-ab10-345e06c7a178 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 19:42:04 default-k8s-diff-port-813397 crio[778]: time="2025-10-27T19:42:04.34756987Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=38a59837-4107-4e25-8a7b-1914de89c3b6 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 19:42:04 default-k8s-diff-port-813397 crio[778]: time="2025-10-27T19:42:04.351208725Z" level=info msg="Creating container: default/busybox/busybox" id=89acb933-4ac8-4939-a847-f44e262a0770 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 19:42:04 default-k8s-diff-port-813397 crio[778]: time="2025-10-27T19:42:04.351368836Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:42:04 default-k8s-diff-port-813397 crio[778]: time="2025-10-27T19:42:04.355564305Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:42:04 default-k8s-diff-port-813397 crio[778]: time="2025-10-27T19:42:04.356196924Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:42:04 default-k8s-diff-port-813397 crio[778]: time="2025-10-27T19:42:04.382971405Z" level=info msg="Created container 7bebdec0f88f8ab98cf957fef71d9b17eb35da8bcf9f422485b781aeb8b402ce: default/busybox/busybox" id=89acb933-4ac8-4939-a847-f44e262a0770 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 19:42:04 default-k8s-diff-port-813397 crio[778]: time="2025-10-27T19:42:04.383806078Z" level=info msg="Starting container: 7bebdec0f88f8ab98cf957fef71d9b17eb35da8bcf9f422485b781aeb8b402ce" id=5535e9d1-df06-4edb-ae1c-4e8708dc63d1 name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 19:42:04 default-k8s-diff-port-813397 crio[778]: time="2025-10-27T19:42:04.386113768Z" level=info msg="Started container" PID=1921 containerID=7bebdec0f88f8ab98cf957fef71d9b17eb35da8bcf9f422485b781aeb8b402ce description=default/busybox/busybox id=5535e9d1-df06-4edb-ae1c-4e8708dc63d1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=168e51b295c09d5f7230505e93241f0be92925cbfd1129f7fa70d32285450027
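The CRI-O trace above shows the on-demand pull path: ImageStatus misses for gcr.io/k8s-minikube/busybox:1.28.4-glibc, so PullImage resolves the tag to its digest before the container is created. The same resolution can be exercised by hand from inside the node; a sketch using minikube ssh with this test's profile name:

	# Mirror the ImageStatus/PullImage calls in the log via crictl
	minikube -p default-k8s-diff-port-813397 ssh -- sudo crictl pull gcr.io/k8s-minikube/busybox:1.28.4-glibc
	minikube -p default-k8s-diff-port-813397 ssh -- sudo crictl images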
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	7bebdec0f88f8       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   9 seconds ago       Running             busybox                   0                   168e51b295c09       busybox                                                default
	10ab5635469a8       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      12 seconds ago      Running             coredns                   0                   3b5fbedb9d2fe       coredns-66bc5c9577-d2trp                               kube-system
	bb705074462de       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago      Running             storage-provisioner       0                   b0afaa8151222       storage-provisioner                                    kube-system
	087dc9e1baca5       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      23 seconds ago      Running             kindnet-cni               0                   8bf10882bdcbd       kindnet-hhddd                                          kube-system
	805b129c066eb       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      23 seconds ago      Running             kube-proxy                0                   c6e68e2f1c6a8       kube-proxy-bldc8                                       kube-system
	0990439aa7f3a       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      34 seconds ago      Running             kube-controller-manager   0                   3afd4fc31c690       kube-controller-manager-default-k8s-diff-port-813397   kube-system
	9a49f7a7b93d7       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      34 seconds ago      Running             kube-scheduler            0                   3d548ffbf6471       kube-scheduler-default-k8s-diff-port-813397            kube-system
	f97ca4acece05       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      34 seconds ago      Running             etcd                      0                   a4246e5f787dd       etcd-default-k8s-diff-port-813397                      kube-system
	97f8d81ed57e9       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      34 seconds ago      Running             kube-apiserver            0                   98836a53c79c4       kube-apiserver-default-k8s-diff-port-813397            kube-system
	
	
	==> coredns [10ab5635469a89812cad5c4291881944330ce3e0cc623b9c7d83851a89a0898c] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46230 - 62163 "HINFO IN 495224254868095790.1320653764496283487. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.894168579s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-813397
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-813397
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a09c31caecd3126cc8337ca95e52380a56f5a0f
	                    minikube.k8s.io/name=default-k8s-diff-port-813397
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_27T19_41_45_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Oct 2025 19:41:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-813397
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Oct 2025 19:42:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Oct 2025 19:42:00 +0000   Mon, 27 Oct 2025 19:41:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Oct 2025 19:42:00 +0000   Mon, 27 Oct 2025 19:41:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Oct 2025 19:42:00 +0000   Mon, 27 Oct 2025 19:41:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Oct 2025 19:42:00 +0000   Mon, 27 Oct 2025 19:42:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-813397
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                7fbc9f19-9330-4688-94ac-b272ce8c2683
	  Boot ID:                    811bd29c-e64e-4acc-9427-bab1f7caed93
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-d2trp                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     24s
	  kube-system                 etcd-default-k8s-diff-port-813397                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         29s
	  kube-system                 kindnet-hhddd                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      24s
	  kube-system                 kube-apiserver-default-k8s-diff-port-813397             250m (3%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-813397    200m (2%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-proxy-bldc8                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                 kube-scheduler-default-k8s-diff-port-813397             100m (1%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 23s   kube-proxy       
	  Normal  Starting                 29s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  29s   kubelet          Node default-k8s-diff-port-813397 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29s   kubelet          Node default-k8s-diff-port-813397 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29s   kubelet          Node default-k8s-diff-port-813397 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           25s   node-controller  Node default-k8s-diff-port-813397 event: Registered Node default-k8s-diff-port-813397 in Controller
	  Normal  NodeReady                13s   kubelet          Node default-k8s-diff-port-813397 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5a 23 52 43 9a ba 08 06
	[  +0.000398] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 12 50 95 0e df 53 08 06
	[Oct27 18:59] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +1.017295] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +1.023893] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +1.023882] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +1.023891] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +1.023851] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +2.047849] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +4.031592] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +8.319143] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[ +16.382183] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[Oct27 19:00] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	
	
	==> etcd [f97ca4acece053bd6577f78832ede8f8f652a69c519ce6cd58e04864f3f38af0] <==
	{"level":"warn","ts":"2025-10-27T19:41:40.467453Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:41:40.476808Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:41:40.485730Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:41:40.494986Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:41:40.503729Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:41:40.513927Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:41:40.525061Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:41:40.535919Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:41:40.556876Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60938","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:41:40.573734Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60942","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:41:40.589699Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60958","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:41:40.598220Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60974","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:41:40.604730Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60992","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:41:40.614562Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:41:40.624237Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32784","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:41:40.633218Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:41:40.643245Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32818","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:41:40.651536Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32830","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:41:40.660743Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:41:40.668058Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:41:40.682764Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:41:40.691569Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32900","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:41:40.699988Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:41:40.763221Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32948","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-27T19:41:53.318743Z","caller":"traceutil/trace.go:172","msg":"trace[571711778] transaction","detail":"{read_only:false; response_revision:383; number_of_response:1; }","duration":"119.660647ms","start":"2025-10-27T19:41:53.199058Z","end":"2025-10-27T19:41:53.318718Z","steps":["trace[571711778] 'process raft request'  (duration: 119.481528ms)"],"step_count":1}
	
	
	==> kernel <==
	 19:42:13 up  2:24,  0 user,  load average: 4.60, 3.48, 2.22
	Linux default-k8s-diff-port-813397 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [087dc9e1baca5b8043eaa37dfdce05e210900f8e966aacb6e55e146c76193d0d] <==
	I1027 19:41:50.039729       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1027 19:41:50.039995       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1027 19:41:50.040172       1 main.go:148] setting mtu 1500 for CNI 
	I1027 19:41:50.040192       1 main.go:178] kindnetd IP family: "ipv4"
	I1027 19:41:50.040220       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-27T19:41:50Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1027 19:41:50.245035       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1027 19:41:50.245102       1 controller.go:381] "Waiting for informer caches to sync"
	I1027 19:41:50.245128       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1027 19:41:50.245312       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1027 19:41:50.835111       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1027 19:41:50.835182       1 metrics.go:72] Registering metrics
	I1027 19:41:50.835286       1 controller.go:711] "Syncing nftables rules"
	I1027 19:42:00.246302       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1027 19:42:00.246412       1 main.go:301] handling current node
	I1027 19:42:10.246030       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1027 19:42:10.246064       1 main.go:301] handling current node
	
	
	==> kube-apiserver [97f8d81ed57e9b42ff9c0232240c4dfa703fbd1641dced1fc62c90106ffe6dee] <==
	I1027 19:41:41.581129       1 policy_source.go:240] refreshing policies
	I1027 19:41:41.583852       1 controller.go:667] quota admission added evaluator for: namespaces
	I1027 19:41:41.626480       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1027 19:41:41.675877       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1027 19:41:41.676193       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 19:41:41.693427       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 19:41:41.693603       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1027 19:41:42.475589       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1027 19:41:42.487691       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1027 19:41:42.487720       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1027 19:41:43.198330       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1027 19:41:43.270692       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1027 19:41:43.378346       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1027 19:41:43.386358       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1027 19:41:43.387955       1 controller.go:667] quota admission added evaluator for: endpoints
	I1027 19:41:43.393434       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1027 19:41:43.603281       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1027 19:41:44.263816       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1027 19:41:44.277354       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1027 19:41:44.288297       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1027 19:41:49.456288       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1027 19:41:49.561014       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 19:41:49.567187       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 19:41:49.608411       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1027 19:42:11.323008       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8444->192.168.85.1:39328: use of closed network connection
	
	
	==> kube-controller-manager [0990439aa7f3a72715bf26dcb02d7ee93c2250fb360cc2674de066895aa5e28d] <==
	I1027 19:41:48.602033       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1027 19:41:48.602086       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1027 19:41:48.602142       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1027 19:41:48.602228       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1027 19:41:48.602291       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1027 19:41:48.602368       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1027 19:41:48.602513       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-813397"
	I1027 19:41:48.602563       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1027 19:41:48.602562       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1027 19:41:48.603575       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1027 19:41:48.606574       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1027 19:41:48.606629       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1027 19:41:48.606681       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1027 19:41:48.606693       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1027 19:41:48.606700       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1027 19:41:48.606709       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1027 19:41:48.606685       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1027 19:41:48.607061       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1027 19:41:48.607113       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1027 19:41:48.608540       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 19:41:48.609649       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1027 19:41:48.613660       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-813397" podCIDRs=["10.244.0.0/24"]
	I1027 19:41:48.620811       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1027 19:41:48.624057       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 19:42:03.604865       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [805b129c066ebc5755c6121a61d342f90e64a682b76c10d04aca7deef96d9f05] <==
	I1027 19:41:49.906986       1 server_linux.go:53] "Using iptables proxy"
	I1027 19:41:49.984189       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1027 19:41:50.084705       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1027 19:41:50.084824       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1027 19:41:50.084928       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1027 19:41:50.105333       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1027 19:41:50.105390       1 server_linux.go:132] "Using iptables Proxier"
	I1027 19:41:50.111164       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1027 19:41:50.111863       1 server.go:527] "Version info" version="v1.34.1"
	I1027 19:41:50.111913       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 19:41:50.114018       1 config.go:106] "Starting endpoint slice config controller"
	I1027 19:41:50.114045       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1027 19:41:50.114091       1 config.go:403] "Starting serviceCIDR config controller"
	I1027 19:41:50.114110       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1027 19:41:50.114196       1 config.go:309] "Starting node config controller"
	I1027 19:41:50.114207       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1027 19:41:50.114215       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1027 19:41:50.114219       1 config.go:200] "Starting service config controller"
	I1027 19:41:50.114226       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1027 19:41:50.216890       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1027 19:41:50.216929       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1027 19:41:50.216950       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [9a49f7a7b93d717785127a8ae18addf5ea226f5e94c89e2b466aedbe107fc2ca] <==
	E1027 19:41:41.539053       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1027 19:41:41.539397       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1027 19:41:41.539101       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1027 19:41:41.539441       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1027 19:41:41.539513       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1027 19:41:41.538509       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1027 19:41:41.539875       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1027 19:41:41.539889       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1027 19:41:41.539986       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1027 19:41:41.539996       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1027 19:41:41.540089       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1027 19:41:42.351862       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1027 19:41:42.354446       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1027 19:41:42.359103       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1027 19:41:42.372560       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1027 19:41:42.386390       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1027 19:41:42.423083       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1027 19:41:42.437964       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1027 19:41:42.517558       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1027 19:41:42.544800       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1027 19:41:42.771151       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1027 19:41:42.844933       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1027 19:41:42.875558       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1027 19:41:42.887808       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	I1027 19:41:44.334529       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 27 19:41:45 default-k8s-diff-port-813397 kubelet[1327]: E1027 19:41:45.177046    1327 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-default-k8s-diff-port-813397\" already exists" pod="kube-system/etcd-default-k8s-diff-port-813397"
	Oct 27 19:41:45 default-k8s-diff-port-813397 kubelet[1327]: I1027 19:41:45.197539    1327 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-default-k8s-diff-port-813397" podStartSLOduration=1.197514561 podStartE2EDuration="1.197514561s" podCreationTimestamp="2025-10-27 19:41:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 19:41:45.187675786 +0000 UTC m=+1.137624929" watchObservedRunningTime="2025-10-27 19:41:45.197514561 +0000 UTC m=+1.147463689"
	Oct 27 19:41:45 default-k8s-diff-port-813397 kubelet[1327]: I1027 19:41:45.209717    1327 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-default-k8s-diff-port-813397" podStartSLOduration=1.209679094 podStartE2EDuration="1.209679094s" podCreationTimestamp="2025-10-27 19:41:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 19:41:45.197746084 +0000 UTC m=+1.147695218" watchObservedRunningTime="2025-10-27 19:41:45.209679094 +0000 UTC m=+1.159628227"
	Oct 27 19:41:45 default-k8s-diff-port-813397 kubelet[1327]: I1027 19:41:45.224847    1327 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-default-k8s-diff-port-813397" podStartSLOduration=1.224820909 podStartE2EDuration="1.224820909s" podCreationTimestamp="2025-10-27 19:41:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 19:41:45.209935803 +0000 UTC m=+1.159884928" watchObservedRunningTime="2025-10-27 19:41:45.224820909 +0000 UTC m=+1.174770040"
	Oct 27 19:41:45 default-k8s-diff-port-813397 kubelet[1327]: I1027 19:41:45.225009    1327 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-default-k8s-diff-port-813397" podStartSLOduration=1.224998496 podStartE2EDuration="1.224998496s" podCreationTimestamp="2025-10-27 19:41:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 19:41:45.224952504 +0000 UTC m=+1.174901637" watchObservedRunningTime="2025-10-27 19:41:45.224998496 +0000 UTC m=+1.174947629"
	Oct 27 19:41:48 default-k8s-diff-port-813397 kubelet[1327]: I1027 19:41:48.686248    1327 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 27 19:41:48 default-k8s-diff-port-813397 kubelet[1327]: I1027 19:41:48.687022    1327 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 27 19:41:49 default-k8s-diff-port-813397 kubelet[1327]: I1027 19:41:49.555916    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1c4e40c1-8157-41f3-9ff0-7c2dcfa3f154-xtables-lock\") pod \"kindnet-hhddd\" (UID: \"1c4e40c1-8157-41f3-9ff0-7c2dcfa3f154\") " pod="kube-system/kindnet-hhddd"
	Oct 27 19:41:49 default-k8s-diff-port-813397 kubelet[1327]: I1027 19:41:49.555968    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1c4e40c1-8157-41f3-9ff0-7c2dcfa3f154-lib-modules\") pod \"kindnet-hhddd\" (UID: \"1c4e40c1-8157-41f3-9ff0-7c2dcfa3f154\") " pod="kube-system/kindnet-hhddd"
	Oct 27 19:41:49 default-k8s-diff-port-813397 kubelet[1327]: I1027 19:41:49.556002    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hswsb\" (UniqueName: \"kubernetes.io/projected/1c4e40c1-8157-41f3-9ff0-7c2dcfa3f154-kube-api-access-hswsb\") pod \"kindnet-hhddd\" (UID: \"1c4e40c1-8157-41f3-9ff0-7c2dcfa3f154\") " pod="kube-system/kindnet-hhddd"
	Oct 27 19:41:49 default-k8s-diff-port-813397 kubelet[1327]: I1027 19:41:49.556026    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ed0e06ee-d1dd-4efb-8ec1-979cc70b7b23-kube-proxy\") pod \"kube-proxy-bldc8\" (UID: \"ed0e06ee-d1dd-4efb-8ec1-979cc70b7b23\") " pod="kube-system/kube-proxy-bldc8"
	Oct 27 19:41:49 default-k8s-diff-port-813397 kubelet[1327]: I1027 19:41:49.556050    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/1c4e40c1-8157-41f3-9ff0-7c2dcfa3f154-cni-cfg\") pod \"kindnet-hhddd\" (UID: \"1c4e40c1-8157-41f3-9ff0-7c2dcfa3f154\") " pod="kube-system/kindnet-hhddd"
	Oct 27 19:41:49 default-k8s-diff-port-813397 kubelet[1327]: I1027 19:41:49.556071    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ed0e06ee-d1dd-4efb-8ec1-979cc70b7b23-xtables-lock\") pod \"kube-proxy-bldc8\" (UID: \"ed0e06ee-d1dd-4efb-8ec1-979cc70b7b23\") " pod="kube-system/kube-proxy-bldc8"
	Oct 27 19:41:49 default-k8s-diff-port-813397 kubelet[1327]: I1027 19:41:49.556094    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ed0e06ee-d1dd-4efb-8ec1-979cc70b7b23-lib-modules\") pod \"kube-proxy-bldc8\" (UID: \"ed0e06ee-d1dd-4efb-8ec1-979cc70b7b23\") " pod="kube-system/kube-proxy-bldc8"
	Oct 27 19:41:49 default-k8s-diff-port-813397 kubelet[1327]: I1027 19:41:49.556120    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6rcv\" (UniqueName: \"kubernetes.io/projected/ed0e06ee-d1dd-4efb-8ec1-979cc70b7b23-kube-api-access-k6rcv\") pod \"kube-proxy-bldc8\" (UID: \"ed0e06ee-d1dd-4efb-8ec1-979cc70b7b23\") " pod="kube-system/kube-proxy-bldc8"
	Oct 27 19:41:50 default-k8s-diff-port-813397 kubelet[1327]: I1027 19:41:50.193054    1327 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bldc8" podStartSLOduration=1.193029573 podStartE2EDuration="1.193029573s" podCreationTimestamp="2025-10-27 19:41:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 19:41:50.192959369 +0000 UTC m=+6.142908502" watchObservedRunningTime="2025-10-27 19:41:50.193029573 +0000 UTC m=+6.142978705"
	Oct 27 19:41:50 default-k8s-diff-port-813397 kubelet[1327]: I1027 19:41:50.219468    1327 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-hhddd" podStartSLOduration=1.219444146 podStartE2EDuration="1.219444146s" podCreationTimestamp="2025-10-27 19:41:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 19:41:50.219281837 +0000 UTC m=+6.169230974" watchObservedRunningTime="2025-10-27 19:41:50.219444146 +0000 UTC m=+6.169393278"
	Oct 27 19:42:00 default-k8s-diff-port-813397 kubelet[1327]: I1027 19:42:00.458176    1327 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 27 19:42:00 default-k8s-diff-port-813397 kubelet[1327]: I1027 19:42:00.538081    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhbns\" (UniqueName: \"kubernetes.io/projected/9e91fe3a-fd72-4ccb-b553-e13874944e3b-kube-api-access-zhbns\") pod \"storage-provisioner\" (UID: \"9e91fe3a-fd72-4ccb-b553-e13874944e3b\") " pod="kube-system/storage-provisioner"
	Oct 27 19:42:00 default-k8s-diff-port-813397 kubelet[1327]: I1027 19:42:00.538199    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5445ece0-9eae-47b4-8082-3f79d585e065-config-volume\") pod \"coredns-66bc5c9577-d2trp\" (UID: \"5445ece0-9eae-47b4-8082-3f79d585e065\") " pod="kube-system/coredns-66bc5c9577-d2trp"
	Oct 27 19:42:00 default-k8s-diff-port-813397 kubelet[1327]: I1027 19:42:00.538234    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/9e91fe3a-fd72-4ccb-b553-e13874944e3b-tmp\") pod \"storage-provisioner\" (UID: \"9e91fe3a-fd72-4ccb-b553-e13874944e3b\") " pod="kube-system/storage-provisioner"
	Oct 27 19:42:00 default-k8s-diff-port-813397 kubelet[1327]: I1027 19:42:00.538262    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9w4j\" (UniqueName: \"kubernetes.io/projected/5445ece0-9eae-47b4-8082-3f79d585e065-kube-api-access-d9w4j\") pod \"coredns-66bc5c9577-d2trp\" (UID: \"5445ece0-9eae-47b4-8082-3f79d585e065\") " pod="kube-system/coredns-66bc5c9577-d2trp"
	Oct 27 19:42:01 default-k8s-diff-port-813397 kubelet[1327]: I1027 19:42:01.245077    1327 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=12.245046346 podStartE2EDuration="12.245046346s" podCreationTimestamp="2025-10-27 19:41:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 19:42:01.231786326 +0000 UTC m=+17.181735470" watchObservedRunningTime="2025-10-27 19:42:01.245046346 +0000 UTC m=+17.194995479"
	Oct 27 19:42:03 default-k8s-diff-port-813397 kubelet[1327]: I1027 19:42:03.225932    1327 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-d2trp" podStartSLOduration=14.225902915 podStartE2EDuration="14.225902915s" podCreationTimestamp="2025-10-27 19:41:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 19:42:01.245502175 +0000 UTC m=+17.195451305" watchObservedRunningTime="2025-10-27 19:42:03.225902915 +0000 UTC m=+19.175852048"
	Oct 27 19:42:03 default-k8s-diff-port-813397 kubelet[1327]: I1027 19:42:03.355585    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jt4kb\" (UniqueName: \"kubernetes.io/projected/9332b27c-18d7-4f62-aa20-359e62f7d9b4-kube-api-access-jt4kb\") pod \"busybox\" (UID: \"9332b27c-18d7-4f62-aa20-359e62f7d9b4\") " pod="default/busybox"
	
	
	==> storage-provisioner [bb705074462deefcdc51774e6efd124d25f97ecb116e3a3395b7511d90778284] <==
	I1027 19:42:00.862689       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1027 19:42:00.872476       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1027 19:42:00.872565       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1027 19:42:00.876593       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:42:00.883997       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1027 19:42:00.884249       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1027 19:42:00.884364       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cbed91f6-01d4-484d-a71d-80aad634d779", APIVersion:"v1", ResourceVersion:"408", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-813397_da54d8ce-1692-4cbb-b07e-513c008c96b4 became leader
	I1027 19:42:00.884460       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-813397_da54d8ce-1692-4cbb-b07e-513c008c96b4!
	W1027 19:42:00.888032       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:42:00.896577       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1027 19:42:00.984777       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-813397_da54d8ce-1692-4cbb-b07e-513c008c96b4!
	W1027 19:42:02.901033       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:42:02.906699       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:42:04.910183       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:42:04.915121       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:42:06.918982       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:42:06.924002       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:42:08.928565       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:42:08.933279       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:42:10.936931       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:42:11.003989       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:42:13.007637       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:42:13.058534       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
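For orientation, the post-mortem dump above stitches together the output of a handful of standard inspection commands. A minimal sketch of how to gather the same views by hand against this profile (standard kubectl/crictl/journalctl invocations; the exact flags the harness uses may differ, and <container-id> is a placeholder):

	# node view (the "describe nodes" section)
	kubectl --context default-k8s-diff-port-813397 describe node default-k8s-diff-port-813397
	# CRI-O container table (the "container status" section), run inside the node
	out/minikube-linux-amd64 -p default-k8s-diff-port-813397 ssh -- sudo crictl ps -a
	# per-component logs such as coredns/etcd (the "==> <name> [id] <==" sections)
	out/minikube-linux-amd64 -p default-k8s-diff-port-813397 ssh -- sudo crictl logs <container-id>
	# kubelet logs (the "kubelet" section)
	out/minikube-linux-amd64 -p default-k8s-diff-port-813397 ssh -- sudo journalctl -u kubelet --no-pager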
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-813397 -n default-k8s-diff-port-813397
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-813397 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (3.31s)
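For context, the EnableAddonWhileActive step enables an addon against the still-running cluster; in upstream minikube's start_stop_delete_test.go the addon exercised is metrics-server (an assumption about this harness, hedged accordingly). A rough reproduction sketch, with the profile name taken from the failure above and the stock metrics-server label selector:

	# enable the addon while the cluster is up
	out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-813397 --alsologtostderr -v=1
	# confirm the addon's workload was created
	kubectl --context default-k8s-diff-port-813397 get deploy,po -n kube-system -l k8s-app=metrics-server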

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (6.18s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-095885 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p no-preload-095885 --alsologtostderr -v=1: exit status 80 (1.92469331s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-095885 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1027 19:42:34.299801  617247 out.go:360] Setting OutFile to fd 1 ...
	I1027 19:42:34.299959  617247 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:42:34.299976  617247 out.go:374] Setting ErrFile to fd 2...
	I1027 19:42:34.299982  617247 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:42:34.300400  617247 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-352833/.minikube/bin
	I1027 19:42:34.300772  617247 out.go:368] Setting JSON to false
	I1027 19:42:34.300833  617247 mustload.go:65] Loading cluster: no-preload-095885
	I1027 19:42:34.301398  617247 config.go:182] Loaded profile config "no-preload-095885": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:42:34.301827  617247 cli_runner.go:164] Run: docker container inspect no-preload-095885 --format={{.State.Status}}
	I1027 19:42:34.325093  617247 host.go:66] Checking if "no-preload-095885" exists ...
	I1027 19:42:34.325512  617247 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 19:42:34.403113  617247 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:88 SystemTime:2025-10-27 19:42:34.390472154 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 19:42:34.403927  617247 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-095885 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1027 19:42:34.405855  617247 out.go:179] * Pausing node no-preload-095885 ... 
	I1027 19:42:34.407831  617247 host.go:66] Checking if "no-preload-095885" exists ...
	I1027 19:42:34.408397  617247 ssh_runner.go:195] Run: systemctl --version
	I1027 19:42:34.408469  617247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-095885
	I1027 19:42:34.435465  617247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33455 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/no-preload-095885/id_rsa Username:docker}
	I1027 19:42:34.544068  617247 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 19:42:34.570704  617247 pause.go:52] kubelet running: true
	I1027 19:42:34.570871  617247 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1027 19:42:34.768680  617247 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1027 19:42:34.768803  617247 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1027 19:42:34.844468  617247 cri.go:89] found id: "90f66b7e123c368c03fba5eba51565bbc9522c44deaa2e2decbf48428f0a1e87"
	I1027 19:42:34.844496  617247 cri.go:89] found id: "bdd012f57e645223267c73f71de660efe4e4214e579bda4ce609049f9287d78b"
	I1027 19:42:34.844502  617247 cri.go:89] found id: "dfd3413fa181f285a1eacee389efc4b492f13e6936b46ef5bb030474a125d597"
	I1027 19:42:34.844506  617247 cri.go:89] found id: "44fc145d6f9918f3db309fd6e1b253a09d9c17767f2425460e6e412e11200fcf"
	I1027 19:42:34.844511  617247 cri.go:89] found id: "5697b5794786ef7e3e2b6adc476b065be3213886077b1efb7ec8a11a1893a554"
	I1027 19:42:34.844516  617247 cri.go:89] found id: "5cea35874d5acf206b55e45b05f38d78ea9509d27b883c670c280fce93719392"
	I1027 19:42:34.844526  617247 cri.go:89] found id: "6027c707b2e6435987becfbc61cef802217623f703bccb12bb5716bc98c873a9"
	I1027 19:42:34.844531  617247 cri.go:89] found id: "b35fe833b6d5250c5b516a89c49b8f3808e23967fa3f1a0150b2cd20ac6d55ea"
	I1027 19:42:34.844535  617247 cri.go:89] found id: "781c3a34fe9cc4350ebd3342ca9b66e12ce9f3e6795ee22c7d4ed1e31f9fcd7c"
	I1027 19:42:34.844551  617247 cri.go:89] found id: "1784845152b15e895d191df6003d8e4505e0deb1eb12dca53fdb508d01a0c382"
	I1027 19:42:34.844555  617247 cri.go:89] found id: "f095013b1fea34f4a0e54b3bc41fce7b3914256c3abf5dbba3bc51f30acfb4d3"
	I1027 19:42:34.844558  617247 cri.go:89] found id: ""
	I1027 19:42:34.844607  617247 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 19:42:34.861933  617247 retry.go:31] will retry after 289.430716ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T19:42:34Z" level=error msg="open /run/runc: no such file or directory"
	I1027 19:42:35.152342  617247 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 19:42:35.167689  617247 pause.go:52] kubelet running: false
	I1027 19:42:35.167754  617247 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1027 19:42:35.344427  617247 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1027 19:42:35.344542  617247 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1027 19:42:35.417054  617247 cri.go:89] found id: "90f66b7e123c368c03fba5eba51565bbc9522c44deaa2e2decbf48428f0a1e87"
	I1027 19:42:35.417078  617247 cri.go:89] found id: "bdd012f57e645223267c73f71de660efe4e4214e579bda4ce609049f9287d78b"
	I1027 19:42:35.417081  617247 cri.go:89] found id: "dfd3413fa181f285a1eacee389efc4b492f13e6936b46ef5bb030474a125d597"
	I1027 19:42:35.417085  617247 cri.go:89] found id: "44fc145d6f9918f3db309fd6e1b253a09d9c17767f2425460e6e412e11200fcf"
	I1027 19:42:35.417088  617247 cri.go:89] found id: "5697b5794786ef7e3e2b6adc476b065be3213886077b1efb7ec8a11a1893a554"
	I1027 19:42:35.417091  617247 cri.go:89] found id: "5cea35874d5acf206b55e45b05f38d78ea9509d27b883c670c280fce93719392"
	I1027 19:42:35.417094  617247 cri.go:89] found id: "6027c707b2e6435987becfbc61cef802217623f703bccb12bb5716bc98c873a9"
	I1027 19:42:35.417096  617247 cri.go:89] found id: "b35fe833b6d5250c5b516a89c49b8f3808e23967fa3f1a0150b2cd20ac6d55ea"
	I1027 19:42:35.417098  617247 cri.go:89] found id: "781c3a34fe9cc4350ebd3342ca9b66e12ce9f3e6795ee22c7d4ed1e31f9fcd7c"
	I1027 19:42:35.417104  617247 cri.go:89] found id: "1784845152b15e895d191df6003d8e4505e0deb1eb12dca53fdb508d01a0c382"
	I1027 19:42:35.417107  617247 cri.go:89] found id: "f095013b1fea34f4a0e54b3bc41fce7b3914256c3abf5dbba3bc51f30acfb4d3"
	I1027 19:42:35.417110  617247 cri.go:89] found id: ""
	I1027 19:42:35.417173  617247 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 19:42:35.429809  617247 retry.go:31] will retry after 421.901167ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T19:42:35Z" level=error msg="open /run/runc: no such file or directory"
	I1027 19:42:35.852341  617247 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 19:42:35.867157  617247 pause.go:52] kubelet running: false
	I1027 19:42:35.867236  617247 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1027 19:42:36.027842  617247 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1027 19:42:36.027922  617247 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1027 19:42:36.103959  617247 cri.go:89] found id: "90f66b7e123c368c03fba5eba51565bbc9522c44deaa2e2decbf48428f0a1e87"
	I1027 19:42:36.103986  617247 cri.go:89] found id: "bdd012f57e645223267c73f71de660efe4e4214e579bda4ce609049f9287d78b"
	I1027 19:42:36.103991  617247 cri.go:89] found id: "dfd3413fa181f285a1eacee389efc4b492f13e6936b46ef5bb030474a125d597"
	I1027 19:42:36.103998  617247 cri.go:89] found id: "44fc145d6f9918f3db309fd6e1b253a09d9c17767f2425460e6e412e11200fcf"
	I1027 19:42:36.104002  617247 cri.go:89] found id: "5697b5794786ef7e3e2b6adc476b065be3213886077b1efb7ec8a11a1893a554"
	I1027 19:42:36.104007  617247 cri.go:89] found id: "5cea35874d5acf206b55e45b05f38d78ea9509d27b883c670c280fce93719392"
	I1027 19:42:36.104011  617247 cri.go:89] found id: "6027c707b2e6435987becfbc61cef802217623f703bccb12bb5716bc98c873a9"
	I1027 19:42:36.104015  617247 cri.go:89] found id: "b35fe833b6d5250c5b516a89c49b8f3808e23967fa3f1a0150b2cd20ac6d55ea"
	I1027 19:42:36.104019  617247 cri.go:89] found id: "781c3a34fe9cc4350ebd3342ca9b66e12ce9f3e6795ee22c7d4ed1e31f9fcd7c"
	I1027 19:42:36.104038  617247 cri.go:89] found id: "1784845152b15e895d191df6003d8e4505e0deb1eb12dca53fdb508d01a0c382"
	I1027 19:42:36.104046  617247 cri.go:89] found id: "f095013b1fea34f4a0e54b3bc41fce7b3914256c3abf5dbba3bc51f30acfb4d3"
	I1027 19:42:36.104050  617247 cri.go:89] found id: ""
	I1027 19:42:36.104098  617247 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 19:42:36.119315  617247 out.go:203] 
	W1027 19:42:36.120662  617247 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T19:42:36Z" level=error msg="open /run/runc: no such file or directory"
	
	W1027 19:42:36.120686  617247 out.go:285] * 
	W1027 19:42:36.126329  617247 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1027 19:42:36.127581  617247 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p no-preload-095885 --alsologtostderr -v=1 failed: exit status 80
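
Note: the pause failure above has a consistent shape. Every round, `crictl ps` (which queries CRI-O directly) finds the same eleven running containers, but the follow-up `sudo runc list -f json` exits 1 with "open /run/runc: no such file or directory": /run/runc is runc's default state root, and nothing lives there on this CRI-O node, so a bare `runc list` can never succeed no matter how often it is retried. minikube retries with growing, jittered delays (289ms, then 421ms) before surfacing GUEST_PAUSE. A minimal Go sketch of that retry shape follows; the function name, attempt count, and backoff constants are illustrative assumptions, not minikube's actual retry package:

	// retrysketch.go - the retry-with-backoff pattern visible at retry.go:31 above.
	// listRunning, maxAttempts, and the backoff constants are assumptions for
	// illustration, not minikube's real implementation.
	package main

	import (
		"fmt"
		"math/rand"
		"os/exec"
		"time"
	)

	// listRunning mirrors the failing step: `sudo runc list -f json`.
	func listRunning() error {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err != nil {
			return fmt.Errorf("runc list -f json: %w\n%s", err, out)
		}
		return nil
	}

	func main() {
		backoff := 250 * time.Millisecond
		var err error
		for attempt := 0; attempt < 3; attempt++ {
			if err = listRunning(); err == nil {
				fmt.Println("containers listed")
				return
			}
			// Jittered, roughly doubling delays, like the 289ms/421ms waits above.
			delay := backoff + time.Duration(rand.Int63n(int64(backoff)))
			fmt.Printf("will retry after %v: %v\n", delay, err)
			time.Sleep(delay)
			backoff *= 2
		}
		fmt.Printf("exiting after final failure: %v\n", err) // surfaced here as GUEST_PAUSE
	}

Since the missing /run/runc makes the error deterministic, the retries only delay the eventual exit status 80.
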
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-095885
helpers_test.go:243: (dbg) docker inspect no-preload-095885:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4cc5fd138a234f7595c3ab65ba5a1ba3edb67bef1c67cdf1d9cf853e33a19613",
	        "Created": "2025-10-27T19:40:14.994574328Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 604791,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-27T19:41:33.811758412Z",
	            "FinishedAt": "2025-10-27T19:41:32.762280549Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/4cc5fd138a234f7595c3ab65ba5a1ba3edb67bef1c67cdf1d9cf853e33a19613/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4cc5fd138a234f7595c3ab65ba5a1ba3edb67bef1c67cdf1d9cf853e33a19613/hostname",
	        "HostsPath": "/var/lib/docker/containers/4cc5fd138a234f7595c3ab65ba5a1ba3edb67bef1c67cdf1d9cf853e33a19613/hosts",
	        "LogPath": "/var/lib/docker/containers/4cc5fd138a234f7595c3ab65ba5a1ba3edb67bef1c67cdf1d9cf853e33a19613/4cc5fd138a234f7595c3ab65ba5a1ba3edb67bef1c67cdf1d9cf853e33a19613-json.log",
	        "Name": "/no-preload-095885",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-095885:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-095885",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4cc5fd138a234f7595c3ab65ba5a1ba3edb67bef1c67cdf1d9cf853e33a19613",
	                "LowerDir": "/var/lib/docker/overlay2/3da4c71b650bdf8fc78ee58176e8542686fb887dd144b15140026baa7af00784-init/diff:/var/lib/docker/overlay2/71b61ec94610a35f2d924dec358052d4c154c36b3fe219802f60246ca2dc7f45/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3da4c71b650bdf8fc78ee58176e8542686fb887dd144b15140026baa7af00784/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3da4c71b650bdf8fc78ee58176e8542686fb887dd144b15140026baa7af00784/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3da4c71b650bdf8fc78ee58176e8542686fb887dd144b15140026baa7af00784/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-095885",
	                "Source": "/var/lib/docker/volumes/no-preload-095885/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-095885",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-095885",
	                "name.minikube.sigs.k8s.io": "no-preload-095885",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "dbb88d9bfa3a617d013e6f772020de0a3a7a4c6492d664302183ff36f2769477",
	            "SandboxKey": "/var/run/docker/netns/dbb88d9bfa3a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33455"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33456"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33459"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33457"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33458"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-095885": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "26:9a:21:df:8f:ab",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0e1134f19412aeb25ca458bad13821f54c33ad8f2fba3617f69283b33058934f",
	                    "EndpointID": "3fb07083d639ea6220310fe8e716f54c0817a489c49f60dff18813d35670a898",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-095885",
	                        "4cc5fd138a23"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
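
The post-mortem resolves the node's SSH endpoint the same way sshutil did at 19:42:34 (Port:33455): it reads the host binding for the container's 22/tcp out of `docker inspect`, visible above under NetworkSettings.Ports. A small Go sketch of that lookup, reusing the exact -f template from the cli_runner line; the container name is taken from this report and error handling is trimmed for brevity:

	// sshport.go - recover the host port mapped to 22/tcp via docker inspect.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, "no-preload-095885").Output()
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("ssh port:", strings.TrimSpace(string(out))) // "33455" per the dump above
	}
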
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-095885 -n no-preload-095885
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-095885 -n no-preload-095885: exit status 2 (359.454328ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
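
The `--format={{.Host}}` flag runs the status result through a Go text/template, which is why the command prints only `Running` here even though the nonzero exit code still flags degraded components (hence the harness's "may be ok"). A sketch of how such a template evaluates, against an illustrative struct rather than minikube's exact Status type:

	// statusfmt.go - evaluating a --format style Go template against a struct.
	package main

	import (
		"os"
		"text/template"
	)

	// Status is an assumed stand-in for the fields a template like
	// {{.Host}} or {{.Kubelet}} would select on.
	type Status struct {
		Host, Kubelet, APIServer string
	}

	func main() {
		st := Status{Host: "Running", Kubelet: "Stopped", APIServer: "Paused"}
		t := template.Must(template.New("status").Parse("{{.Host}}\n"))
		if err := t.Execute(os.Stdout, st); err != nil {
			panic(err)
		} // prints "Running", matching the stdout above
	}
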
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-095885 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-095885 logs -n 25: (1.327909829s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-051715 image save kicbase/echo-server:functional-051715 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr                                                                               │ functional-051715            │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │ 27 Oct 25 19:04 UTC │
	│ image   │ functional-051715 image rm kicbase/echo-server:functional-051715 --alsologtostderr                                                                                                                                                            │ functional-051715            │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │ 27 Oct 25 19:04 UTC │
	│ addons  │ enable dashboard -p embed-certs-919237 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-919237           │ jenkins │ v1.37.0 │ 27 Oct 25 19:41 UTC │ 27 Oct 25 19:41 UTC │
	│ start   │ -p embed-certs-919237 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-919237           │ jenkins │ v1.37.0 │ 27 Oct 25 19:41 UTC │ 27 Oct 25 19:41 UTC │
	│ image   │ old-k8s-version-468959 image list --format=json                                                                                                                                                                                               │ old-k8s-version-468959       │ jenkins │ v1.37.0 │ 27 Oct 25 19:41 UTC │ 27 Oct 25 19:41 UTC │
	│ pause   │ -p old-k8s-version-468959 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-468959       │ jenkins │ v1.37.0 │ 27 Oct 25 19:41 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-095885 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-095885            │ jenkins │ v1.37.0 │ 27 Oct 25 19:41 UTC │                     │
	│ stop    │ -p no-preload-095885 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-095885            │ jenkins │ v1.37.0 │ 27 Oct 25 19:41 UTC │ 27 Oct 25 19:41 UTC │
	│ delete  │ -p old-k8s-version-468959                                                                                                                                                                                                                     │ old-k8s-version-468959       │ jenkins │ v1.37.0 │ 27 Oct 25 19:41 UTC │ 27 Oct 25 19:41 UTC │
	│ delete  │ -p old-k8s-version-468959                                                                                                                                                                                                                     │ old-k8s-version-468959       │ jenkins │ v1.37.0 │ 27 Oct 25 19:41 UTC │ 27 Oct 25 19:41 UTC │
	│ start   │ -p default-k8s-diff-port-813397 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-813397 │ jenkins │ v1.37.0 │ 27 Oct 25 19:41 UTC │ 27 Oct 25 19:42 UTC │
	│ addons  │ enable dashboard -p no-preload-095885 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-095885            │ jenkins │ v1.37.0 │ 27 Oct 25 19:41 UTC │ 27 Oct 25 19:41 UTC │
	│ start   │ -p no-preload-095885 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-095885            │ jenkins │ v1.37.0 │ 27 Oct 25 19:41 UTC │ 27 Oct 25 19:42 UTC │
	│ image   │ embed-certs-919237 image list --format=json                                                                                                                                                                                                   │ embed-certs-919237           │ jenkins │ v1.37.0 │ 27 Oct 25 19:41 UTC │ 27 Oct 25 19:41 UTC │
	│ pause   │ -p embed-certs-919237 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-919237           │ jenkins │ v1.37.0 │ 27 Oct 25 19:41 UTC │                     │
	│ delete  │ -p embed-certs-919237                                                                                                                                                                                                                         │ embed-certs-919237           │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │ 27 Oct 25 19:42 UTC │
	│ delete  │ -p embed-certs-919237                                                                                                                                                                                                                         │ embed-certs-919237           │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │ 27 Oct 25 19:42 UTC │
	│ start   │ -p newest-cni-677710 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-677710            │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │ 27 Oct 25 19:42 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-813397 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-813397 │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-813397 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-813397 │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │ 27 Oct 25 19:42 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-813397 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-813397 │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │ 27 Oct 25 19:42 UTC │
	│ start   │ -p default-k8s-diff-port-813397 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-813397 │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │                     │
	│ image   │ no-preload-095885 image list --format=json                                                                                                                                                                                                    │ no-preload-095885            │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │ 27 Oct 25 19:42 UTC │
	│ pause   │ -p no-preload-095885 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-095885            │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-677710 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-677710            │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 19:42:33
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 19:42:33.033179  616341 out.go:360] Setting OutFile to fd 1 ...
	I1027 19:42:33.033469  616341 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:42:33.033479  616341 out.go:374] Setting ErrFile to fd 2...
	I1027 19:42:33.033483  616341 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:42:33.033702  616341 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-352833/.minikube/bin
	I1027 19:42:33.034175  616341 out.go:368] Setting JSON to false
	I1027 19:42:33.035543  616341 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":8702,"bootTime":1761585451,"procs":429,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1027 19:42:33.035662  616341 start.go:141] virtualization: kvm guest
	I1027 19:42:33.037878  616341 out.go:179] * [default-k8s-diff-port-813397] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1027 19:42:33.039545  616341 out.go:179]   - MINIKUBE_LOCATION=21801
	I1027 19:42:33.039579  616341 notify.go:220] Checking for updates...
	I1027 19:42:33.042285  616341 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 19:42:33.043786  616341 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21801-352833/kubeconfig
	I1027 19:42:33.045229  616341 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21801-352833/.minikube
	I1027 19:42:33.046625  616341 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1027 19:42:33.048033  616341 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 19:42:33.049907  616341 config.go:182] Loaded profile config "default-k8s-diff-port-813397": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:42:33.050564  616341 driver.go:421] Setting default libvirt URI to qemu:///system
	I1027 19:42:33.077018  616341 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1027 19:42:33.077130  616341 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 19:42:33.159675  616341 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-27 19:42:33.147250392 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 19:42:33.159849  616341 docker.go:318] overlay module found
	I1027 19:42:33.161916  616341 out.go:179] * Using the docker driver based on existing profile
	I1027 19:42:33.163412  616341 start.go:305] selected driver: docker
	I1027 19:42:33.163432  616341 start.go:925] validating driver "docker" against &{Name:default-k8s-diff-port-813397 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-813397 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 19:42:33.163533  616341 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 19:42:33.164108  616341 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 19:42:33.225747  616341 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-27 19:42:33.214711968 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 19:42:33.226122  616341 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 19:42:33.226193  616341 cni.go:84] Creating CNI manager for ""
	I1027 19:42:33.226284  616341 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 19:42:33.226391  616341 start.go:349] cluster config:
	{Name:default-k8s-diff-port-813397 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-813397 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 19:42:33.228420  616341 out.go:179] * Starting "default-k8s-diff-port-813397" primary control-plane node in "default-k8s-diff-port-813397" cluster
	I1027 19:42:33.229915  616341 cache.go:123] Beginning downloading kic base image for docker with crio
	I1027 19:42:33.231391  616341 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1027 19:42:33.232825  616341 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 19:42:33.232876  616341 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21801-352833/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1027 19:42:33.232888  616341 cache.go:58] Caching tarball of preloaded images
	I1027 19:42:33.232950  616341 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1027 19:42:33.232994  616341 preload.go:233] Found /home/jenkins/minikube-integration/21801-352833/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1027 19:42:33.233005  616341 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1027 19:42:33.233127  616341 profile.go:143] Saving config to /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/default-k8s-diff-port-813397/config.json ...
	I1027 19:42:33.255996  616341 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1027 19:42:33.256019  616341 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1027 19:42:33.256040  616341 cache.go:232] Successfully downloaded all kic artifacts
	I1027 19:42:33.256073  616341 start.go:360] acquireMachinesLock for default-k8s-diff-port-813397: {Name:mk62e4c852b8cd14691bbd6055f96686bc7465fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 19:42:33.256154  616341 start.go:364] duration metric: took 59.384µs to acquireMachinesLock for "default-k8s-diff-port-813397"
	I1027 19:42:33.256179  616341 start.go:96] Skipping create...Using existing machine configuration
	I1027 19:42:33.256186  616341 fix.go:54] fixHost starting: 
	I1027 19:42:33.256432  616341 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-813397 --format={{.State.Status}}
	I1027 19:42:33.274831  616341 fix.go:112] recreateIfNeeded on default-k8s-diff-port-813397: state=Stopped err=<nil>
	W1027 19:42:33.274866  616341 fix.go:138] unexpected machine state, will restart: <nil>
	I1027 19:42:32.886497  611121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:42:33.385998  611121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:42:33.885977  611121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:42:33.977454  611121 kubeadm.go:1113] duration metric: took 4.172132503s to wait for elevateKubeSystemPrivileges
	I1027 19:42:33.977594  611121 kubeadm.go:402] duration metric: took 15.032557693s to StartCluster
	I1027 19:42:33.977623  611121 settings.go:142] acquiring lock: {Name:mk8304c2106bf310642e0949fc0266ccb50f2f95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:42:33.977711  611121 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21801-352833/kubeconfig
	I1027 19:42:33.979544  611121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-352833/kubeconfig: {Name:mk24cbe512a6907c874f3fb7a85390a8f9fd2b31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:42:33.979890  611121 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1027 19:42:33.979958  611121 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 19:42:33.980084  611121 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1027 19:42:33.980201  611121 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-677710"
	I1027 19:42:33.980221  611121 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-677710"
	I1027 19:42:33.980235  611121 config.go:182] Loaded profile config "newest-cni-677710": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:42:33.980266  611121 host.go:66] Checking if "newest-cni-677710" exists ...
	I1027 19:42:33.980313  611121 addons.go:69] Setting default-storageclass=true in profile "newest-cni-677710"
	I1027 19:42:33.980337  611121 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-677710"
	I1027 19:42:33.980715  611121 cli_runner.go:164] Run: docker container inspect newest-cni-677710 --format={{.State.Status}}
	I1027 19:42:33.980877  611121 cli_runner.go:164] Run: docker container inspect newest-cni-677710 --format={{.State.Status}}
	I1027 19:42:33.982020  611121 out.go:179] * Verifying Kubernetes components...
	I1027 19:42:33.984499  611121 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 19:42:34.011780  611121 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 19:42:34.013344  611121 addons.go:238] Setting addon default-storageclass=true in "newest-cni-677710"
	I1027 19:42:34.013400  611121 host.go:66] Checking if "newest-cni-677710" exists ...
	I1027 19:42:34.013402  611121 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 19:42:34.013420  611121 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1027 19:42:34.013487  611121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-677710
	I1027 19:42:34.014385  611121 cli_runner.go:164] Run: docker container inspect newest-cni-677710 --format={{.State.Status}}
	I1027 19:42:34.046765  611121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33460 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/newest-cni-677710/id_rsa Username:docker}
	I1027 19:42:34.054286  611121 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1027 19:42:34.054391  611121 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1027 19:42:34.054479  611121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-677710
	I1027 19:42:34.086751  611121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33460 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/newest-cni-677710/id_rsa Username:docker}
	I1027 19:42:34.105832  611121 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1027 19:42:34.155790  611121 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 19:42:34.180884  611121 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 19:42:34.212341  611121 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1027 19:42:34.340378  611121 start.go:976] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1027 19:42:34.341554  611121 api_server.go:52] waiting for apiserver process to appear ...
	I1027 19:42:34.341630  611121 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 19:42:34.552625  611121 api_server.go:72] duration metric: took 572.613293ms to wait for apiserver process to appear ...
	I1027 19:42:34.552654  611121 api_server.go:88] waiting for apiserver healthz status ...
	I1027 19:42:34.552677  611121 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1027 19:42:34.559986  611121 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1027 19:42:34.563286  611121 api_server.go:141] control plane version: v1.34.1
	I1027 19:42:34.563316  611121 api_server.go:131] duration metric: took 10.654828ms to wait for apiserver health ...
	I1027 19:42:34.563326  611121 system_pods.go:43] waiting for kube-system pods to appear ...
	I1027 19:42:34.564458  611121 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1027 19:42:34.566324  611121 addons.go:514] duration metric: took 586.225241ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1027 19:42:34.571035  611121 system_pods.go:59] 8 kube-system pods found
	I1027 19:42:34.571078  611121 system_pods.go:61] "coredns-66bc5c9577-rv72d" [e5a10932-4bc9-46fc-920e-ead5c8e9b60b] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1027 19:42:34.571092  611121 system_pods.go:61] "etcd-newest-cni-677710" [f4a7e071-86a0-40bb-a31a-8db8b73950cf] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1027 19:42:34.571105  611121 system_pods.go:61] "kindnet-w6m47" [e1b6e2a6-b271-4a01-8cfe-c10f73bd2f4d] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1027 19:42:34.571116  611121 system_pods.go:61] "kube-apiserver-newest-cni-677710" [ee755cc7-7067-47ba-b521-590f8a5bfca3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1027 19:42:34.571126  611121 system_pods.go:61] "kube-controller-manager-newest-cni-677710" [e2534c35-6228-4ef5-8d6e-8d48cbe0e9e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1027 19:42:34.571167  611121 system_pods.go:61] "kube-proxy-zg8ds" [89658cd8-0d1d-4a33-a913-add5cbd50df0] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1027 19:42:34.571176  611121 system_pods.go:61] "kube-scheduler-newest-cni-677710" [6324c705-2fd3-40db-b475-3c077531b1a5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1027 19:42:34.571184  611121 system_pods.go:61] "storage-provisioner" [5f120e58-40b3-4814-9025-3a7bc86197ab] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1027 19:42:34.571194  611121 system_pods.go:74] duration metric: took 7.859799ms to wait for pod list to return data ...
	I1027 19:42:34.571205  611121 default_sa.go:34] waiting for default service account to be created ...
	I1027 19:42:34.574444  611121 default_sa.go:45] found service account: "default"
	I1027 19:42:34.574484  611121 default_sa.go:55] duration metric: took 3.269705ms for default service account to be created ...
	I1027 19:42:34.574502  611121 kubeadm.go:586] duration metric: took 594.496871ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1027 19:42:34.574525  611121 node_conditions.go:102] verifying NodePressure condition ...
	I1027 19:42:34.577783  611121 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1027 19:42:34.577813  611121 node_conditions.go:123] node cpu capacity is 8
	I1027 19:42:34.577828  611121 node_conditions.go:105] duration metric: took 3.297767ms to run NodePressure ...
	I1027 19:42:34.577842  611121 start.go:241] waiting for startup goroutines ...
	I1027 19:42:34.844625  611121 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-677710" context rescaled to 1 replicas
	I1027 19:42:34.844653  611121 start.go:246] waiting for cluster config update ...
	I1027 19:42:34.844666  611121 start.go:255] writing updated cluster config ...
	I1027 19:42:34.845011  611121 ssh_runner.go:195] Run: rm -f paused
	I1027 19:42:34.903467  611121 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1027 19:42:34.905349  611121 out.go:179] * Done! kubectl is now configured to use "newest-cni-677710" cluster and "default" namespace by default
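	
	The run above enables the storage-provisioner and default-storageclass addons by applying their manifests with the cluster's bundled kubectl, then polls the apiserver's /healthz endpoint before reporting success. A minimal sketch of reproducing that verification by hand, reusing the paths and endpoint recorded in the log (and assuming anonymous access to /healthz, the Kubernetes default via system:public-info-viewer):
	
	# Apply an addon manifest the same way the log does
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	# Health probe equivalent to the "waiting for apiserver healthz" step; expected body: ok
	curl -ks https://192.168.94.2:8443/healthz
	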
	I1027 19:42:33.462205  565798 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1027 19:42:33.462679  565798 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1027 19:42:33.462744  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 19:42:33.462802  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 19:42:33.493229  565798 cri.go:89] found id: "ca67cda12e0adb415e229ae9e136a15743c92bb79ef8987bb33523c43775a99e"
	I1027 19:42:33.493253  565798 cri.go:89] found id: ""
	I1027 19:42:33.493262  565798 logs.go:282] 1 containers: [ca67cda12e0adb415e229ae9e136a15743c92bb79ef8987bb33523c43775a99e]
	I1027 19:42:33.493314  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:42:33.497447  565798 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 19:42:33.497523  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 19:42:33.527781  565798 cri.go:89] found id: ""
	I1027 19:42:33.527812  565798 logs.go:282] 0 containers: []
	W1027 19:42:33.527823  565798 logs.go:284] No container was found matching "etcd"
	I1027 19:42:33.527831  565798 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 19:42:33.527883  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 19:42:33.562474  565798 cri.go:89] found id: ""
	I1027 19:42:33.562504  565798 logs.go:282] 0 containers: []
	W1027 19:42:33.562514  565798 logs.go:284] No container was found matching "coredns"
	I1027 19:42:33.562522  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 19:42:33.562569  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 19:42:33.597935  565798 cri.go:89] found id: "15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8"
	I1027 19:42:33.597957  565798 cri.go:89] found id: ""
	I1027 19:42:33.597968  565798 logs.go:282] 1 containers: [15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8]
	I1027 19:42:33.598031  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:42:33.602236  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 19:42:33.602306  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 19:42:33.631919  565798 cri.go:89] found id: ""
	I1027 19:42:33.631949  565798 logs.go:282] 0 containers: []
	W1027 19:42:33.631960  565798 logs.go:284] No container was found matching "kube-proxy"
	I1027 19:42:33.631968  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 19:42:33.632030  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 19:42:33.662333  565798 cri.go:89] found id: "4b0186426a494845ce9fa7af7755d0c2f9549f935b11a34bd738219dd3bfd4f5"
	I1027 19:42:33.662361  565798 cri.go:89] found id: ""
	I1027 19:42:33.662375  565798 logs.go:282] 1 containers: [4b0186426a494845ce9fa7af7755d0c2f9549f935b11a34bd738219dd3bfd4f5]
	I1027 19:42:33.662437  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:42:33.666731  565798 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 19:42:33.666814  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 19:42:33.694939  565798 cri.go:89] found id: ""
	I1027 19:42:33.694962  565798 logs.go:282] 0 containers: []
	W1027 19:42:33.694970  565798 logs.go:284] No container was found matching "kindnet"
	I1027 19:42:33.694978  565798 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 19:42:33.695030  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 19:42:33.731061  565798 cri.go:89] found id: ""
	I1027 19:42:33.731090  565798 logs.go:282] 0 containers: []
	W1027 19:42:33.731101  565798 logs.go:284] No container was found matching "storage-provisioner"
	I1027 19:42:33.731113  565798 logs.go:123] Gathering logs for kube-apiserver [ca67cda12e0adb415e229ae9e136a15743c92bb79ef8987bb33523c43775a99e] ...
	I1027 19:42:33.731164  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ca67cda12e0adb415e229ae9e136a15743c92bb79ef8987bb33523c43775a99e"
	I1027 19:42:33.766344  565798 logs.go:123] Gathering logs for kube-scheduler [15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8] ...
	I1027 19:42:33.766395  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8"
	I1027 19:42:33.845064  565798 logs.go:123] Gathering logs for kube-controller-manager [4b0186426a494845ce9fa7af7755d0c2f9549f935b11a34bd738219dd3bfd4f5] ...
	I1027 19:42:33.845110  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4b0186426a494845ce9fa7af7755d0c2f9549f935b11a34bd738219dd3bfd4f5"
	I1027 19:42:33.888734  565798 logs.go:123] Gathering logs for CRI-O ...
	I1027 19:42:33.888770  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 19:42:33.961756  565798 logs.go:123] Gathering logs for container status ...
	I1027 19:42:33.961800  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 19:42:34.008024  565798 logs.go:123] Gathering logs for kubelet ...
	I1027 19:42:34.008068  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 19:42:34.169342  565798 logs.go:123] Gathering logs for dmesg ...
	I1027 19:42:34.169403  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 19:42:34.195362  565798 logs.go:123] Gathering logs for describe nodes ...
	I1027 19:42:34.195402  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 19:42:34.291353  565798 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
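	
	The block above is the standard failure-path diagnostic sweep: for each control-plane component it lists matching containers with crictl, tails their logs, then gathers the CRI-O and kubelet journals, dmesg, and a node description. A sketch of that loop, using the exact commands recorded in the log (component names as above):
	
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	            kube-controller-manager kindnet storage-provisioner; do
	  # List all containers (any state) matching the component name
	  for id in $(sudo crictl ps -a --quiet --name="$name"); do
	    sudo /usr/local/bin/crictl logs --tail 400 "$id"
	  done
	done
	sudo journalctl -u crio -n 400
	sudo journalctl -u kubelet -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	
	The describe-nodes step failed here because the kubeconfig points at localhost:8443 and the apiserver was down, consistent with the "connection refused" healthz result at the top of this block.
	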
	
	
	==> CRI-O <==
	Oct 27 19:42:12 no-preload-095885 crio[562]: time="2025-10-27T19:42:12.759978662Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:42:12 no-preload-095885 crio[562]: time="2025-10-27T19:42:12.760617487Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:42:12 no-preload-095885 crio[562]: time="2025-10-27T19:42:12.93657649Z" level=info msg="Created container 1784845152b15e895d191df6003d8e4505e0deb1eb12dca53fdb508d01a0c382: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-v74pg/dashboard-metrics-scraper" id=0f7390d2-a3ed-4faf-863e-0d97c12fd79e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 19:42:12 no-preload-095885 crio[562]: time="2025-10-27T19:42:12.937434547Z" level=info msg="Starting container: 1784845152b15e895d191df6003d8e4505e0deb1eb12dca53fdb508d01a0c382" id=940ab09c-4321-430f-9580-a208f2cc0eb6 name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 19:42:12 no-preload-095885 crio[562]: time="2025-10-27T19:42:12.940305865Z" level=info msg="Started container" PID=1732 containerID=1784845152b15e895d191df6003d8e4505e0deb1eb12dca53fdb508d01a0c382 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-v74pg/dashboard-metrics-scraper id=940ab09c-4321-430f-9580-a208f2cc0eb6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=fd4e5435c230028ac18f9956b8e0f900af19cdd21a0788ccb18d07b4fef883d4
	Oct 27 19:42:13 no-preload-095885 crio[562]: time="2025-10-27T19:42:13.725571092Z" level=info msg="Removing container: 11b3a2def7cd16147c055529bb6d7e829e50e30c726e7f0d9fa487ee900d163a" id=5dc528a1-badc-4753-b111-6f8d23bcb8bd name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 27 19:42:13 no-preload-095885 crio[562]: time="2025-10-27T19:42:13.749645263Z" level=info msg="Removed container 11b3a2def7cd16147c055529bb6d7e829e50e30c726e7f0d9fa487ee900d163a: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-v74pg/dashboard-metrics-scraper" id=5dc528a1-badc-4753-b111-6f8d23bcb8bd name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 27 19:42:14 no-preload-095885 crio[562]: time="2025-10-27T19:42:14.728651253Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=5e48368f-7a82-4c55-a33c-de9a44cdec34 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 19:42:14 no-preload-095885 crio[562]: time="2025-10-27T19:42:14.729909092Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=8d7a92c4-d1a4-435c-8119-8bfee147dfb5 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 19:42:14 no-preload-095885 crio[562]: time="2025-10-27T19:42:14.731986366Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=a4ce08ee-6574-458a-913c-deef53479d64 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 19:42:14 no-preload-095885 crio[562]: time="2025-10-27T19:42:14.732125975Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:42:14 no-preload-095885 crio[562]: time="2025-10-27T19:42:14.738360832Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:42:14 no-preload-095885 crio[562]: time="2025-10-27T19:42:14.738586993Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/e38a0f4fc20d1bdd18cce94d58edc6c20f5d27eecb1a020c291b1e3c01dd01d9/merged/etc/passwd: no such file or directory"
	Oct 27 19:42:14 no-preload-095885 crio[562]: time="2025-10-27T19:42:14.73862605Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/e38a0f4fc20d1bdd18cce94d58edc6c20f5d27eecb1a020c291b1e3c01dd01d9/merged/etc/group: no such file or directory"
	Oct 27 19:42:14 no-preload-095885 crio[562]: time="2025-10-27T19:42:14.738955394Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:42:14 no-preload-095885 crio[562]: time="2025-10-27T19:42:14.776420554Z" level=info msg="Created container 90f66b7e123c368c03fba5eba51565bbc9522c44deaa2e2decbf48428f0a1e87: kube-system/storage-provisioner/storage-provisioner" id=a4ce08ee-6574-458a-913c-deef53479d64 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 19:42:14 no-preload-095885 crio[562]: time="2025-10-27T19:42:14.777377101Z" level=info msg="Starting container: 90f66b7e123c368c03fba5eba51565bbc9522c44deaa2e2decbf48428f0a1e87" id=9f80b1b8-f4b4-4f25-ad1c-266a1c9c4658 name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 19:42:14 no-preload-095885 crio[562]: time="2025-10-27T19:42:14.779583993Z" level=info msg="Started container" PID=1746 containerID=90f66b7e123c368c03fba5eba51565bbc9522c44deaa2e2decbf48428f0a1e87 description=kube-system/storage-provisioner/storage-provisioner id=9f80b1b8-f4b4-4f25-ad1c-266a1c9c4658 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c0a24a317520bb34a78fb665332f5f4f86c8bbed7a4d6ff30ea8c98fc06d352b
	Oct 27 19:42:34 no-preload-095885 crio[562]: time="2025-10-27T19:42:34.591449938Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=0d4802f9-4458-46ca-9118-2e1f45f9d427 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 19:42:34 no-preload-095885 crio[562]: time="2025-10-27T19:42:34.593081325Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=e3fa69e6-f967-402d-abbd-4bb7efe520b7 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 19:42:34 no-preload-095885 crio[562]: time="2025-10-27T19:42:34.59421193Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-v74pg/dashboard-metrics-scraper" id=edc82ed4-598c-4956-8daf-2f706f9fab30 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 19:42:34 no-preload-095885 crio[562]: time="2025-10-27T19:42:34.594363797Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:42:34 no-preload-095885 crio[562]: time="2025-10-27T19:42:34.60227058Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:42:34 no-preload-095885 crio[562]: time="2025-10-27T19:42:34.602988482Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:42:34 no-preload-095885 crio[562]: time="2025-10-27T19:42:34.771694806Z" level=info msg="CreateCtr: context was either canceled or the deadline was exceeded: context canceled" id=edc82ed4-598c-4956-8daf-2f706f9fab30 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	90f66b7e123c3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           22 seconds ago      Running             storage-provisioner         1                   c0a24a317520b       storage-provisioner                          kube-system
	1784845152b15       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           24 seconds ago      Exited              dashboard-metrics-scraper   2                   fd4e5435c2300       dashboard-metrics-scraper-6ffb444bf9-v74pg   kubernetes-dashboard
	f095013b1fea3       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   43 seconds ago      Running             kubernetes-dashboard        0                   5882654da0b5a       kubernetes-dashboard-855c9754f9-dqcbh        kubernetes-dashboard
	bdd012f57e645       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           53 seconds ago      Running             coredns                     0                   9b7d174ccc0c5       coredns-66bc5c9577-gwqvg                     kube-system
	9ff1aaf9ba79c       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           53 seconds ago      Running             busybox                     1                   02750f01b1627       busybox                                      default
	dfd3413fa181f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           53 seconds ago      Exited              storage-provisioner         0                   c0a24a317520b       storage-provisioner                          kube-system
	44fc145d6f991       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           53 seconds ago      Running             kindnet-cni                 0                   e39e7607cd0f6       kindnet-8lbz5                                kube-system
	5697b5794786e       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           53 seconds ago      Running             kube-proxy                  0                   b68433f12787b       kube-proxy-wz64m                             kube-system
	5cea35874d5ac       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           56 seconds ago      Running             etcd                        0                   d8e39758cf5a6       etcd-no-preload-095885                       kube-system
	6027c707b2e64       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           56 seconds ago      Running             kube-controller-manager     0                   9cf345ea4c97a       kube-controller-manager-no-preload-095885    kube-system
	b35fe833b6d52       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           56 seconds ago      Running             kube-scheduler              0                   222549238feed       kube-scheduler-no-preload-095885             kube-system
	781c3a34fe9cc       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           56 seconds ago      Running             kube-apiserver              0                   dda2211790c80       kube-apiserver-no-preload-095885             kube-system
	
	
	==> coredns [bdd012f57e645223267c73f71de660efe4e4214e579bda4ce609049f9287d78b] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:37265 - 19078 "HINFO IN 2462662711656140191.3275751827932675110. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.082285381s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
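	
	The i/o timeouts above mean this coredns pod could not reach the kubernetes Service ClusterIP (10.96.0.1:443) for a stretch, consistent with the apiserver restart visible elsewhere in this report. A hedged in-cluster probe (the pod name and image are illustrative, and busybox wget flag support varies by build):
	
	kubectl run netcheck --rm -it --restart=Never --image=busybox:1.36 -- \
	  wget -qO- --no-check-certificate https://10.96.0.1:443/healthz
	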
	
	
	==> describe nodes <==
	Name:               no-preload-095885
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-095885
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a09c31caecd3126cc8337ca95e52380a56f5a0f
	                    minikube.k8s.io/name=no-preload-095885
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_27T19_40_45_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Oct 2025 19:40:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-095885
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Oct 2025 19:42:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Oct 2025 19:42:13 +0000   Mon, 27 Oct 2025 19:40:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Oct 2025 19:42:13 +0000   Mon, 27 Oct 2025 19:40:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Oct 2025 19:42:13 +0000   Mon, 27 Oct 2025 19:40:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Oct 2025 19:42:13 +0000   Mon, 27 Oct 2025 19:41:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-095885
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                71cd584e-1032-4c4b-a2da-7d2af7ed7a93
	  Boot ID:                    811bd29c-e64e-4acc-9427-bab1f7caed93
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 coredns-66bc5c9577-gwqvg                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     108s
	  kube-system                 etcd-no-preload-095885                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         113s
	  kube-system                 kindnet-8lbz5                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      108s
	  kube-system                 kube-apiserver-no-preload-095885              250m (3%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-controller-manager-no-preload-095885     200m (2%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-proxy-wz64m                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-scheduler-no-preload-095885              100m (1%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-v74pg    0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-dqcbh         0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 107s               kube-proxy       
	  Normal  Starting                 53s                kube-proxy       
	  Normal  NodeHasSufficientMemory  113s               kubelet          Node no-preload-095885 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    113s               kubelet          Node no-preload-095885 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     113s               kubelet          Node no-preload-095885 status is now: NodeHasSufficientPID
	  Normal  Starting                 113s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           109s               node-controller  Node no-preload-095885 event: Registered Node no-preload-095885 in Controller
	  Normal  NodeReady                95s                kubelet          Node no-preload-095885 status is now: NodeReady
	  Normal  Starting                 57s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  57s (x8 over 57s)  kubelet          Node no-preload-095885 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    57s (x8 over 57s)  kubelet          Node no-preload-095885 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     57s (x8 over 57s)  kubelet          Node no-preload-095885 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           51s                node-controller  Node no-preload-095885 event: Registered Node no-preload-095885 in Controller
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5a 23 52 43 9a ba 08 06
	[  +0.000398] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 12 50 95 0e df 53 08 06
	[Oct27 18:59] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +1.017295] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +1.023893] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +1.023882] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +1.023891] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +1.023851] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +2.047849] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +4.031592] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +8.319143] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[ +16.382183] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[Oct27 19:00] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	
	
	==> etcd [5cea35874d5acf206b55e45b05f38d78ea9509d27b883c670c280fce93719392] <==
	{"level":"warn","ts":"2025-10-27T19:41:42.358072Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:41:42.366704Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:41:42.377885Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:41:42.394108Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:41:42.401858Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:41:42.410291Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:41:42.416993Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:41:42.424033Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:41:42.432463Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:41:42.443380Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:41:42.451538Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:41:42.458807Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:41:42.480824Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:41:42.489002Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33962","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:41:42.498786Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33974","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:41:42.507345Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34004","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:41:42.515911Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34016","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:41:42.525017Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:41:42.533727Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:41:42.542418Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34080","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:41:42.550349Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:41:42.566910Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:41:42.575891Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:41:42.584174Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:41:42.651905Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34156","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 19:42:37 up  2:25,  0 user,  load average: 4.23, 3.45, 2.24
	Linux no-preload-095885 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [44fc145d6f9918f3db309fd6e1b253a09d9c17767f2425460e6e412e11200fcf] <==
	I1027 19:41:44.160036       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1027 19:41:44.250261       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1027 19:41:44.250503       1 main.go:148] setting mtu 1500 for CNI 
	I1027 19:41:44.250530       1 main.go:178] kindnetd IP family: "ipv4"
	I1027 19:41:44.250560       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-27T19:41:44Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1027 19:41:44.456185       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1027 19:41:44.456251       1 controller.go:381] "Waiting for informer caches to sync"
	I1027 19:41:44.456269       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1027 19:41:44.456473       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1027 19:41:44.856435       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1027 19:41:44.856468       1 metrics.go:72] Registering metrics
	I1027 19:41:44.856542       1 controller.go:711] "Syncing nftables rules"
	I1027 19:41:54.456516       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1027 19:41:54.456612       1 main.go:301] handling current node
	I1027 19:42:04.456947       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1027 19:42:04.456997       1 main.go:301] handling current node
	I1027 19:42:14.456853       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1027 19:42:14.456924       1 main.go:301] handling current node
	I1027 19:42:24.456348       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1027 19:42:24.456394       1 main.go:301] handling current node
	I1027 19:42:34.464985       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1027 19:42:34.465023       1 main.go:301] handling current node
	
	
	==> kube-apiserver [781c3a34fe9cc4350ebd3342ca9b66e12ce9f3e6795ee22c7d4ed1e31f9fcd7c] <==
	I1027 19:41:43.242632       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1027 19:41:43.242742       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1027 19:41:43.242760       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1027 19:41:43.242910       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1027 19:41:43.243726       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1027 19:41:43.242797       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1027 19:41:43.242889       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1027 19:41:43.243418       1 aggregator.go:171] initial CRD sync complete...
	I1027 19:41:43.244541       1 autoregister_controller.go:144] Starting autoregister controller
	I1027 19:41:43.244561       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1027 19:41:43.244569       1 cache.go:39] Caches are synced for autoregister controller
	I1027 19:41:43.253768       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1027 19:41:43.266156       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 19:41:43.275846       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1027 19:41:43.571321       1 controller.go:667] quota admission added evaluator for: namespaces
	I1027 19:41:43.598283       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1027 19:41:43.626463       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1027 19:41:43.679847       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1027 19:41:43.693316       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1027 19:41:43.761114       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.87.219"}
	I1027 19:41:43.773862       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.53.143"}
	I1027 19:41:44.149906       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1027 19:41:46.730056       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1027 19:41:46.832791       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1027 19:41:46.880377       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [6027c707b2e6435987becfbc61cef802217623f703bccb12bb5716bc98c873a9] <==
	I1027 19:41:46.377008       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1027 19:41:46.376994       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1027 19:41:46.377055       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1027 19:41:46.377127       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1027 19:41:46.377244       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1027 19:41:46.377164       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1027 19:41:46.377668       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1027 19:41:46.377739       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1027 19:41:46.377670       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1027 19:41:46.377994       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1027 19:41:46.378020       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 19:41:46.378106       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1027 19:41:46.378116       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1027 19:41:46.379945       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1027 19:41:46.380328       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1027 19:41:46.383194       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1027 19:41:46.383194       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 19:41:46.385359       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1027 19:41:46.385464       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1027 19:41:46.385563       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-095885"
	I1027 19:41:46.385589       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1027 19:41:46.385602       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1027 19:41:46.388840       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1027 19:41:46.388862       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1027 19:41:46.405092       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [5697b5794786ef7e3e2b6adc476b065be3213886077b1efb7ec8a11a1893a554] <==
	I1027 19:41:43.998567       1 server_linux.go:53] "Using iptables proxy"
	I1027 19:41:44.062148       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1027 19:41:44.163237       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1027 19:41:44.163348       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1027 19:41:44.163470       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1027 19:41:44.185514       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1027 19:41:44.185572       1 server_linux.go:132] "Using iptables Proxier"
	I1027 19:41:44.191021       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1027 19:41:44.191512       1 server.go:527] "Version info" version="v1.34.1"
	I1027 19:41:44.191533       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 19:41:44.192818       1 config.go:200] "Starting service config controller"
	I1027 19:41:44.192845       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1027 19:41:44.192885       1 config.go:106] "Starting endpoint slice config controller"
	I1027 19:41:44.192893       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1027 19:41:44.192910       1 config.go:403] "Starting serviceCIDR config controller"
	I1027 19:41:44.192923       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1027 19:41:44.192947       1 config.go:309] "Starting node config controller"
	I1027 19:41:44.192952       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1027 19:41:44.192958       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1027 19:41:44.293935       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1027 19:41:44.293956       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1027 19:41:44.293984       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [b35fe833b6d5250c5b516a89c49b8f3808e23967fa3f1a0150b2cd20ac6d55ea] <==
	I1027 19:41:41.893417       1 serving.go:386] Generated self-signed cert in-memory
	W1027 19:41:43.170447       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1027 19:41:43.170516       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1027 19:41:43.170531       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1027 19:41:43.170541       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1027 19:41:43.215620       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1027 19:41:43.215746       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 19:41:43.219494       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 19:41:43.219601       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 19:41:43.220679       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1027 19:41:43.226488       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1027 19:41:43.320583       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 27 19:41:47 no-preload-095885 kubelet[701]: I1027 19:41:47.118286     701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdzs8\" (UniqueName: \"kubernetes.io/projected/0f07f163-c30e-4605-a6fa-68364ac4eff8-kube-api-access-cdzs8\") pod \"kubernetes-dashboard-855c9754f9-dqcbh\" (UID: \"0f07f163-c30e-4605-a6fa-68364ac4eff8\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-dqcbh"
	Oct 27 19:41:47 no-preload-095885 kubelet[701]: I1027 19:41:47.118331     701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/9ff5d868-93bc-4049-b36c-99bc791224db-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-v74pg\" (UID: \"9ff5d868-93bc-4049-b36c-99bc791224db\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-v74pg"
	Oct 27 19:41:49 no-preload-095885 kubelet[701]: I1027 19:41:49.654317     701 scope.go:117] "RemoveContainer" containerID="c0550830245383e523b5340786789edb0df00da68110989fb49d9f4951f9f50a"
	Oct 27 19:41:50 no-preload-095885 kubelet[701]: I1027 19:41:50.659273     701 scope.go:117] "RemoveContainer" containerID="c0550830245383e523b5340786789edb0df00da68110989fb49d9f4951f9f50a"
	Oct 27 19:41:50 no-preload-095885 kubelet[701]: I1027 19:41:50.659441     701 scope.go:117] "RemoveContainer" containerID="11b3a2def7cd16147c055529bb6d7e829e50e30c726e7f0d9fa487ee900d163a"
	Oct 27 19:41:50 no-preload-095885 kubelet[701]: E1027 19:41:50.659675     701 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-v74pg_kubernetes-dashboard(9ff5d868-93bc-4049-b36c-99bc791224db)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-v74pg" podUID="9ff5d868-93bc-4049-b36c-99bc791224db"
	Oct 27 19:41:50 no-preload-095885 kubelet[701]: I1027 19:41:50.817954     701 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 27 19:41:51 no-preload-095885 kubelet[701]: I1027 19:41:51.664216     701 scope.go:117] "RemoveContainer" containerID="11b3a2def7cd16147c055529bb6d7e829e50e30c726e7f0d9fa487ee900d163a"
	Oct 27 19:41:51 no-preload-095885 kubelet[701]: E1027 19:41:51.664380     701 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-v74pg_kubernetes-dashboard(9ff5d868-93bc-4049-b36c-99bc791224db)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-v74pg" podUID="9ff5d868-93bc-4049-b36c-99bc791224db"
	Oct 27 19:41:53 no-preload-095885 kubelet[701]: I1027 19:41:53.681638     701 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-dqcbh" podStartSLOduration=1.6106176890000001 podStartE2EDuration="7.68161494s" podCreationTimestamp="2025-10-27 19:41:46 +0000 UTC" firstStartedPulling="2025-10-27 19:41:47.330870363 +0000 UTC m=+6.886692489" lastFinishedPulling="2025-10-27 19:41:53.401867597 +0000 UTC m=+12.957689740" observedRunningTime="2025-10-27 19:41:53.681262807 +0000 UTC m=+13.237084956" watchObservedRunningTime="2025-10-27 19:41:53.68161494 +0000 UTC m=+13.237437088"
	Oct 27 19:41:59 no-preload-095885 kubelet[701]: I1027 19:41:59.232034     701 scope.go:117] "RemoveContainer" containerID="11b3a2def7cd16147c055529bb6d7e829e50e30c726e7f0d9fa487ee900d163a"
	Oct 27 19:41:59 no-preload-095885 kubelet[701]: E1027 19:41:59.232252     701 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-v74pg_kubernetes-dashboard(9ff5d868-93bc-4049-b36c-99bc791224db)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-v74pg" podUID="9ff5d868-93bc-4049-b36c-99bc791224db"
	Oct 27 19:42:12 no-preload-095885 kubelet[701]: I1027 19:42:12.590580     701 scope.go:117] "RemoveContainer" containerID="11b3a2def7cd16147c055529bb6d7e829e50e30c726e7f0d9fa487ee900d163a"
	Oct 27 19:42:13 no-preload-095885 kubelet[701]: I1027 19:42:13.723676     701 scope.go:117] "RemoveContainer" containerID="11b3a2def7cd16147c055529bb6d7e829e50e30c726e7f0d9fa487ee900d163a"
	Oct 27 19:42:13 no-preload-095885 kubelet[701]: I1027 19:42:13.723923     701 scope.go:117] "RemoveContainer" containerID="1784845152b15e895d191df6003d8e4505e0deb1eb12dca53fdb508d01a0c382"
	Oct 27 19:42:13 no-preload-095885 kubelet[701]: E1027 19:42:13.724106     701 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-v74pg_kubernetes-dashboard(9ff5d868-93bc-4049-b36c-99bc791224db)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-v74pg" podUID="9ff5d868-93bc-4049-b36c-99bc791224db"
	Oct 27 19:42:14 no-preload-095885 kubelet[701]: I1027 19:42:14.728191     701 scope.go:117] "RemoveContainer" containerID="dfd3413fa181f285a1eacee389efc4b492f13e6936b46ef5bb030474a125d597"
	Oct 27 19:42:19 no-preload-095885 kubelet[701]: I1027 19:42:19.232446     701 scope.go:117] "RemoveContainer" containerID="1784845152b15e895d191df6003d8e4505e0deb1eb12dca53fdb508d01a0c382"
	Oct 27 19:42:19 no-preload-095885 kubelet[701]: E1027 19:42:19.232637     701 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-v74pg_kubernetes-dashboard(9ff5d868-93bc-4049-b36c-99bc791224db)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-v74pg" podUID="9ff5d868-93bc-4049-b36c-99bc791224db"
	Oct 27 19:42:34 no-preload-095885 kubelet[701]: I1027 19:42:34.590881     701 scope.go:117] "RemoveContainer" containerID="1784845152b15e895d191df6003d8e4505e0deb1eb12dca53fdb508d01a0c382"
	Oct 27 19:42:34 no-preload-095885 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 27 19:42:34 no-preload-095885 kubelet[701]: I1027 19:42:34.750864     701 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Oct 27 19:42:34 no-preload-095885 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 27 19:42:34 no-preload-095885 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 27 19:42:34 no-preload-095885 systemd[1]: kubelet.service: Consumed 1.872s CPU time.
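	Note: the kubelet entries above show dashboard-metrics-scraper stuck in CrashLoopBackOff, with the restart back-off doubling from 10s to 20s between attempts (kubelet's standard exponential back-off). A quick manual follow-up is to pull the crashed container's previous logs; a sketch reusing the pod name from the log above, which will differ between runs:
	
	  kubectl --context no-preload-095885 -n kubernetes-dashboard logs dashboard-metrics-scraper-6ffb444bf9-v74pg --previous
	  kubectl --context no-preload-095885 -n kubernetes-dashboard describe pod dashboard-metrics-scraper-6ffb444bf9-v74pg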
	
	
	==> kubernetes-dashboard [f095013b1fea34f4a0e54b3bc41fce7b3914256c3abf5dbba3bc51f30acfb4d3] <==
	2025/10/27 19:41:53 Using namespace: kubernetes-dashboard
	2025/10/27 19:41:53 Using in-cluster config to connect to apiserver
	2025/10/27 19:41:53 Using secret token for csrf signing
	2025/10/27 19:41:53 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/27 19:41:53 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/27 19:41:53 Successful initial request to the apiserver, version: v1.34.1
	2025/10/27 19:41:53 Generating JWE encryption key
	2025/10/27 19:41:53 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/27 19:41:53 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/27 19:41:53 Initializing JWE encryption key from synchronized object
	2025/10/27 19:41:53 Creating in-cluster Sidecar client
	2025/10/27 19:41:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/27 19:41:53 Serving insecurely on HTTP port: 9090
	2025/10/27 19:42:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/27 19:41:53 Starting overwatch
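	Note: the dashboard container itself is healthy, but its Sidecar metric client keeps failing its health check against the dashboard-metrics-scraper service, consistent with the scraper pod crash-looping in the kubelet log above. To confirm the service has no ready endpoints (same context the harness uses):
	
	  kubectl --context no-preload-095885 -n kubernetes-dashboard get svc dashboard-metrics-scraper
	  kubectl --context no-preload-095885 -n kubernetes-dashboard get endpoints dashboard-metrics-scraper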
	
	
	==> storage-provisioner [90f66b7e123c368c03fba5eba51565bbc9522c44deaa2e2decbf48428f0a1e87] <==
	I1027 19:42:14.792065       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1027 19:42:14.801409       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1027 19:42:14.801477       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1027 19:42:14.803994       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:42:18.259846       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:42:22.520683       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:42:26.120450       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:42:29.174621       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:42:32.197030       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:42:32.202081       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1027 19:42:32.202284       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1027 19:42:32.202454       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-095885_52ac8a0a-6502-4eec-889c-7b9d07620ffc!
	I1027 19:42:32.202526       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a17df180-0dc3-44e5-84d2-7fe25e687623", APIVersion:"v1", ResourceVersion:"626", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-095885_52ac8a0a-6502-4eec-889c-7b9d07620ffc became leader
	W1027 19:42:32.204729       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:42:32.211052       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1027 19:42:32.303525       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-095885_52ac8a0a-6502-4eec-889c-7b9d07620ffc!
	W1027 19:42:34.215838       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:42:34.222439       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:42:36.226310       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:42:36.231679       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [dfd3413fa181f285a1eacee389efc4b492f13e6936b46ef5bb030474a125d597] <==
	I1027 19:41:43.970217       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1027 19:42:13.972684       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
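	Note: two storage-provisioner instances appear in this dump. The first (dfd3413f...) exited fatally after a 30s timeout dialing 10.96.0.1:443, the in-cluster apiserver VIP, i.e. it started before cluster networking was usable; the replacement (90f66b7e...) then acquired the k8s.io-minikube-hostpath leader-election lock, which is still Endpoints-based, hence the repeated v1 Endpoints deprecation warnings. A hedged way to probe the VIP from inside the cluster (curlimages/curl is an arbitrary image choice, not one the harness uses):
	
	  kubectl --context no-preload-095885 run vip-probe --rm -it --restart=Never --image=curlimages/curl -- curl -ksS https://10.96.0.1:443/version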
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-095885 -n no-preload-095885
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-095885 -n no-preload-095885: exit status 2 (355.331306ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-095885 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
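Note: exit status 2 from minikube status just flags a component that is not in the Running state, which the harness tolerates for these probes ("may be ok"). The Pause failure itself shows up in the audit table further down: the pause -p no-preload-095885 invocation has no END TIME recorded, suggesting it never completed cleanly. The same probe can be re-run by hand:

	out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-095885 -n no-preload-095885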
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-095885
helpers_test.go:243: (dbg) docker inspect no-preload-095885:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4cc5fd138a234f7595c3ab65ba5a1ba3edb67bef1c67cdf1d9cf853e33a19613",
	        "Created": "2025-10-27T19:40:14.994574328Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 604791,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-27T19:41:33.811758412Z",
	            "FinishedAt": "2025-10-27T19:41:32.762280549Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/4cc5fd138a234f7595c3ab65ba5a1ba3edb67bef1c67cdf1d9cf853e33a19613/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4cc5fd138a234f7595c3ab65ba5a1ba3edb67bef1c67cdf1d9cf853e33a19613/hostname",
	        "HostsPath": "/var/lib/docker/containers/4cc5fd138a234f7595c3ab65ba5a1ba3edb67bef1c67cdf1d9cf853e33a19613/hosts",
	        "LogPath": "/var/lib/docker/containers/4cc5fd138a234f7595c3ab65ba5a1ba3edb67bef1c67cdf1d9cf853e33a19613/4cc5fd138a234f7595c3ab65ba5a1ba3edb67bef1c67cdf1d9cf853e33a19613-json.log",
	        "Name": "/no-preload-095885",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-095885:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-095885",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4cc5fd138a234f7595c3ab65ba5a1ba3edb67bef1c67cdf1d9cf853e33a19613",
	                "LowerDir": "/var/lib/docker/overlay2/3da4c71b650bdf8fc78ee58176e8542686fb887dd144b15140026baa7af00784-init/diff:/var/lib/docker/overlay2/71b61ec94610a35f2d924dec358052d4c154c36b3fe219802f60246ca2dc7f45/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3da4c71b650bdf8fc78ee58176e8542686fb887dd144b15140026baa7af00784/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3da4c71b650bdf8fc78ee58176e8542686fb887dd144b15140026baa7af00784/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3da4c71b650bdf8fc78ee58176e8542686fb887dd144b15140026baa7af00784/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-095885",
	                "Source": "/var/lib/docker/volumes/no-preload-095885/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-095885",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-095885",
	                "name.minikube.sigs.k8s.io": "no-preload-095885",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "dbb88d9bfa3a617d013e6f772020de0a3a7a4c6492d664302183ff36f2769477",
	            "SandboxKey": "/var/run/docker/netns/dbb88d9bfa3a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33455"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33456"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33459"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33457"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33458"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-095885": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "26:9a:21:df:8f:ab",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0e1134f19412aeb25ca458bad13821f54c33ad8f2fba3617f69283b33058934f",
	                    "EndpointID": "3fb07083d639ea6220310fe8e716f54c0817a489c49f60dff18813d35670a898",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-095885",
	                        "4cc5fd138a23"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
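Note: in the inspect output above, .State reports Running:true and Paused:false for the kic container. minikube pause freezes the workloads inside the node container (kubelet and the CRI-O pods) rather than docker-pausing the container itself, so Paused:false here is expected and not by itself evidence of the failure. To pull just the state fields:

	docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' no-preload-095885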
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-095885 -n no-preload-095885
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-095885 -n no-preload-095885: exit status 2 (372.524256ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-095885 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-095885 logs -n 25: (1.236194834s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-051715 image rm kicbase/echo-server:functional-051715 --alsologtostderr                                                                                                                                                            │ functional-051715            │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │ 27 Oct 25 19:04 UTC │
	│ addons  │ enable dashboard -p embed-certs-919237 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-919237           │ jenkins │ v1.37.0 │ 27 Oct 25 19:41 UTC │ 27 Oct 25 19:41 UTC │
	│ start   │ -p embed-certs-919237 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-919237           │ jenkins │ v1.37.0 │ 27 Oct 25 19:41 UTC │ 27 Oct 25 19:41 UTC │
	│ image   │ old-k8s-version-468959 image list --format=json                                                                                                                                                                                               │ old-k8s-version-468959       │ jenkins │ v1.37.0 │ 27 Oct 25 19:41 UTC │ 27 Oct 25 19:41 UTC │
	│ pause   │ -p old-k8s-version-468959 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-468959       │ jenkins │ v1.37.0 │ 27 Oct 25 19:41 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-095885 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-095885            │ jenkins │ v1.37.0 │ 27 Oct 25 19:41 UTC │                     │
	│ stop    │ -p no-preload-095885 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-095885            │ jenkins │ v1.37.0 │ 27 Oct 25 19:41 UTC │ 27 Oct 25 19:41 UTC │
	│ delete  │ -p old-k8s-version-468959                                                                                                                                                                                                                     │ old-k8s-version-468959       │ jenkins │ v1.37.0 │ 27 Oct 25 19:41 UTC │ 27 Oct 25 19:41 UTC │
	│ delete  │ -p old-k8s-version-468959                                                                                                                                                                                                                     │ old-k8s-version-468959       │ jenkins │ v1.37.0 │ 27 Oct 25 19:41 UTC │ 27 Oct 25 19:41 UTC │
	│ start   │ -p default-k8s-diff-port-813397 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-813397 │ jenkins │ v1.37.0 │ 27 Oct 25 19:41 UTC │ 27 Oct 25 19:42 UTC │
	│ addons  │ enable dashboard -p no-preload-095885 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-095885            │ jenkins │ v1.37.0 │ 27 Oct 25 19:41 UTC │ 27 Oct 25 19:41 UTC │
	│ start   │ -p no-preload-095885 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-095885            │ jenkins │ v1.37.0 │ 27 Oct 25 19:41 UTC │ 27 Oct 25 19:42 UTC │
	│ image   │ embed-certs-919237 image list --format=json                                                                                                                                                                                                   │ embed-certs-919237           │ jenkins │ v1.37.0 │ 27 Oct 25 19:41 UTC │ 27 Oct 25 19:41 UTC │
	│ pause   │ -p embed-certs-919237 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-919237           │ jenkins │ v1.37.0 │ 27 Oct 25 19:41 UTC │                     │
	│ delete  │ -p embed-certs-919237                                                                                                                                                                                                                         │ embed-certs-919237           │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │ 27 Oct 25 19:42 UTC │
	│ delete  │ -p embed-certs-919237                                                                                                                                                                                                                         │ embed-certs-919237           │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │ 27 Oct 25 19:42 UTC │
	│ start   │ -p newest-cni-677710 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-677710            │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │ 27 Oct 25 19:42 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-813397 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-813397 │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-813397 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-813397 │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │ 27 Oct 25 19:42 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-813397 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-813397 │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │ 27 Oct 25 19:42 UTC │
	│ start   │ -p default-k8s-diff-port-813397 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-813397 │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │                     │
	│ image   │ no-preload-095885 image list --format=json                                                                                                                                                                                                    │ no-preload-095885            │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │ 27 Oct 25 19:42 UTC │
	│ pause   │ -p no-preload-095885 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-095885            │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-677710 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-677710            │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │                     │
	│ stop    │ -p newest-cni-677710 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-677710            │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 19:42:33
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 19:42:33.033179  616341 out.go:360] Setting OutFile to fd 1 ...
	I1027 19:42:33.033469  616341 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:42:33.033479  616341 out.go:374] Setting ErrFile to fd 2...
	I1027 19:42:33.033483  616341 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:42:33.033702  616341 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-352833/.minikube/bin
	I1027 19:42:33.034175  616341 out.go:368] Setting JSON to false
	I1027 19:42:33.035543  616341 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":8702,"bootTime":1761585451,"procs":429,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1027 19:42:33.035662  616341 start.go:141] virtualization: kvm guest
	I1027 19:42:33.037878  616341 out.go:179] * [default-k8s-diff-port-813397] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1027 19:42:33.039545  616341 out.go:179]   - MINIKUBE_LOCATION=21801
	I1027 19:42:33.039579  616341 notify.go:220] Checking for updates...
	I1027 19:42:33.042285  616341 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 19:42:33.043786  616341 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21801-352833/kubeconfig
	I1027 19:42:33.045229  616341 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21801-352833/.minikube
	I1027 19:42:33.046625  616341 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1027 19:42:33.048033  616341 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 19:42:33.049907  616341 config.go:182] Loaded profile config "default-k8s-diff-port-813397": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:42:33.050564  616341 driver.go:421] Setting default libvirt URI to qemu:///system
	I1027 19:42:33.077018  616341 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1027 19:42:33.077130  616341 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 19:42:33.159675  616341 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-27 19:42:33.147250392 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 19:42:33.159849  616341 docker.go:318] overlay module found
	I1027 19:42:33.161916  616341 out.go:179] * Using the docker driver based on existing profile
	I1027 19:42:33.163412  616341 start.go:305] selected driver: docker
	I1027 19:42:33.163432  616341 start.go:925] validating driver "docker" against &{Name:default-k8s-diff-port-813397 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-813397 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 19:42:33.163533  616341 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 19:42:33.164108  616341 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 19:42:33.225747  616341 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-27 19:42:33.214711968 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 19:42:33.226122  616341 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 19:42:33.226193  616341 cni.go:84] Creating CNI manager for ""
	I1027 19:42:33.226284  616341 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 19:42:33.226391  616341 start.go:349] cluster config:
	{Name:default-k8s-diff-port-813397 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-813397 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 19:42:33.228420  616341 out.go:179] * Starting "default-k8s-diff-port-813397" primary control-plane node in "default-k8s-diff-port-813397" cluster
	I1027 19:42:33.229915  616341 cache.go:123] Beginning downloading kic base image for docker with crio
	I1027 19:42:33.231391  616341 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1027 19:42:33.232825  616341 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 19:42:33.232876  616341 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21801-352833/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1027 19:42:33.232888  616341 cache.go:58] Caching tarball of preloaded images
	I1027 19:42:33.232950  616341 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1027 19:42:33.232994  616341 preload.go:233] Found /home/jenkins/minikube-integration/21801-352833/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1027 19:42:33.233005  616341 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1027 19:42:33.233127  616341 profile.go:143] Saving config to /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/default-k8s-diff-port-813397/config.json ...
	I1027 19:42:33.255996  616341 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1027 19:42:33.256019  616341 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1027 19:42:33.256040  616341 cache.go:232] Successfully downloaded all kic artifacts
	I1027 19:42:33.256073  616341 start.go:360] acquireMachinesLock for default-k8s-diff-port-813397: {Name:mk62e4c852b8cd14691bbd6055f96686bc7465fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 19:42:33.256154  616341 start.go:364] duration metric: took 59.384µs to acquireMachinesLock for "default-k8s-diff-port-813397"
	I1027 19:42:33.256179  616341 start.go:96] Skipping create...Using existing machine configuration
	I1027 19:42:33.256186  616341 fix.go:54] fixHost starting: 
	I1027 19:42:33.256432  616341 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-813397 --format={{.State.Status}}
	I1027 19:42:33.274831  616341 fix.go:112] recreateIfNeeded on default-k8s-diff-port-813397: state=Stopped err=<nil>
	W1027 19:42:33.274866  616341 fix.go:138] unexpected machine state, will restart: <nil>
	I1027 19:42:32.886497  611121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:42:33.385998  611121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:42:33.885977  611121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:42:33.977454  611121 kubeadm.go:1113] duration metric: took 4.172132503s to wait for elevateKubeSystemPrivileges
	I1027 19:42:33.977594  611121 kubeadm.go:402] duration metric: took 15.032557693s to StartCluster
	I1027 19:42:33.977623  611121 settings.go:142] acquiring lock: {Name:mk8304c2106bf310642e0949fc0266ccb50f2f95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:42:33.977711  611121 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21801-352833/kubeconfig
	I1027 19:42:33.979544  611121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-352833/kubeconfig: {Name:mk24cbe512a6907c874f3fb7a85390a8f9fd2b31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:42:33.979890  611121 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1027 19:42:33.979958  611121 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 19:42:33.980084  611121 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1027 19:42:33.980201  611121 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-677710"
	I1027 19:42:33.980221  611121 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-677710"
	I1027 19:42:33.980235  611121 config.go:182] Loaded profile config "newest-cni-677710": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:42:33.980266  611121 host.go:66] Checking if "newest-cni-677710" exists ...
	I1027 19:42:33.980313  611121 addons.go:69] Setting default-storageclass=true in profile "newest-cni-677710"
	I1027 19:42:33.980337  611121 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-677710"
	I1027 19:42:33.980715  611121 cli_runner.go:164] Run: docker container inspect newest-cni-677710 --format={{.State.Status}}
	I1027 19:42:33.980877  611121 cli_runner.go:164] Run: docker container inspect newest-cni-677710 --format={{.State.Status}}
	I1027 19:42:33.982020  611121 out.go:179] * Verifying Kubernetes components...
	I1027 19:42:33.984499  611121 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 19:42:34.011780  611121 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 19:42:34.013344  611121 addons.go:238] Setting addon default-storageclass=true in "newest-cni-677710"
	I1027 19:42:34.013400  611121 host.go:66] Checking if "newest-cni-677710" exists ...
	I1027 19:42:34.013402  611121 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 19:42:34.013420  611121 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1027 19:42:34.013487  611121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-677710
	I1027 19:42:34.014385  611121 cli_runner.go:164] Run: docker container inspect newest-cni-677710 --format={{.State.Status}}
	I1027 19:42:34.046765  611121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33460 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/newest-cni-677710/id_rsa Username:docker}
	I1027 19:42:34.054286  611121 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1027 19:42:34.054391  611121 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1027 19:42:34.054479  611121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-677710
	I1027 19:42:34.086751  611121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33460 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/newest-cni-677710/id_rsa Username:docker}
	I1027 19:42:34.105832  611121 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1027 19:42:34.155790  611121 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 19:42:34.180884  611121 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 19:42:34.212341  611121 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1027 19:42:34.340378  611121 start.go:976] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1027 19:42:34.341554  611121 api_server.go:52] waiting for apiserver process to appear ...
	I1027 19:42:34.341630  611121 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 19:42:34.552625  611121 api_server.go:72] duration metric: took 572.613293ms to wait for apiserver process to appear ...
	I1027 19:42:34.552654  611121 api_server.go:88] waiting for apiserver healthz status ...
	I1027 19:42:34.552677  611121 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1027 19:42:34.559986  611121 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1027 19:42:34.563286  611121 api_server.go:141] control plane version: v1.34.1
	I1027 19:42:34.563316  611121 api_server.go:131] duration metric: took 10.654828ms to wait for apiserver health ...
	I1027 19:42:34.563326  611121 system_pods.go:43] waiting for kube-system pods to appear ...
	I1027 19:42:34.564458  611121 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1027 19:42:34.566324  611121 addons.go:514] duration metric: took 586.225241ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1027 19:42:34.571035  611121 system_pods.go:59] 8 kube-system pods found
	I1027 19:42:34.571078  611121 system_pods.go:61] "coredns-66bc5c9577-rv72d" [e5a10932-4bc9-46fc-920e-ead5c8e9b60b] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1027 19:42:34.571092  611121 system_pods.go:61] "etcd-newest-cni-677710" [f4a7e071-86a0-40bb-a31a-8db8b73950cf] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1027 19:42:34.571105  611121 system_pods.go:61] "kindnet-w6m47" [e1b6e2a6-b271-4a01-8cfe-c10f73bd2f4d] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1027 19:42:34.571116  611121 system_pods.go:61] "kube-apiserver-newest-cni-677710" [ee755cc7-7067-47ba-b521-590f8a5bfca3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1027 19:42:34.571126  611121 system_pods.go:61] "kube-controller-manager-newest-cni-677710" [e2534c35-6228-4ef5-8d6e-8d48cbe0e9e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1027 19:42:34.571167  611121 system_pods.go:61] "kube-proxy-zg8ds" [89658cd8-0d1d-4a33-a913-add5cbd50df0] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1027 19:42:34.571176  611121 system_pods.go:61] "kube-scheduler-newest-cni-677710" [6324c705-2fd3-40db-b475-3c077531b1a5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1027 19:42:34.571184  611121 system_pods.go:61] "storage-provisioner" [5f120e58-40b3-4814-9025-3a7bc86197ab] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1027 19:42:34.571194  611121 system_pods.go:74] duration metric: took 7.859799ms to wait for pod list to return data ...
	I1027 19:42:34.571205  611121 default_sa.go:34] waiting for default service account to be created ...
	I1027 19:42:34.574444  611121 default_sa.go:45] found service account: "default"
	I1027 19:42:34.574484  611121 default_sa.go:55] duration metric: took 3.269705ms for default service account to be created ...
	I1027 19:42:34.574502  611121 kubeadm.go:586] duration metric: took 594.496871ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1027 19:42:34.574525  611121 node_conditions.go:102] verifying NodePressure condition ...
	I1027 19:42:34.577783  611121 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1027 19:42:34.577813  611121 node_conditions.go:123] node cpu capacity is 8
	I1027 19:42:34.577828  611121 node_conditions.go:105] duration metric: took 3.297767ms to run NodePressure ...
	I1027 19:42:34.577842  611121 start.go:241] waiting for startup goroutines ...
	I1027 19:42:34.844625  611121 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-677710" context rescaled to 1 replicas
	I1027 19:42:34.844653  611121 start.go:246] waiting for cluster config update ...
	I1027 19:42:34.844666  611121 start.go:255] writing updated cluster config ...
	I1027 19:42:34.845011  611121 ssh_runner.go:195] Run: rm -f paused
	I1027 19:42:34.903467  611121 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1027 19:42:34.905349  611121 out.go:179] * Done! kubectl is now configured to use "newest-cni-677710" cluster and "default" namespace by default
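
The start sequence above gates "Done!" on the component map logged at 19:42:34 (apiserver, default_sa and system_pods true; the rest skipped). That set is driven by minikube's --wait flag; a sketch of re-running with the same components made explicit, plus a spot-check, using the profile name from this log:

    # --wait takes a comma-separated component list; this mirrors the map in the log above
    minikube start -p newest-cni-677710 --wait=apiserver,system_pods,default_sa
    minikube -p newest-cni-677710 status
    kubectl --context newest-cni-677710 -n kube-system get pods
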
	I1027 19:42:33.462205  565798 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1027 19:42:33.462679  565798 api_server.go:269] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": dial tcp 192.168.103.2:8443: connect: connection refused
	I1027 19:42:33.462744  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1027 19:42:33.462802  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1027 19:42:33.493229  565798 cri.go:89] found id: "ca67cda12e0adb415e229ae9e136a15743c92bb79ef8987bb33523c43775a99e"
	I1027 19:42:33.493253  565798 cri.go:89] found id: ""
	I1027 19:42:33.493262  565798 logs.go:282] 1 containers: [ca67cda12e0adb415e229ae9e136a15743c92bb79ef8987bb33523c43775a99e]
	I1027 19:42:33.493314  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:42:33.497447  565798 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1027 19:42:33.497523  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1027 19:42:33.527781  565798 cri.go:89] found id: ""
	I1027 19:42:33.527812  565798 logs.go:282] 0 containers: []
	W1027 19:42:33.527823  565798 logs.go:284] No container was found matching "etcd"
	I1027 19:42:33.527831  565798 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1027 19:42:33.527883  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1027 19:42:33.562474  565798 cri.go:89] found id: ""
	I1027 19:42:33.562504  565798 logs.go:282] 0 containers: []
	W1027 19:42:33.562514  565798 logs.go:284] No container was found matching "coredns"
	I1027 19:42:33.562522  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1027 19:42:33.562569  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1027 19:42:33.597935  565798 cri.go:89] found id: "15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8"
	I1027 19:42:33.597957  565798 cri.go:89] found id: ""
	I1027 19:42:33.597968  565798 logs.go:282] 1 containers: [15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8]
	I1027 19:42:33.598031  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:42:33.602236  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1027 19:42:33.602306  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1027 19:42:33.631919  565798 cri.go:89] found id: ""
	I1027 19:42:33.631949  565798 logs.go:282] 0 containers: []
	W1027 19:42:33.631960  565798 logs.go:284] No container was found matching "kube-proxy"
	I1027 19:42:33.631968  565798 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1027 19:42:33.632030  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1027 19:42:33.662333  565798 cri.go:89] found id: "4b0186426a494845ce9fa7af7755d0c2f9549f935b11a34bd738219dd3bfd4f5"
	I1027 19:42:33.662361  565798 cri.go:89] found id: ""
	I1027 19:42:33.662375  565798 logs.go:282] 1 containers: [4b0186426a494845ce9fa7af7755d0c2f9549f935b11a34bd738219dd3bfd4f5]
	I1027 19:42:33.662437  565798 ssh_runner.go:195] Run: which crictl
	I1027 19:42:33.666731  565798 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1027 19:42:33.666814  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1027 19:42:33.694939  565798 cri.go:89] found id: ""
	I1027 19:42:33.694962  565798 logs.go:282] 0 containers: []
	W1027 19:42:33.694970  565798 logs.go:284] No container was found matching "kindnet"
	I1027 19:42:33.694978  565798 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1027 19:42:33.695030  565798 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1027 19:42:33.731061  565798 cri.go:89] found id: ""
	I1027 19:42:33.731090  565798 logs.go:282] 0 containers: []
	W1027 19:42:33.731101  565798 logs.go:284] No container was found matching "storage-provisioner"
	I1027 19:42:33.731113  565798 logs.go:123] Gathering logs for kube-apiserver [ca67cda12e0adb415e229ae9e136a15743c92bb79ef8987bb33523c43775a99e] ...
	I1027 19:42:33.731164  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ca67cda12e0adb415e229ae9e136a15743c92bb79ef8987bb33523c43775a99e"
	I1027 19:42:33.766344  565798 logs.go:123] Gathering logs for kube-scheduler [15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8] ...
	I1027 19:42:33.766395  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 15dd5243f3c3077492f810bb32a95efb0ed2898467909d2a4466a09e147eeaa8"
	I1027 19:42:33.845064  565798 logs.go:123] Gathering logs for kube-controller-manager [4b0186426a494845ce9fa7af7755d0c2f9549f935b11a34bd738219dd3bfd4f5] ...
	I1027 19:42:33.845110  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4b0186426a494845ce9fa7af7755d0c2f9549f935b11a34bd738219dd3bfd4f5"
	I1027 19:42:33.888734  565798 logs.go:123] Gathering logs for CRI-O ...
	I1027 19:42:33.888770  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1027 19:42:33.961756  565798 logs.go:123] Gathering logs for container status ...
	I1027 19:42:33.961800  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1027 19:42:34.008024  565798 logs.go:123] Gathering logs for kubelet ...
	I1027 19:42:34.008068  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1027 19:42:34.169342  565798 logs.go:123] Gathering logs for dmesg ...
	I1027 19:42:34.169403  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1027 19:42:34.195362  565798 logs.go:123] Gathering logs for describe nodes ...
	I1027 19:42:34.195402  565798 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1027 19:42:34.291353  565798 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
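
The block above is minikube's diagnostics fallback once the apiserver stops answering healthz: enumerate CRI containers per control-plane component, tail the logs of whichever containers exist, then fall back to journald, dmesg and a (failing) describe-nodes. Condensed to the underlying shell commands, all taken verbatim from the ssh_runner calls above (<container-id> is a placeholder):

    # find a component's container, then tail its logs
    sudo crictl ps -a --quiet --name=kube-apiserver
    sudo /usr/local/bin/crictl logs --tail 400 <container-id>
    # runtime- and node-level logs for components with no container
    sudo journalctl -u crio -n 400
    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    # cluster-level view; fails here with "connection refused" because the apiserver is down
    sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
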
	I1027 19:42:33.276804  616341 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-813397" ...
	I1027 19:42:33.276899  616341 cli_runner.go:164] Run: docker start default-k8s-diff-port-813397
	I1027 19:42:33.545589  616341 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-813397 --format={{.State.Status}}
	I1027 19:42:33.567928  616341 kic.go:430] container "default-k8s-diff-port-813397" state is running.
	I1027 19:42:33.568406  616341 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-813397
	I1027 19:42:33.590848  616341 profile.go:143] Saving config to /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/default-k8s-diff-port-813397/config.json ...
	I1027 19:42:33.591193  616341 machine.go:93] provisionDockerMachine start ...
	I1027 19:42:33.591281  616341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-813397
	I1027 19:42:33.613764  616341 main.go:141] libmachine: Using SSH client type: native
	I1027 19:42:33.614098  616341 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33465 <nil> <nil>}
	I1027 19:42:33.614117  616341 main.go:141] libmachine: About to run SSH command:
	hostname
	I1027 19:42:33.614803  616341 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:43962->127.0.0.1:33465: read: connection reset by peer
	I1027 19:42:36.771524  616341 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-813397
	
	I1027 19:42:36.771559  616341 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-813397"
	I1027 19:42:36.771634  616341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-813397
	I1027 19:42:36.791707  616341 main.go:141] libmachine: Using SSH client type: native
	I1027 19:42:36.791922  616341 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33465 <nil> <nil>}
	I1027 19:42:36.791935  616341 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-813397 && echo "default-k8s-diff-port-813397" | sudo tee /etc/hostname
	I1027 19:42:36.957110  616341 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-813397
	
	I1027 19:42:36.957277  616341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-813397
	I1027 19:42:36.984245  616341 main.go:141] libmachine: Using SSH client type: native
	I1027 19:42:36.984596  616341 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33465 <nil> <nil>}
	I1027 19:42:36.984630  616341 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-813397' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-813397/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-813397' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1027 19:42:37.143595  616341 main.go:141] libmachine: SSH cmd err, output: <nil>: 
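
Hostname provisioning above boils down to two idempotent SSH commands: persist the hostname, then make /etc/hosts resolve it via 127.0.1.1. A standalone sketch of the same steps (hostname taken from the log, logic simplified from the script above):

    # persist the hostname
    sudo hostname default-k8s-diff-port-813397 && \
      echo "default-k8s-diff-port-813397" | sudo tee /etc/hostname
    # add a 127.0.1.1 entry only if the name is not already present
    grep -q 'default-k8s-diff-port-813397' /etc/hosts || \
      echo '127.0.1.1 default-k8s-diff-port-813397' | sudo tee -a /etc/hosts
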
	I1027 19:42:37.143634  616341 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21801-352833/.minikube CaCertPath:/home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21801-352833/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21801-352833/.minikube}
	I1027 19:42:37.143678  616341 ubuntu.go:190] setting up certificates
	I1027 19:42:37.143694  616341 provision.go:84] configureAuth start
	I1027 19:42:37.143768  616341 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-813397
	I1027 19:42:37.166158  616341 provision.go:143] copyHostCerts
	I1027 19:42:37.166246  616341 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-352833/.minikube/ca.pem, removing ...
	I1027 19:42:37.166264  616341 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-352833/.minikube/ca.pem
	I1027 19:42:37.166348  616341 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21801-352833/.minikube/ca.pem (1078 bytes)
	I1027 19:42:37.166487  616341 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-352833/.minikube/cert.pem, removing ...
	I1027 19:42:37.166497  616341 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-352833/.minikube/cert.pem
	I1027 19:42:37.166542  616341 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21801-352833/.minikube/cert.pem (1123 bytes)
	I1027 19:42:37.166632  616341 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-352833/.minikube/key.pem, removing ...
	I1027 19:42:37.166641  616341 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-352833/.minikube/key.pem
	I1027 19:42:37.166678  616341 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21801-352833/.minikube/key.pem (1679 bytes)
	I1027 19:42:37.166759  616341 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21801-352833/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-813397 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-813397 localhost minikube]
	I1027 19:42:37.289542  616341 provision.go:177] copyRemoteCerts
	I1027 19:42:37.289615  616341 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1027 19:42:37.289680  616341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-813397
	I1027 19:42:37.312726  616341 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33465 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/default-k8s-diff-port-813397/id_rsa Username:docker}
	I1027 19:42:37.424842  616341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1027 19:42:37.452310  616341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1027 19:42:37.485784  616341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1027 19:42:37.518929  616341 provision.go:87] duration metric: took 375.19895ms to configureAuth
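
configureAuth above copies the host CA and a freshly generated server certificate (note the SAN list: 127.0.0.1, 192.168.85.2, the profile name, localhost, minikube) into /etc/docker on the node. A hypothetical spot-check that all three files landed where copyRemoteCerts put them:

    minikube -p default-k8s-diff-port-813397 ssh -- \
      sudo ls -l /etc/docker/ca.pem /etc/docker/server.pem /etc/docker/server-key.pem
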
	I1027 19:42:37.518953  616341 ubuntu.go:206] setting minikube options for container-runtime
	I1027 19:42:37.519334  616341 config.go:182] Loaded profile config "default-k8s-diff-port-813397": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:42:37.519445  616341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-813397
	I1027 19:42:37.542570  616341 main.go:141] libmachine: Using SSH client type: native
	I1027 19:42:37.542886  616341 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33465 <nil> <nil>}
	I1027 19:42:37.542915  616341 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1027 19:42:37.868284  616341 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1027 19:42:37.868313  616341 machine.go:96] duration metric: took 4.277105247s to provisionDockerMachine
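
The container-runtime step above writes a one-line env drop-in telling CRI-O to treat the service CIDR (10.96.0.0/12) as an insecure registry, then restarts the daemon; the echoed file content in the SSH output confirms the write. Hypothetical verification, reusing paths from the log:

    minikube -p default-k8s-diff-port-813397 ssh -- cat /etc/sysconfig/crio.minikube
    minikube -p default-k8s-diff-port-813397 ssh -- sudo systemctl is-active crio
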
	I1027 19:42:37.868329  616341 start.go:293] postStartSetup for "default-k8s-diff-port-813397" (driver="docker")
	I1027 19:42:37.868344  616341 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1027 19:42:37.868417  616341 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1027 19:42:37.868479  616341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-813397
	I1027 19:42:37.891258  616341 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33465 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/default-k8s-diff-port-813397/id_rsa Username:docker}
	I1027 19:42:37.995548  616341 ssh_runner.go:195] Run: cat /etc/os-release
	I1027 19:42:37.999926  616341 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1027 19:42:37.999959  616341 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1027 19:42:37.999973  616341 filesync.go:126] Scanning /home/jenkins/minikube-integration/21801-352833/.minikube/addons for local assets ...
	I1027 19:42:38.000029  616341 filesync.go:126] Scanning /home/jenkins/minikube-integration/21801-352833/.minikube/files for local assets ...
	I1027 19:42:38.000163  616341 filesync.go:149] local asset: /home/jenkins/minikube-integration/21801-352833/.minikube/files/etc/ssl/certs/3564152.pem -> 3564152.pem in /etc/ssl/certs
	I1027 19:42:38.000407  616341 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1027 19:42:38.009425  616341 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/files/etc/ssl/certs/3564152.pem --> /etc/ssl/certs/3564152.pem (1708 bytes)
	I1027 19:42:38.028514  616341 start.go:296] duration metric: took 160.164183ms for postStartSetup
	I1027 19:42:38.028589  616341 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 19:42:38.028631  616341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-813397
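
postStartSetup creates minikube's directory tree, syncs local assets (here a single cert, 3564152.pem, into /etc/ssl/certs), and finishes with the df pipeline above to sample /var usage. Hypothetical spot-checks for both results:

    minikube -p default-k8s-diff-port-813397 ssh -- ls -l /etc/ssl/certs/3564152.pem
    minikube -p default-k8s-diff-port-813397 ssh -- "df -h /var | awk 'NR==2{print \$5}'"
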
	
	
	==> CRI-O <==
	Oct 27 19:42:12 no-preload-095885 crio[562]: time="2025-10-27T19:42:12.759978662Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:42:12 no-preload-095885 crio[562]: time="2025-10-27T19:42:12.760617487Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:42:12 no-preload-095885 crio[562]: time="2025-10-27T19:42:12.93657649Z" level=info msg="Created container 1784845152b15e895d191df6003d8e4505e0deb1eb12dca53fdb508d01a0c382: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-v74pg/dashboard-metrics-scraper" id=0f7390d2-a3ed-4faf-863e-0d97c12fd79e name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 19:42:12 no-preload-095885 crio[562]: time="2025-10-27T19:42:12.937434547Z" level=info msg="Starting container: 1784845152b15e895d191df6003d8e4505e0deb1eb12dca53fdb508d01a0c382" id=940ab09c-4321-430f-9580-a208f2cc0eb6 name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 19:42:12 no-preload-095885 crio[562]: time="2025-10-27T19:42:12.940305865Z" level=info msg="Started container" PID=1732 containerID=1784845152b15e895d191df6003d8e4505e0deb1eb12dca53fdb508d01a0c382 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-v74pg/dashboard-metrics-scraper id=940ab09c-4321-430f-9580-a208f2cc0eb6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=fd4e5435c230028ac18f9956b8e0f900af19cdd21a0788ccb18d07b4fef883d4
	Oct 27 19:42:13 no-preload-095885 crio[562]: time="2025-10-27T19:42:13.725571092Z" level=info msg="Removing container: 11b3a2def7cd16147c055529bb6d7e829e50e30c726e7f0d9fa487ee900d163a" id=5dc528a1-badc-4753-b111-6f8d23bcb8bd name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 27 19:42:13 no-preload-095885 crio[562]: time="2025-10-27T19:42:13.749645263Z" level=info msg="Removed container 11b3a2def7cd16147c055529bb6d7e829e50e30c726e7f0d9fa487ee900d163a: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-v74pg/dashboard-metrics-scraper" id=5dc528a1-badc-4753-b111-6f8d23bcb8bd name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 27 19:42:14 no-preload-095885 crio[562]: time="2025-10-27T19:42:14.728651253Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=5e48368f-7a82-4c55-a33c-de9a44cdec34 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 19:42:14 no-preload-095885 crio[562]: time="2025-10-27T19:42:14.729909092Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=8d7a92c4-d1a4-435c-8119-8bfee147dfb5 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 19:42:14 no-preload-095885 crio[562]: time="2025-10-27T19:42:14.731986366Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=a4ce08ee-6574-458a-913c-deef53479d64 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 19:42:14 no-preload-095885 crio[562]: time="2025-10-27T19:42:14.732125975Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:42:14 no-preload-095885 crio[562]: time="2025-10-27T19:42:14.738360832Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:42:14 no-preload-095885 crio[562]: time="2025-10-27T19:42:14.738586993Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/e38a0f4fc20d1bdd18cce94d58edc6c20f5d27eecb1a020c291b1e3c01dd01d9/merged/etc/passwd: no such file or directory"
	Oct 27 19:42:14 no-preload-095885 crio[562]: time="2025-10-27T19:42:14.73862605Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/e38a0f4fc20d1bdd18cce94d58edc6c20f5d27eecb1a020c291b1e3c01dd01d9/merged/etc/group: no such file or directory"
	Oct 27 19:42:14 no-preload-095885 crio[562]: time="2025-10-27T19:42:14.738955394Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:42:14 no-preload-095885 crio[562]: time="2025-10-27T19:42:14.776420554Z" level=info msg="Created container 90f66b7e123c368c03fba5eba51565bbc9522c44deaa2e2decbf48428f0a1e87: kube-system/storage-provisioner/storage-provisioner" id=a4ce08ee-6574-458a-913c-deef53479d64 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 19:42:14 no-preload-095885 crio[562]: time="2025-10-27T19:42:14.777377101Z" level=info msg="Starting container: 90f66b7e123c368c03fba5eba51565bbc9522c44deaa2e2decbf48428f0a1e87" id=9f80b1b8-f4b4-4f25-ad1c-266a1c9c4658 name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 19:42:14 no-preload-095885 crio[562]: time="2025-10-27T19:42:14.779583993Z" level=info msg="Started container" PID=1746 containerID=90f66b7e123c368c03fba5eba51565bbc9522c44deaa2e2decbf48428f0a1e87 description=kube-system/storage-provisioner/storage-provisioner id=9f80b1b8-f4b4-4f25-ad1c-266a1c9c4658 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c0a24a317520bb34a78fb665332f5f4f86c8bbed7a4d6ff30ea8c98fc06d352b
	Oct 27 19:42:34 no-preload-095885 crio[562]: time="2025-10-27T19:42:34.591449938Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=0d4802f9-4458-46ca-9118-2e1f45f9d427 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 19:42:34 no-preload-095885 crio[562]: time="2025-10-27T19:42:34.593081325Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=e3fa69e6-f967-402d-abbd-4bb7efe520b7 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 19:42:34 no-preload-095885 crio[562]: time="2025-10-27T19:42:34.59421193Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-v74pg/dashboard-metrics-scraper" id=edc82ed4-598c-4956-8daf-2f706f9fab30 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 19:42:34 no-preload-095885 crio[562]: time="2025-10-27T19:42:34.594363797Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:42:34 no-preload-095885 crio[562]: time="2025-10-27T19:42:34.60227058Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:42:34 no-preload-095885 crio[562]: time="2025-10-27T19:42:34.602988482Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:42:34 no-preload-095885 crio[562]: time="2025-10-27T19:42:34.771694806Z" level=info msg="CreateCtr: context was either canceled or the deadline was exceeded: context canceled" id=edc82ed4-598c-4956-8daf-2f706f9fab30 name=/runtime.v1.RuntimeService/CreateContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	90f66b7e123c3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           24 seconds ago      Running             storage-provisioner         1                   c0a24a317520b       storage-provisioner                          kube-system
	1784845152b15       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           26 seconds ago      Exited              dashboard-metrics-scraper   2                   fd4e5435c2300       dashboard-metrics-scraper-6ffb444bf9-v74pg   kubernetes-dashboard
	f095013b1fea3       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   45 seconds ago      Running             kubernetes-dashboard        0                   5882654da0b5a       kubernetes-dashboard-855c9754f9-dqcbh        kubernetes-dashboard
	bdd012f57e645       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           55 seconds ago      Running             coredns                     0                   9b7d174ccc0c5       coredns-66bc5c9577-gwqvg                     kube-system
	9ff1aaf9ba79c       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           55 seconds ago      Running             busybox                     1                   02750f01b1627       busybox                                      default
	dfd3413fa181f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           55 seconds ago      Exited              storage-provisioner         0                   c0a24a317520b       storage-provisioner                          kube-system
	44fc145d6f991       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           55 seconds ago      Running             kindnet-cni                 0                   e39e7607cd0f6       kindnet-8lbz5                                kube-system
	5697b5794786e       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           55 seconds ago      Running             kube-proxy                  0                   b68433f12787b       kube-proxy-wz64m                             kube-system
	5cea35874d5ac       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           58 seconds ago      Running             etcd                        0                   d8e39758cf5a6       etcd-no-preload-095885                       kube-system
	6027c707b2e64       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           58 seconds ago      Running             kube-controller-manager     0                   9cf345ea4c97a       kube-controller-manager-no-preload-095885    kube-system
	b35fe833b6d52       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           58 seconds ago      Running             kube-scheduler              0                   222549238feed       kube-scheduler-no-preload-095885             kube-system
	781c3a34fe9cc       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           58 seconds ago      Running             kube-apiserver              0                   dda2211790c80       kube-apiserver-no-preload-095885             kube-system
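
The table above is plain "crictl ps -a" output, the same command the container-status gatherer runs earlier in this report. To reproduce it on this node:

    minikube -p no-preload-095885 ssh -- sudo crictl ps -a
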
	
	
	==> coredns [bdd012f57e645223267c73f71de660efe4e4214e579bda4ce609049f9287d78b] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:37265 - 19078 "HINFO IN 2462662711656140191.3275751827932675110. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.082285381s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
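
The coredns log shows the expected cold-start pattern: the server comes up with unsynced Kubernetes API caches, then its client-go reflectors time out against the in-cluster apiserver VIP (10.96.0.1:443) while the control plane restarts. Hypothetical triage commands for this state:

    kubectl -n kube-system get pods -l k8s-app=kube-dns -o wide
    kubectl -n kube-system logs -l k8s-app=kube-dns --tail=20
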
	
	
	==> describe nodes <==
	Name:               no-preload-095885
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-095885
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a09c31caecd3126cc8337ca95e52380a56f5a0f
	                    minikube.k8s.io/name=no-preload-095885
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_27T19_40_45_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Oct 2025 19:40:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-095885
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Oct 2025 19:42:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Oct 2025 19:42:13 +0000   Mon, 27 Oct 2025 19:40:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Oct 2025 19:42:13 +0000   Mon, 27 Oct 2025 19:40:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Oct 2025 19:42:13 +0000   Mon, 27 Oct 2025 19:40:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Oct 2025 19:42:13 +0000   Mon, 27 Oct 2025 19:41:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-095885
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                71cd584e-1032-4c4b-a2da-7d2af7ed7a93
	  Boot ID:                    811bd29c-e64e-4acc-9427-bab1f7caed93
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 coredns-66bc5c9577-gwqvg                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     110s
	  kube-system                 etcd-no-preload-095885                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         115s
	  kube-system                 kindnet-8lbz5                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      110s
	  kube-system                 kube-apiserver-no-preload-095885              250m (3%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-controller-manager-no-preload-095885     200m (2%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-proxy-wz64m                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-scheduler-no-preload-095885              100m (1%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-v74pg    0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-dqcbh         0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 109s               kube-proxy       
	  Normal  Starting                 55s                kube-proxy       
	  Normal  NodeHasSufficientMemory  115s               kubelet          Node no-preload-095885 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    115s               kubelet          Node no-preload-095885 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     115s               kubelet          Node no-preload-095885 status is now: NodeHasSufficientPID
	  Normal  Starting                 115s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           111s               node-controller  Node no-preload-095885 event: Registered Node no-preload-095885 in Controller
	  Normal  NodeReady                97s                kubelet          Node no-preload-095885 status is now: NodeReady
	  Normal  Starting                 59s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  59s (x8 over 59s)  kubelet          Node no-preload-095885 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    59s (x8 over 59s)  kubelet          Node no-preload-095885 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     59s (x8 over 59s)  kubelet          Node no-preload-095885 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           53s                node-controller  Node no-preload-095885 event: Registered Node no-preload-095885 in Controller
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5a 23 52 43 9a ba 08 06
	[  +0.000398] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 12 50 95 0e df 53 08 06
	[Oct27 18:59] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +1.017295] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +1.023893] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +1.023882] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +1.023891] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +1.023851] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +2.047849] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +4.031592] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +8.319143] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[ +16.382183] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[Oct27 19:00] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	
	
	==> etcd [5cea35874d5acf206b55e45b05f38d78ea9509d27b883c670c280fce93719392] <==
	{"level":"warn","ts":"2025-10-27T19:41:42.358072Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:41:42.366704Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:41:42.377885Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:41:42.394108Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:41:42.401858Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:41:42.410291Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:41:42.416993Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:41:42.424033Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:41:42.432463Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:41:42.443380Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:41:42.451538Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:41:42.458807Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:41:42.480824Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:41:42.489002Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33962","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:41:42.498786Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33974","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:41:42.507345Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34004","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:41:42.515911Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34016","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:41:42.525017Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:41:42.533727Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:41:42.542418Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34080","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:41:42.550349Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:41:42.566910Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:41:42.575891Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:41:42.584174Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:41:42.651905Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34156","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 19:42:39 up  2:25,  0 user,  load average: 4.53, 3.53, 2.27
	Linux no-preload-095885 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [44fc145d6f9918f3db309fd6e1b253a09d9c17767f2425460e6e412e11200fcf] <==
	I1027 19:41:44.160036       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1027 19:41:44.250261       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1027 19:41:44.250503       1 main.go:148] setting mtu 1500 for CNI 
	I1027 19:41:44.250530       1 main.go:178] kindnetd IP family: "ipv4"
	I1027 19:41:44.250560       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-27T19:41:44Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1027 19:41:44.456185       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1027 19:41:44.456251       1 controller.go:381] "Waiting for informer caches to sync"
	I1027 19:41:44.456269       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1027 19:41:44.456473       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1027 19:41:44.856435       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1027 19:41:44.856468       1 metrics.go:72] Registering metrics
	I1027 19:41:44.856542       1 controller.go:711] "Syncing nftables rules"
	I1027 19:41:54.456516       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1027 19:41:54.456612       1 main.go:301] handling current node
	I1027 19:42:04.456947       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1027 19:42:04.456997       1 main.go:301] handling current node
	I1027 19:42:14.456853       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1027 19:42:14.456924       1 main.go:301] handling current node
	I1027 19:42:24.456348       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1027 19:42:24.456394       1 main.go:301] handling current node
	I1027 19:42:34.464985       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1027 19:42:34.465023       1 main.go:301] handling current node
	
	
	==> kube-apiserver [781c3a34fe9cc4350ebd3342ca9b66e12ce9f3e6795ee22c7d4ed1e31f9fcd7c] <==
	I1027 19:41:43.242632       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1027 19:41:43.242742       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1027 19:41:43.242760       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1027 19:41:43.242910       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1027 19:41:43.243726       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1027 19:41:43.242797       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1027 19:41:43.242889       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1027 19:41:43.243418       1 aggregator.go:171] initial CRD sync complete...
	I1027 19:41:43.244541       1 autoregister_controller.go:144] Starting autoregister controller
	I1027 19:41:43.244561       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1027 19:41:43.244569       1 cache.go:39] Caches are synced for autoregister controller
	I1027 19:41:43.253768       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1027 19:41:43.266156       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 19:41:43.275846       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1027 19:41:43.571321       1 controller.go:667] quota admission added evaluator for: namespaces
	I1027 19:41:43.598283       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1027 19:41:43.626463       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1027 19:41:43.679847       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1027 19:41:43.693316       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1027 19:41:43.761114       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.87.219"}
	I1027 19:41:43.773862       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.53.143"}
	I1027 19:41:44.149906       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1027 19:41:46.730056       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1027 19:41:46.832791       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1027 19:41:46.880377       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [6027c707b2e6435987becfbc61cef802217623f703bccb12bb5716bc98c873a9] <==
	I1027 19:41:46.377008       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1027 19:41:46.376994       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1027 19:41:46.377055       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1027 19:41:46.377127       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1027 19:41:46.377244       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1027 19:41:46.377164       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1027 19:41:46.377668       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1027 19:41:46.377739       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1027 19:41:46.377670       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1027 19:41:46.377994       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1027 19:41:46.378020       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 19:41:46.378106       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1027 19:41:46.378116       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1027 19:41:46.379945       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1027 19:41:46.380328       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1027 19:41:46.383194       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1027 19:41:46.383194       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 19:41:46.385359       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1027 19:41:46.385464       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1027 19:41:46.385563       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-095885"
	I1027 19:41:46.385589       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1027 19:41:46.385602       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1027 19:41:46.388840       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1027 19:41:46.388862       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1027 19:41:46.405092       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [5697b5794786ef7e3e2b6adc476b065be3213886077b1efb7ec8a11a1893a554] <==
	I1027 19:41:43.998567       1 server_linux.go:53] "Using iptables proxy"
	I1027 19:41:44.062148       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1027 19:41:44.163237       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1027 19:41:44.163348       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1027 19:41:44.163470       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1027 19:41:44.185514       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1027 19:41:44.185572       1 server_linux.go:132] "Using iptables Proxier"
	I1027 19:41:44.191021       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1027 19:41:44.191512       1 server.go:527] "Version info" version="v1.34.1"
	I1027 19:41:44.191533       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 19:41:44.192818       1 config.go:200] "Starting service config controller"
	I1027 19:41:44.192845       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1027 19:41:44.192885       1 config.go:106] "Starting endpoint slice config controller"
	I1027 19:41:44.192893       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1027 19:41:44.192910       1 config.go:403] "Starting serviceCIDR config controller"
	I1027 19:41:44.192923       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1027 19:41:44.192947       1 config.go:309] "Starting node config controller"
	I1027 19:41:44.192952       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1027 19:41:44.192958       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1027 19:41:44.293935       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1027 19:41:44.293956       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1027 19:41:44.293984       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [b35fe833b6d5250c5b516a89c49b8f3808e23967fa3f1a0150b2cd20ac6d55ea] <==
	I1027 19:41:41.893417       1 serving.go:386] Generated self-signed cert in-memory
	W1027 19:41:43.170447       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1027 19:41:43.170516       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1027 19:41:43.170531       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1027 19:41:43.170541       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1027 19:41:43.215620       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1027 19:41:43.215746       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 19:41:43.219494       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 19:41:43.219601       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 19:41:43.220679       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1027 19:41:43.226488       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1027 19:41:43.320583       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
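	
	Note: the W-level authentication lines above appear routinely while kube-scheduler starts before its RBAC objects are readable, and the first of them spells out the fix. One concrete instantiation of that suggested command, with an illustrative binding name and the user taken from the error message (a sketch, not something this run requires):
	
	    kubectl create rolebinding scheduler-authn-reader -n kube-system \
	      --role=extension-apiserver-authentication-reader \
	      --user=system:kube-scheduler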
	
	
	==> kubelet <==
	Oct 27 19:41:47 no-preload-095885 kubelet[701]: I1027 19:41:47.118286     701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdzs8\" (UniqueName: \"kubernetes.io/projected/0f07f163-c30e-4605-a6fa-68364ac4eff8-kube-api-access-cdzs8\") pod \"kubernetes-dashboard-855c9754f9-dqcbh\" (UID: \"0f07f163-c30e-4605-a6fa-68364ac4eff8\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-dqcbh"
	Oct 27 19:41:47 no-preload-095885 kubelet[701]: I1027 19:41:47.118331     701 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/9ff5d868-93bc-4049-b36c-99bc791224db-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-v74pg\" (UID: \"9ff5d868-93bc-4049-b36c-99bc791224db\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-v74pg"
	Oct 27 19:41:49 no-preload-095885 kubelet[701]: I1027 19:41:49.654317     701 scope.go:117] "RemoveContainer" containerID="c0550830245383e523b5340786789edb0df00da68110989fb49d9f4951f9f50a"
	Oct 27 19:41:50 no-preload-095885 kubelet[701]: I1027 19:41:50.659273     701 scope.go:117] "RemoveContainer" containerID="c0550830245383e523b5340786789edb0df00da68110989fb49d9f4951f9f50a"
	Oct 27 19:41:50 no-preload-095885 kubelet[701]: I1027 19:41:50.659441     701 scope.go:117] "RemoveContainer" containerID="11b3a2def7cd16147c055529bb6d7e829e50e30c726e7f0d9fa487ee900d163a"
	Oct 27 19:41:50 no-preload-095885 kubelet[701]: E1027 19:41:50.659675     701 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-v74pg_kubernetes-dashboard(9ff5d868-93bc-4049-b36c-99bc791224db)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-v74pg" podUID="9ff5d868-93bc-4049-b36c-99bc791224db"
	Oct 27 19:41:50 no-preload-095885 kubelet[701]: I1027 19:41:50.817954     701 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 27 19:41:51 no-preload-095885 kubelet[701]: I1027 19:41:51.664216     701 scope.go:117] "RemoveContainer" containerID="11b3a2def7cd16147c055529bb6d7e829e50e30c726e7f0d9fa487ee900d163a"
	Oct 27 19:41:51 no-preload-095885 kubelet[701]: E1027 19:41:51.664380     701 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-v74pg_kubernetes-dashboard(9ff5d868-93bc-4049-b36c-99bc791224db)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-v74pg" podUID="9ff5d868-93bc-4049-b36c-99bc791224db"
	Oct 27 19:41:53 no-preload-095885 kubelet[701]: I1027 19:41:53.681638     701 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-dqcbh" podStartSLOduration=1.6106176890000001 podStartE2EDuration="7.68161494s" podCreationTimestamp="2025-10-27 19:41:46 +0000 UTC" firstStartedPulling="2025-10-27 19:41:47.330870363 +0000 UTC m=+6.886692489" lastFinishedPulling="2025-10-27 19:41:53.401867597 +0000 UTC m=+12.957689740" observedRunningTime="2025-10-27 19:41:53.681262807 +0000 UTC m=+13.237084956" watchObservedRunningTime="2025-10-27 19:41:53.68161494 +0000 UTC m=+13.237437088"
	Oct 27 19:41:59 no-preload-095885 kubelet[701]: I1027 19:41:59.232034     701 scope.go:117] "RemoveContainer" containerID="11b3a2def7cd16147c055529bb6d7e829e50e30c726e7f0d9fa487ee900d163a"
	Oct 27 19:41:59 no-preload-095885 kubelet[701]: E1027 19:41:59.232252     701 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-v74pg_kubernetes-dashboard(9ff5d868-93bc-4049-b36c-99bc791224db)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-v74pg" podUID="9ff5d868-93bc-4049-b36c-99bc791224db"
	Oct 27 19:42:12 no-preload-095885 kubelet[701]: I1027 19:42:12.590580     701 scope.go:117] "RemoveContainer" containerID="11b3a2def7cd16147c055529bb6d7e829e50e30c726e7f0d9fa487ee900d163a"
	Oct 27 19:42:13 no-preload-095885 kubelet[701]: I1027 19:42:13.723676     701 scope.go:117] "RemoveContainer" containerID="11b3a2def7cd16147c055529bb6d7e829e50e30c726e7f0d9fa487ee900d163a"
	Oct 27 19:42:13 no-preload-095885 kubelet[701]: I1027 19:42:13.723923     701 scope.go:117] "RemoveContainer" containerID="1784845152b15e895d191df6003d8e4505e0deb1eb12dca53fdb508d01a0c382"
	Oct 27 19:42:13 no-preload-095885 kubelet[701]: E1027 19:42:13.724106     701 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-v74pg_kubernetes-dashboard(9ff5d868-93bc-4049-b36c-99bc791224db)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-v74pg" podUID="9ff5d868-93bc-4049-b36c-99bc791224db"
	Oct 27 19:42:14 no-preload-095885 kubelet[701]: I1027 19:42:14.728191     701 scope.go:117] "RemoveContainer" containerID="dfd3413fa181f285a1eacee389efc4b492f13e6936b46ef5bb030474a125d597"
	Oct 27 19:42:19 no-preload-095885 kubelet[701]: I1027 19:42:19.232446     701 scope.go:117] "RemoveContainer" containerID="1784845152b15e895d191df6003d8e4505e0deb1eb12dca53fdb508d01a0c382"
	Oct 27 19:42:19 no-preload-095885 kubelet[701]: E1027 19:42:19.232637     701 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-v74pg_kubernetes-dashboard(9ff5d868-93bc-4049-b36c-99bc791224db)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-v74pg" podUID="9ff5d868-93bc-4049-b36c-99bc791224db"
	Oct 27 19:42:34 no-preload-095885 kubelet[701]: I1027 19:42:34.590881     701 scope.go:117] "RemoveContainer" containerID="1784845152b15e895d191df6003d8e4505e0deb1eb12dca53fdb508d01a0c382"
	Oct 27 19:42:34 no-preload-095885 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 27 19:42:34 no-preload-095885 kubelet[701]: I1027 19:42:34.750864     701 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Oct 27 19:42:34 no-preload-095885 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 27 19:42:34 no-preload-095885 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 27 19:42:34 no-preload-095885 systemd[1]: kubelet.service: Consumed 1.872s CPU time.
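	
	Note: the entries above show dashboard-metrics-scraper cycling through CrashLoopBackOff (back-off 10s, then 20s) right up until systemd stops the kubelet. A sketch of how one might pull the crashed container's output, assuming the pod is owned by the usual dashboard-metrics-scraper Deployment behind the 6ffb444bf9 ReplicaSet:
	
	    kubectl --context no-preload-095885 -n kubernetes-dashboard \
	      logs deploy/dashboard-metrics-scraper --previous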
	
	
	==> kubernetes-dashboard [f095013b1fea34f4a0e54b3bc41fce7b3914256c3abf5dbba3bc51f30acfb4d3] <==
	2025/10/27 19:41:53 Starting overwatch
	2025/10/27 19:41:53 Using namespace: kubernetes-dashboard
	2025/10/27 19:41:53 Using in-cluster config to connect to apiserver
	2025/10/27 19:41:53 Using secret token for csrf signing
	2025/10/27 19:41:53 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/27 19:41:53 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/27 19:41:53 Successful initial request to the apiserver, version: v1.34.1
	2025/10/27 19:41:53 Generating JWE encryption key
	2025/10/27 19:41:53 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/27 19:41:53 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/27 19:41:53 Initializing JWE encryption key from synchronized object
	2025/10/27 19:41:53 Creating in-cluster Sidecar client
	2025/10/27 19:41:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/27 19:41:53 Serving insecurely on HTTP port: 9090
	2025/10/27 19:42:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
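	
	Note: the two "Metric client health check failed" lines are the dashboard probing its metrics sidecar Service and retrying every 30 seconds; they line up with the scraper pod's crash loop in the kubelet log above. A quick look at the Service being probed, as a sketch:
	
	    kubectl --context no-preload-095885 -n kubernetes-dashboard get svc dashboard-metrics-scraper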
	
	
	==> storage-provisioner [90f66b7e123c368c03fba5eba51565bbc9522c44deaa2e2decbf48428f0a1e87] <==
	I1027 19:42:14.792065       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1027 19:42:14.801409       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1027 19:42:14.801477       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1027 19:42:14.803994       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:42:18.259846       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:42:22.520683       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:42:26.120450       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:42:29.174621       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:42:32.197030       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:42:32.202081       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1027 19:42:32.202284       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1027 19:42:32.202454       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-095885_52ac8a0a-6502-4eec-889c-7b9d07620ffc!
	I1027 19:42:32.202526       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a17df180-0dc3-44e5-84d2-7fe25e687623", APIVersion:"v1", ResourceVersion:"626", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-095885_52ac8a0a-6502-4eec-889c-7b9d07620ffc became leader
	W1027 19:42:32.204729       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:42:32.211052       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1027 19:42:32.303525       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-095885_52ac8a0a-6502-4eec-889c-7b9d07620ffc!
	W1027 19:42:34.215838       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:42:34.222439       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:42:36.226310       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:42:36.231679       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:42:38.234991       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:42:38.239398       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [dfd3413fa181f285a1eacee389efc4b492f13e6936b46ef5bb030474a125d597] <==
	I1027 19:41:43.970217       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1027 19:42:13.972684       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
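
Note: the F-level line above is the first storage-provisioner instance timing out against the in-cluster apiserver VIP (10.96.0.1); its replacement instance (90f66b7e...) connects and acquires the leader lease about 20 seconds later. A quick reachability probe from inside the node, as a sketch assuming the profile is still running:

	minikube -p no-preload-095885 ssh -- curl -sk --max-time 5 https://10.96.0.1:443/version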
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-095885 -n no-preload-095885
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-095885 -n no-preload-095885: exit status 2 (368.773985ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-095885 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (6.18s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.22s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-677710 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-677710 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (270.855162ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T19:42:35Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-677710 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
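
Note: this failure has the same shape as the other Pause and EnableAddonWhileActive exits in the summary: minikube's paused-state check shells out to `sudo runc list -f json` (per the MK_ADDON_ENABLE_PAUSED stderr above), and runc aborts because /run/runc does not exist inside the kic container. A minimal reproduction sketch, assuming the profile is still up:

	minikube -p newest-cni-677710 ssh -- sudo runc list -f json
	# expected to fail with "open /run/runc: no such file or directory", per the stderr above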
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-677710
helpers_test.go:243: (dbg) docker inspect newest-cni-677710:

-- stdout --
	[
	    {
	        "Id": "62fa20be8510c822a3af68e19caac8c39efa6f456f35c096ab55d9be979a15a7",
	        "Created": "2025-10-27T19:42:13.174761527Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 612331,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-27T19:42:13.214044292Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/62fa20be8510c822a3af68e19caac8c39efa6f456f35c096ab55d9be979a15a7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/62fa20be8510c822a3af68e19caac8c39efa6f456f35c096ab55d9be979a15a7/hostname",
	        "HostsPath": "/var/lib/docker/containers/62fa20be8510c822a3af68e19caac8c39efa6f456f35c096ab55d9be979a15a7/hosts",
	        "LogPath": "/var/lib/docker/containers/62fa20be8510c822a3af68e19caac8c39efa6f456f35c096ab55d9be979a15a7/62fa20be8510c822a3af68e19caac8c39efa6f456f35c096ab55d9be979a15a7-json.log",
	        "Name": "/newest-cni-677710",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-677710:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-677710",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "62fa20be8510c822a3af68e19caac8c39efa6f456f35c096ab55d9be979a15a7",
	                "LowerDir": "/var/lib/docker/overlay2/cb56fe71dd86daf61eed2c8feacba9932a7ceba7713d274439236e8bf12ab0c5-init/diff:/var/lib/docker/overlay2/71b61ec94610a35f2d924dec358052d4c154c36b3fe219802f60246ca2dc7f45/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cb56fe71dd86daf61eed2c8feacba9932a7ceba7713d274439236e8bf12ab0c5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cb56fe71dd86daf61eed2c8feacba9932a7ceba7713d274439236e8bf12ab0c5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cb56fe71dd86daf61eed2c8feacba9932a7ceba7713d274439236e8bf12ab0c5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-677710",
	                "Source": "/var/lib/docker/volumes/newest-cni-677710/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-677710",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-677710",
	                "name.minikube.sigs.k8s.io": "newest-cni-677710",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4b7d43a94a95950a5ef2f5f78dc3078064fff30d6d4d4298afec568580cd64d5",
	            "SandboxKey": "/var/run/docker/netns/4b7d43a94a95",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33460"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33461"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33464"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33462"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33463"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-677710": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "6a:31:80:f1:c4:46",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "de6d8a0af7735692f0f5ebcc3cc03e69c8662e213ca8fd268387cc9a0ddf92b8",
	                    "EndpointID": "3c9c0c989f0a40cc93779fe3ae24e68c764e48f33cbba65e891a316de9859f76",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-677710",
	                        "62fa20be8510"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
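
Note: every PortBinding in HostConfig above has an empty HostPort, so Docker picks ephemeral host ports at container start; the chosen ports then appear under NetworkSettings.Ports (22 -> 127.0.0.1:33460, 8443 -> 127.0.0.1:33463, and so on). The same mapping can be read back with plain docker, as a sketch:

	docker port newest-cni-677710 22
	# 127.0.0.1:33460, matching the inspect output above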
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-677710 -n newest-cni-677710
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-677710 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-677710 logs -n 25: (1.009285356s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-051715 image save kicbase/echo-server:functional-051715 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr                                                                               │ functional-051715            │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │ 27 Oct 25 19:04 UTC │
	│ image   │ functional-051715 image rm kicbase/echo-server:functional-051715 --alsologtostderr                                                                                                                                                            │ functional-051715            │ jenkins │ v1.37.0 │ 27 Oct 25 19:04 UTC │ 27 Oct 25 19:04 UTC │
	│ addons  │ enable dashboard -p embed-certs-919237 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-919237           │ jenkins │ v1.37.0 │ 27 Oct 25 19:41 UTC │ 27 Oct 25 19:41 UTC │
	│ start   │ -p embed-certs-919237 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-919237           │ jenkins │ v1.37.0 │ 27 Oct 25 19:41 UTC │ 27 Oct 25 19:41 UTC │
	│ image   │ old-k8s-version-468959 image list --format=json                                                                                                                                                                                               │ old-k8s-version-468959       │ jenkins │ v1.37.0 │ 27 Oct 25 19:41 UTC │ 27 Oct 25 19:41 UTC │
	│ pause   │ -p old-k8s-version-468959 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-468959       │ jenkins │ v1.37.0 │ 27 Oct 25 19:41 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-095885 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-095885            │ jenkins │ v1.37.0 │ 27 Oct 25 19:41 UTC │                     │
	│ stop    │ -p no-preload-095885 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-095885            │ jenkins │ v1.37.0 │ 27 Oct 25 19:41 UTC │ 27 Oct 25 19:41 UTC │
	│ delete  │ -p old-k8s-version-468959                                                                                                                                                                                                                     │ old-k8s-version-468959       │ jenkins │ v1.37.0 │ 27 Oct 25 19:41 UTC │ 27 Oct 25 19:41 UTC │
	│ delete  │ -p old-k8s-version-468959                                                                                                                                                                                                                     │ old-k8s-version-468959       │ jenkins │ v1.37.0 │ 27 Oct 25 19:41 UTC │ 27 Oct 25 19:41 UTC │
	│ start   │ -p default-k8s-diff-port-813397 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-813397 │ jenkins │ v1.37.0 │ 27 Oct 25 19:41 UTC │ 27 Oct 25 19:42 UTC │
	│ addons  │ enable dashboard -p no-preload-095885 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-095885            │ jenkins │ v1.37.0 │ 27 Oct 25 19:41 UTC │ 27 Oct 25 19:41 UTC │
	│ start   │ -p no-preload-095885 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-095885            │ jenkins │ v1.37.0 │ 27 Oct 25 19:41 UTC │ 27 Oct 25 19:42 UTC │
	│ image   │ embed-certs-919237 image list --format=json                                                                                                                                                                                                   │ embed-certs-919237           │ jenkins │ v1.37.0 │ 27 Oct 25 19:41 UTC │ 27 Oct 25 19:41 UTC │
	│ pause   │ -p embed-certs-919237 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-919237           │ jenkins │ v1.37.0 │ 27 Oct 25 19:41 UTC │                     │
	│ delete  │ -p embed-certs-919237                                                                                                                                                                                                                         │ embed-certs-919237           │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │ 27 Oct 25 19:42 UTC │
	│ delete  │ -p embed-certs-919237                                                                                                                                                                                                                         │ embed-certs-919237           │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │ 27 Oct 25 19:42 UTC │
	│ start   │ -p newest-cni-677710 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-677710            │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │ 27 Oct 25 19:42 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-813397 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-813397 │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-813397 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-813397 │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │ 27 Oct 25 19:42 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-813397 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-813397 │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │ 27 Oct 25 19:42 UTC │
	│ start   │ -p default-k8s-diff-port-813397 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-813397 │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │                     │
	│ image   │ no-preload-095885 image list --format=json                                                                                                                                                                                                    │ no-preload-095885            │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │ 27 Oct 25 19:42 UTC │
	│ pause   │ -p no-preload-095885 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-095885            │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-677710 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-677710            │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 19:42:33
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 19:42:33.033179  616341 out.go:360] Setting OutFile to fd 1 ...
	I1027 19:42:33.033469  616341 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:42:33.033479  616341 out.go:374] Setting ErrFile to fd 2...
	I1027 19:42:33.033483  616341 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:42:33.033702  616341 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-352833/.minikube/bin
	I1027 19:42:33.034175  616341 out.go:368] Setting JSON to false
	I1027 19:42:33.035543  616341 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":8702,"bootTime":1761585451,"procs":429,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1027 19:42:33.035662  616341 start.go:141] virtualization: kvm guest
	I1027 19:42:33.037878  616341 out.go:179] * [default-k8s-diff-port-813397] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1027 19:42:33.039545  616341 out.go:179]   - MINIKUBE_LOCATION=21801
	I1027 19:42:33.039579  616341 notify.go:220] Checking for updates...
	I1027 19:42:33.042285  616341 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 19:42:33.043786  616341 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21801-352833/kubeconfig
	I1027 19:42:33.045229  616341 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21801-352833/.minikube
	I1027 19:42:33.046625  616341 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1027 19:42:33.048033  616341 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 19:42:33.049907  616341 config.go:182] Loaded profile config "default-k8s-diff-port-813397": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:42:33.050564  616341 driver.go:421] Setting default libvirt URI to qemu:///system
	I1027 19:42:33.077018  616341 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1027 19:42:33.077130  616341 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 19:42:33.159675  616341 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-27 19:42:33.147250392 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 19:42:33.159849  616341 docker.go:318] overlay module found
	I1027 19:42:33.161916  616341 out.go:179] * Using the docker driver based on existing profile
	I1027 19:42:33.163412  616341 start.go:305] selected driver: docker
	I1027 19:42:33.163432  616341 start.go:925] validating driver "docker" against &{Name:default-k8s-diff-port-813397 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-813397 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 19:42:33.163533  616341 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 19:42:33.164108  616341 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 19:42:33.225747  616341 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-27 19:42:33.214711968 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 19:42:33.226122  616341 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 19:42:33.226193  616341 cni.go:84] Creating CNI manager for ""
	I1027 19:42:33.226284  616341 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 19:42:33.226391  616341 start.go:349] cluster config:
	{Name:default-k8s-diff-port-813397 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-813397 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 19:42:33.228420  616341 out.go:179] * Starting "default-k8s-diff-port-813397" primary control-plane node in "default-k8s-diff-port-813397" cluster
	I1027 19:42:33.229915  616341 cache.go:123] Beginning downloading kic base image for docker with crio
	I1027 19:42:33.231391  616341 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1027 19:42:33.232825  616341 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 19:42:33.232876  616341 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21801-352833/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1027 19:42:33.232888  616341 cache.go:58] Caching tarball of preloaded images
	I1027 19:42:33.232950  616341 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1027 19:42:33.232994  616341 preload.go:233] Found /home/jenkins/minikube-integration/21801-352833/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1027 19:42:33.233005  616341 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1027 19:42:33.233127  616341 profile.go:143] Saving config to /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/default-k8s-diff-port-813397/config.json ...
	I1027 19:42:33.255996  616341 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1027 19:42:33.256019  616341 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1027 19:42:33.256040  616341 cache.go:232] Successfully downloaded all kic artifacts
	I1027 19:42:33.256073  616341 start.go:360] acquireMachinesLock for default-k8s-diff-port-813397: {Name:mk62e4c852b8cd14691bbd6055f96686bc7465fc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 19:42:33.256154  616341 start.go:364] duration metric: took 59.384µs to acquireMachinesLock for "default-k8s-diff-port-813397"
	I1027 19:42:33.256179  616341 start.go:96] Skipping create...Using existing machine configuration
	I1027 19:42:33.256186  616341 fix.go:54] fixHost starting: 
	I1027 19:42:33.256432  616341 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-813397 --format={{.State.Status}}
	I1027 19:42:33.274831  616341 fix.go:112] recreateIfNeeded on default-k8s-diff-port-813397: state=Stopped err=<nil>
	W1027 19:42:33.274866  616341 fix.go:138] unexpected machine state, will restart: <nil>
	I1027 19:42:32.886497  611121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:42:33.385998  611121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:42:33.885977  611121 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:42:33.977454  611121 kubeadm.go:1113] duration metric: took 4.172132503s to wait for elevateKubeSystemPrivileges
	I1027 19:42:33.977594  611121 kubeadm.go:402] duration metric: took 15.032557693s to StartCluster
	I1027 19:42:33.977623  611121 settings.go:142] acquiring lock: {Name:mk8304c2106bf310642e0949fc0266ccb50f2f95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:42:33.977711  611121 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21801-352833/kubeconfig
	I1027 19:42:33.979544  611121 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-352833/kubeconfig: {Name:mk24cbe512a6907c874f3fb7a85390a8f9fd2b31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:42:33.979890  611121 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1027 19:42:33.979958  611121 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 19:42:33.980084  611121 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1027 19:42:33.980201  611121 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-677710"
	I1027 19:42:33.980221  611121 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-677710"
	I1027 19:42:33.980235  611121 config.go:182] Loaded profile config "newest-cni-677710": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:42:33.980266  611121 host.go:66] Checking if "newest-cni-677710" exists ...
	I1027 19:42:33.980313  611121 addons.go:69] Setting default-storageclass=true in profile "newest-cni-677710"
	I1027 19:42:33.980337  611121 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-677710"
	I1027 19:42:33.980715  611121 cli_runner.go:164] Run: docker container inspect newest-cni-677710 --format={{.State.Status}}
	I1027 19:42:33.980877  611121 cli_runner.go:164] Run: docker container inspect newest-cni-677710 --format={{.State.Status}}
	I1027 19:42:33.982020  611121 out.go:179] * Verifying Kubernetes components...
	I1027 19:42:33.984499  611121 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 19:42:34.011780  611121 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 19:42:34.013344  611121 addons.go:238] Setting addon default-storageclass=true in "newest-cni-677710"
	I1027 19:42:34.013400  611121 host.go:66] Checking if "newest-cni-677710" exists ...
	I1027 19:42:34.013402  611121 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 19:42:34.013420  611121 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1027 19:42:34.013487  611121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-677710
	I1027 19:42:34.014385  611121 cli_runner.go:164] Run: docker container inspect newest-cni-677710 --format={{.State.Status}}
	I1027 19:42:34.046765  611121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33460 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/newest-cni-677710/id_rsa Username:docker}
	I1027 19:42:34.054286  611121 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1027 19:42:34.054391  611121 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1027 19:42:34.054479  611121 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-677710
	I1027 19:42:34.086751  611121 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33460 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/newest-cni-677710/id_rsa Username:docker}
	I1027 19:42:34.105832  611121 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1027 19:42:34.155790  611121 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 19:42:34.180884  611121 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 19:42:34.212341  611121 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1027 19:42:34.340378  611121 start.go:976] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1027 19:42:34.341554  611121 api_server.go:52] waiting for apiserver process to appear ...
	I1027 19:42:34.341630  611121 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 19:42:34.552625  611121 api_server.go:72] duration metric: took 572.613293ms to wait for apiserver process to appear ...
	I1027 19:42:34.552654  611121 api_server.go:88] waiting for apiserver healthz status ...
	I1027 19:42:34.552677  611121 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1027 19:42:34.559986  611121 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1027 19:42:34.563286  611121 api_server.go:141] control plane version: v1.34.1
	I1027 19:42:34.563316  611121 api_server.go:131] duration metric: took 10.654828ms to wait for apiserver health ...
	I1027 19:42:34.563326  611121 system_pods.go:43] waiting for kube-system pods to appear ...
	I1027 19:42:34.564458  611121 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1027 19:42:34.566324  611121 addons.go:514] duration metric: took 586.225241ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1027 19:42:34.571035  611121 system_pods.go:59] 8 kube-system pods found
	I1027 19:42:34.571078  611121 system_pods.go:61] "coredns-66bc5c9577-rv72d" [e5a10932-4bc9-46fc-920e-ead5c8e9b60b] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1027 19:42:34.571092  611121 system_pods.go:61] "etcd-newest-cni-677710" [f4a7e071-86a0-40bb-a31a-8db8b73950cf] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1027 19:42:34.571105  611121 system_pods.go:61] "kindnet-w6m47" [e1b6e2a6-b271-4a01-8cfe-c10f73bd2f4d] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1027 19:42:34.571116  611121 system_pods.go:61] "kube-apiserver-newest-cni-677710" [ee755cc7-7067-47ba-b521-590f8a5bfca3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1027 19:42:34.571126  611121 system_pods.go:61] "kube-controller-manager-newest-cni-677710" [e2534c35-6228-4ef5-8d6e-8d48cbe0e9e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1027 19:42:34.571167  611121 system_pods.go:61] "kube-proxy-zg8ds" [89658cd8-0d1d-4a33-a913-add5cbd50df0] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1027 19:42:34.571176  611121 system_pods.go:61] "kube-scheduler-newest-cni-677710" [6324c705-2fd3-40db-b475-3c077531b1a5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1027 19:42:34.571184  611121 system_pods.go:61] "storage-provisioner" [5f120e58-40b3-4814-9025-3a7bc86197ab] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1027 19:42:34.571194  611121 system_pods.go:74] duration metric: took 7.859799ms to wait for pod list to return data ...
	I1027 19:42:34.571205  611121 default_sa.go:34] waiting for default service account to be created ...
	I1027 19:42:34.574444  611121 default_sa.go:45] found service account: "default"
	I1027 19:42:34.574484  611121 default_sa.go:55] duration metric: took 3.269705ms for default service account to be created ...
	I1027 19:42:34.574502  611121 kubeadm.go:586] duration metric: took 594.496871ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1027 19:42:34.574525  611121 node_conditions.go:102] verifying NodePressure condition ...
	I1027 19:42:34.577783  611121 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1027 19:42:34.577813  611121 node_conditions.go:123] node cpu capacity is 8
	I1027 19:42:34.577828  611121 node_conditions.go:105] duration metric: took 3.297767ms to run NodePressure ...
	I1027 19:42:34.577842  611121 start.go:241] waiting for startup goroutines ...
	I1027 19:42:34.844625  611121 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-677710" context rescaled to 1 replicas
	I1027 19:42:34.844653  611121 start.go:246] waiting for cluster config update ...
	I1027 19:42:34.844666  611121 start.go:255] writing updated cluster config ...
	I1027 19:42:34.845011  611121 ssh_runner.go:195] Run: rm -f paused
	I1027 19:42:34.903467  611121 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1027 19:42:34.905349  611121 out.go:179] * Done! kubectl is now configured to use "newest-cni-677710" cluster and "default" namespace by default
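	[editor's note] The tail of the run above polls https://192.168.94.2:8443/healthz until it returns 200 with body "ok" before listing system pods. A minimal Go sketch of that kind of polling loop follows; it is an illustration, not minikube's actual implementation, and the InsecureSkipVerify shortcut is an assumption made only to keep the example self-contained (the real client trusts the cluster CA).

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitHealthz polls url until it returns HTTP 200 with body "ok",
    // mirroring the "https://.../healthz returned 200: ok" lines above.
    func waitHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 2 * time.Second,
            // Skipping cert verification is a simplification for this sketch only.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK && string(body) == "ok" {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("healthz at %s not ready within %s", url, timeout)
    }

    func main() {
        if err := waitHealthz("https://192.168.94.2:8443/healthz", time.Minute); err != nil {
            fmt.Println(err)
        }
    }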
	
	
	==> CRI-O <==
	Oct 27 19:42:34 newest-cni-677710 crio[776]: time="2025-10-27T19:42:34.559360326Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 27 19:42:34 newest-cni-677710 crio[776]: time="2025-10-27T19:42:34.560532788Z" level=info msg="Ran pod sandbox 1127ab1c32df20aed68ac9a66cff39c8e09e903653d25b81218d94251b9f7ab5 with infra container: kube-system/kindnet-w6m47/POD" id=b6edac3c-fd9f-4028-a18e-684894a870a4 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 27 19:42:34 newest-cni-677710 crio[776]: time="2025-10-27T19:42:34.560546514Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-zg8ds/POD" id=082ddfca-1b2b-42dd-a34a-6bf681641c7a name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 27 19:42:34 newest-cni-677710 crio[776]: time="2025-10-27T19:42:34.560714025Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:42:34 newest-cni-677710 crio[776]: time="2025-10-27T19:42:34.561825092Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=8b854bd0-1fac-46b5-af4b-e788c927d1bb name=/runtime.v1.ImageService/ImageStatus
	Oct 27 19:42:34 newest-cni-677710 crio[776]: time="2025-10-27T19:42:34.564208868Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=082ddfca-1b2b-42dd-a34a-6bf681641c7a name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 27 19:42:34 newest-cni-677710 crio[776]: time="2025-10-27T19:42:34.565988322Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=6c19c75d-6c6e-4f31-923f-179f57af3a10 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 19:42:34 newest-cni-677710 crio[776]: time="2025-10-27T19:42:34.566520467Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 27 19:42:34 newest-cni-677710 crio[776]: time="2025-10-27T19:42:34.567710482Z" level=info msg="Ran pod sandbox 60b524437fb88c606bc54451fcf1fecd18b9daf0a3c65e1da3443874d4dff654 with infra container: kube-system/kube-proxy-zg8ds/POD" id=082ddfca-1b2b-42dd-a34a-6bf681641c7a name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 27 19:42:34 newest-cni-677710 crio[776]: time="2025-10-27T19:42:34.570930009Z" level=info msg="Creating container: kube-system/kindnet-w6m47/kindnet-cni" id=52ab7a63-7330-478f-a71f-f1d05bd2c84c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 19:42:34 newest-cni-677710 crio[776]: time="2025-10-27T19:42:34.57105722Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:42:34 newest-cni-677710 crio[776]: time="2025-10-27T19:42:34.572928186Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=d913adcf-7bd1-4b0b-a359-e2c3c7c48056 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 19:42:34 newest-cni-677710 crio[776]: time="2025-10-27T19:42:34.575534862Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=6c9cba37-57e5-4877-bb01-18ac700f1b2d name=/runtime.v1.ImageService/ImageStatus
	Oct 27 19:42:34 newest-cni-677710 crio[776]: time="2025-10-27T19:42:34.575881574Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:42:34 newest-cni-677710 crio[776]: time="2025-10-27T19:42:34.576681591Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:42:34 newest-cni-677710 crio[776]: time="2025-10-27T19:42:34.57937922Z" level=info msg="Creating container: kube-system/kube-proxy-zg8ds/kube-proxy" id=36247851-05d8-4819-9ff1-0fccf0c9adfe name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 19:42:34 newest-cni-677710 crio[776]: time="2025-10-27T19:42:34.579548229Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:42:34 newest-cni-677710 crio[776]: time="2025-10-27T19:42:34.585341043Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:42:34 newest-cni-677710 crio[776]: time="2025-10-27T19:42:34.586007548Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:42:34 newest-cni-677710 crio[776]: time="2025-10-27T19:42:34.597713759Z" level=info msg="Created container a63d140e368bd2540b11b1fbd31924bec72850f70f2141ff1764ef6771f6e323: kube-system/kindnet-w6m47/kindnet-cni" id=52ab7a63-7330-478f-a71f-f1d05bd2c84c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 19:42:34 newest-cni-677710 crio[776]: time="2025-10-27T19:42:34.599715454Z" level=info msg="Starting container: a63d140e368bd2540b11b1fbd31924bec72850f70f2141ff1764ef6771f6e323" id=7d0c4751-f8b2-4d33-837b-7492ae318bf1 name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 19:42:34 newest-cni-677710 crio[776]: time="2025-10-27T19:42:34.602283558Z" level=info msg="Started container" PID=1600 containerID=a63d140e368bd2540b11b1fbd31924bec72850f70f2141ff1764ef6771f6e323 description=kube-system/kindnet-w6m47/kindnet-cni id=7d0c4751-f8b2-4d33-837b-7492ae318bf1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1127ab1c32df20aed68ac9a66cff39c8e09e903653d25b81218d94251b9f7ab5
	Oct 27 19:42:34 newest-cni-677710 crio[776]: time="2025-10-27T19:42:34.617444487Z" level=info msg="Created container 0713ea12eb39d32b86d085a3580da6f31d3f0c868537d4962fbee222f7b1318f: kube-system/kube-proxy-zg8ds/kube-proxy" id=36247851-05d8-4819-9ff1-0fccf0c9adfe name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 19:42:34 newest-cni-677710 crio[776]: time="2025-10-27T19:42:34.618256093Z" level=info msg="Starting container: 0713ea12eb39d32b86d085a3580da6f31d3f0c868537d4962fbee222f7b1318f" id=0ac6c21f-e641-4656-b599-a4ecc4b8c5a2 name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 19:42:34 newest-cni-677710 crio[776]: time="2025-10-27T19:42:34.622855745Z" level=info msg="Started container" PID=1605 containerID=0713ea12eb39d32b86d085a3580da6f31d3f0c868537d4962fbee222f7b1318f description=kube-system/kube-proxy-zg8ds/kube-proxy id=0ac6c21f-e641-4656-b599-a4ecc4b8c5a2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=60b524437fb88c606bc54451fcf1fecd18b9daf0a3c65e1da3443874d4dff654
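	[editor's note] Each id=.../name=/runtime.v1.RuntimeService/... pair above identifies the CRI gRPC method CRI-O served: the pod sandbox comes up first (RunPodSandbox), then each container is created and started (CreateContainer, StartContainer). The same sequence can be exercised by hand on a node with crictl; the JSON file names below are placeholders, not files from this run:

    sudo crictl runp pod-config.json
    sudo crictl create <pod-sandbox-id> container-config.json pod-config.json
    sudo crictl start <container-id>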
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	0713ea12eb39d       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   1 second ago        Running             kube-proxy                0                   60b524437fb88       kube-proxy-zg8ds                            kube-system
	a63d140e368bd       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   1 second ago        Running             kindnet-cni               0                   1127ab1c32df2       kindnet-w6m47                               kube-system
	22dc71c054d4b       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   11 seconds ago      Running             etcd                      0                   97495f2b76081       etcd-newest-cni-677710                      kube-system
	5498f7504e0e9       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   11 seconds ago      Running             kube-controller-manager   0                   b03e945135bb4       kube-controller-manager-newest-cni-677710   kube-system
	23a2a54bea863       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   11 seconds ago      Running             kube-scheduler            0                   ad6f1fdef70fa       kube-scheduler-newest-cni-677710            kube-system
	4e3179586698b       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   11 seconds ago      Running             kube-apiserver            0                   6538f6bcf2cda       kube-apiserver-newest-cni-677710            kube-system
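	[editor's note] This table is the node's CRI view of the containers. Assuming the profile is still running, the same view can be reproduced interactively with something like:

    minikube -p newest-cni-677710 ssh -- sudo crictl ps -a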
	
	
	==> describe nodes <==
	Name:               newest-cni-677710
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-677710
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a09c31caecd3126cc8337ca95e52380a56f5a0f
	                    minikube.k8s.io/name=newest-cni-677710
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_27T19_42_29_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Oct 2025 19:42:26 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-677710
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Oct 2025 19:42:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Oct 2025 19:42:28 +0000   Mon, 27 Oct 2025 19:42:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Oct 2025 19:42:28 +0000   Mon, 27 Oct 2025 19:42:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Oct 2025 19:42:28 +0000   Mon, 27 Oct 2025 19:42:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Mon, 27 Oct 2025 19:42:28 +0000   Mon, 27 Oct 2025 19:42:24 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    newest-cni-677710
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                0e5b8dde-ff0d-4017-8bb0-5ec4905459bd
	  Boot ID:                    811bd29c-e64e-4acc-9427-bab1f7caed93
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-677710                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         8s
	  kube-system                 kindnet-w6m47                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2s
	  kube-system                 kube-apiserver-newest-cni-677710             250m (3%)     0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-controller-manager-newest-cni-677710    200m (2%)     0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-proxy-zg8ds                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  kube-system                 kube-scheduler-newest-cni-677710             100m (1%)     0 (0%)      0 (0%)           0 (0%)         8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 1s    kube-proxy       
	  Normal  Starting                 8s    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8s    kubelet          Node newest-cni-677710 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8s    kubelet          Node newest-cni-677710 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8s    kubelet          Node newest-cni-677710 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3s    node-controller  Node newest-cni-677710 event: Registered Node newest-cni-677710 in Controller
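	[editor's note] The Pending coredns and storage-provisioner pods reported earlier are blocked by the node.kubernetes.io/not-ready:NoSchedule taint listed under Taints, and the Ready=False condition explains why: no CNI configuration exists yet. Once kindnet writes its config the node flips Ready and the taint is removed. One generic way to wait for that transition from a script (not something this suite runs) is:

    kubectl --context newest-cni-677710 wait --for=condition=Ready node/newest-cni-677710 --timeout=120s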
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5a 23 52 43 9a ba 08 06
	[  +0.000398] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 12 50 95 0e df 53 08 06
	[Oct27 18:59] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +1.017295] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +1.023893] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +1.023882] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +1.023891] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +1.023851] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +2.047849] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +4.031592] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +8.319143] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[ +16.382183] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[Oct27 19:00] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	
	
	==> etcd [22dc71c054d4b5c9f42612fe16cfc3fe21a81d9f9f5c25ccbda2a4cb42827296] <==
	{"level":"warn","ts":"2025-10-27T19:42:25.371935Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:42:25.388578Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33940","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:42:25.395796Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33962","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:42:25.403758Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:42:25.411184Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:42:25.418391Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34004","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:42:25.430647Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:42:25.437800Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:42:25.444189Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:42:25.453237Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:42:25.459306Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:42:25.466488Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:42:25.472569Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:42:25.479679Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:42:25.486885Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:42:25.494283Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:42:25.501554Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:42:25.507992Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:42:25.515458Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:42:25.522673Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:42:25.529306Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:42:25.545050Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:42:25.553760Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:42:25.561528Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34338","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:42:25.617195Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34360","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 19:42:36 up  2:25,  0 user,  load average: 4.23, 3.45, 2.24
	Linux newest-cni-677710 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [a63d140e368bd2540b11b1fbd31924bec72850f70f2141ff1764ef6771f6e323] <==
	I1027 19:42:34.763565       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1027 19:42:34.763797       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1027 19:42:34.763927       1 main.go:148] setting mtu 1500 for CNI 
	I1027 19:42:34.763943       1 main.go:178] kindnetd IP family: "ipv4"
	I1027 19:42:34.763962       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-27T19:42:35Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1027 19:42:35.061100       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1027 19:42:35.061147       1 controller.go:381] "Waiting for informer caches to sync"
	I1027 19:42:35.061162       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1027 19:42:35.061331       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1027 19:42:35.362213       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1027 19:42:35.362258       1 metrics.go:72] Registering metrics
	I1027 19:42:35.362366       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [4e3179586698b548177ed319808b4f2bcf8f075367fee27ea2ffa463f8336609] <==
	I1027 19:42:26.104523       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1027 19:42:26.104601       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1027 19:42:26.105192       1 controller.go:667] quota admission added evaluator for: namespaces
	I1027 19:42:26.107480       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1027 19:42:26.112534       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1027 19:42:26.114853       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 19:42:26.114936       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1027 19:42:26.141102       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1027 19:42:27.009168       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1027 19:42:27.013094       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1027 19:42:27.013115       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1027 19:42:27.572448       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1027 19:42:27.622959       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1027 19:42:27.713540       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1027 19:42:27.720730       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1027 19:42:27.721908       1 controller.go:667] quota admission added evaluator for: endpoints
	I1027 19:42:27.727635       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1027 19:42:28.203998       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1027 19:42:28.965721       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1027 19:42:28.975870       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1027 19:42:28.984750       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1027 19:42:34.109457       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1027 19:42:34.162861       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 19:42:34.169655       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 19:42:34.209053       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [5498f7504e0e9e7abce3ac60861b8453d91353e44ae96ba20a5037e6086c7bd5] <==
	I1027 19:42:33.204201       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1027 19:42:33.204516       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1027 19:42:33.204552       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1027 19:42:33.204566       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1027 19:42:33.204734       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1027 19:42:33.204864       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1027 19:42:33.204890       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1027 19:42:33.205844       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1027 19:42:33.206323       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1027 19:42:33.206402       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1027 19:42:33.206722       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1027 19:42:33.206931       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1027 19:42:33.209420       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 19:42:33.214574       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 19:42:33.226817       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1027 19:42:33.226913       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1027 19:42:33.226954       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1027 19:42:33.226961       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1027 19:42:33.226966       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1027 19:42:33.229061       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1027 19:42:33.229232       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1027 19:42:33.229343       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-677710"
	I1027 19:42:33.229405       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1027 19:42:33.234667       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 19:42:33.235395       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="newest-cni-677710" podCIDRs=["10.42.0.0/24"]
	
	
	==> kube-proxy [0713ea12eb39d32b86d085a3580da6f31d3f0c868537d4962fbee222f7b1318f] <==
	I1027 19:42:34.667714       1 server_linux.go:53] "Using iptables proxy"
	I1027 19:42:34.743100       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1027 19:42:34.845336       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1027 19:42:34.845393       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1027 19:42:34.845512       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1027 19:42:34.870810       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1027 19:42:34.870876       1 server_linux.go:132] "Using iptables Proxier"
	I1027 19:42:34.876278       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1027 19:42:34.876728       1 server.go:527] "Version info" version="v1.34.1"
	I1027 19:42:34.876762       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 19:42:34.878446       1 config.go:200] "Starting service config controller"
	I1027 19:42:34.878470       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1027 19:42:34.878531       1 config.go:106] "Starting endpoint slice config controller"
	I1027 19:42:34.878532       1 config.go:403] "Starting serviceCIDR config controller"
	I1027 19:42:34.878549       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1027 19:42:34.878562       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1027 19:42:34.878838       1 config.go:309] "Starting node config controller"
	I1027 19:42:34.878865       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1027 19:42:34.878872       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1027 19:42:34.978737       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1027 19:42:34.978757       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1027 19:42:34.978987       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [23a2a54bea863093a527b9f986dcc1638094591fc8a01c33ff048d84cc79a9bc] <==
	E1027 19:42:26.062042       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1027 19:42:26.062057       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1027 19:42:26.062173       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1027 19:42:26.062180       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1027 19:42:26.062215       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1027 19:42:26.062313       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1027 19:42:26.062317       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1027 19:42:26.062341       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1027 19:42:26.062403       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1027 19:42:26.062436       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1027 19:42:26.062346       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1027 19:42:26.864364       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1027 19:42:26.889694       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1027 19:42:26.896947       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1027 19:42:26.987804       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1027 19:42:27.038872       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1027 19:42:27.101340       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1027 19:42:27.118663       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1027 19:42:27.147892       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1027 19:42:27.151976       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1027 19:42:27.154009       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1027 19:42:27.220633       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1027 19:42:27.270209       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1027 19:42:27.276371       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	I1027 19:42:29.857329       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 27 19:42:28 newest-cni-677710 kubelet[1321]: I1027 19:42:28.982166    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/c0c165ea08c83cf0eb8a174d04e64d30-etcd-data\") pod \"etcd-newest-cni-677710\" (UID: \"c0c165ea08c83cf0eb8a174d04e64d30\") " pod="kube-system/etcd-newest-cni-677710"
	Oct 27 19:42:28 newest-cni-677710 kubelet[1321]: I1027 19:42:28.982189    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8ff6a90e264e9bdcd11d719e30c4b094-ca-certs\") pod \"kube-apiserver-newest-cni-677710\" (UID: \"8ff6a90e264e9bdcd11d719e30c4b094\") " pod="kube-system/kube-apiserver-newest-cni-677710"
	Oct 27 19:42:28 newest-cni-677710 kubelet[1321]: I1027 19:42:28.982209    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8ff6a90e264e9bdcd11d719e30c4b094-etc-ca-certificates\") pod \"kube-apiserver-newest-cni-677710\" (UID: \"8ff6a90e264e9bdcd11d719e30c4b094\") " pod="kube-system/kube-apiserver-newest-cni-677710"
	Oct 27 19:42:29 newest-cni-677710 kubelet[1321]: I1027 19:42:29.770394    1321 apiserver.go:52] "Watching apiserver"
	Oct 27 19:42:29 newest-cni-677710 kubelet[1321]: I1027 19:42:29.780949    1321 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 27 19:42:29 newest-cni-677710 kubelet[1321]: I1027 19:42:29.815573    1321 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-677710"
	Oct 27 19:42:29 newest-cni-677710 kubelet[1321]: I1027 19:42:29.816074    1321 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-677710"
	Oct 27 19:42:29 newest-cni-677710 kubelet[1321]: E1027 19:42:29.823585    1321 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-677710\" already exists" pod="kube-system/kube-apiserver-newest-cni-677710"
	Oct 27 19:42:29 newest-cni-677710 kubelet[1321]: E1027 19:42:29.824332    1321 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-677710\" already exists" pod="kube-system/etcd-newest-cni-677710"
	Oct 27 19:42:29 newest-cni-677710 kubelet[1321]: I1027 19:42:29.841213    1321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-677710" podStartSLOduration=1.8411942529999998 podStartE2EDuration="1.841194253s" podCreationTimestamp="2025-10-27 19:42:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 19:42:29.841107502 +0000 UTC m=+1.135127623" watchObservedRunningTime="2025-10-27 19:42:29.841194253 +0000 UTC m=+1.135214372"
	Oct 27 19:42:29 newest-cni-677710 kubelet[1321]: I1027 19:42:29.861828    1321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-677710" podStartSLOduration=1.8618099190000001 podStartE2EDuration="1.861809919s" podCreationTimestamp="2025-10-27 19:42:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 19:42:29.851940365 +0000 UTC m=+1.145960483" watchObservedRunningTime="2025-10-27 19:42:29.861809919 +0000 UTC m=+1.155830038"
	Oct 27 19:42:29 newest-cni-677710 kubelet[1321]: I1027 19:42:29.861978    1321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-677710" podStartSLOduration=1.861969227 podStartE2EDuration="1.861969227s" podCreationTimestamp="2025-10-27 19:42:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 19:42:29.86166813 +0000 UTC m=+1.155688249" watchObservedRunningTime="2025-10-27 19:42:29.861969227 +0000 UTC m=+1.155989343"
	Oct 27 19:42:29 newest-cni-677710 kubelet[1321]: I1027 19:42:29.875212    1321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-677710" podStartSLOduration=1.875190723 podStartE2EDuration="1.875190723s" podCreationTimestamp="2025-10-27 19:42:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 19:42:29.875032342 +0000 UTC m=+1.169052462" watchObservedRunningTime="2025-10-27 19:42:29.875190723 +0000 UTC m=+1.169210837"
	Oct 27 19:42:33 newest-cni-677710 kubelet[1321]: I1027 19:42:33.324006    1321 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 27 19:42:33 newest-cni-677710 kubelet[1321]: I1027 19:42:33.324760    1321 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 27 19:42:34 newest-cni-677710 kubelet[1321]: I1027 19:42:34.316049    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/e1b6e2a6-b271-4a01-8cfe-c10f73bd2f4d-cni-cfg\") pod \"kindnet-w6m47\" (UID: \"e1b6e2a6-b271-4a01-8cfe-c10f73bd2f4d\") " pod="kube-system/kindnet-w6m47"
	Oct 27 19:42:34 newest-cni-677710 kubelet[1321]: I1027 19:42:34.316094    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsh9f\" (UniqueName: \"kubernetes.io/projected/e1b6e2a6-b271-4a01-8cfe-c10f73bd2f4d-kube-api-access-zsh9f\") pod \"kindnet-w6m47\" (UID: \"e1b6e2a6-b271-4a01-8cfe-c10f73bd2f4d\") " pod="kube-system/kindnet-w6m47"
	Oct 27 19:42:34 newest-cni-677710 kubelet[1321]: I1027 19:42:34.316112    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/89658cd8-0d1d-4a33-a913-add5cbd50df0-kube-proxy\") pod \"kube-proxy-zg8ds\" (UID: \"89658cd8-0d1d-4a33-a913-add5cbd50df0\") " pod="kube-system/kube-proxy-zg8ds"
	Oct 27 19:42:34 newest-cni-677710 kubelet[1321]: I1027 19:42:34.316127    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6t6x7\" (UniqueName: \"kubernetes.io/projected/89658cd8-0d1d-4a33-a913-add5cbd50df0-kube-api-access-6t6x7\") pod \"kube-proxy-zg8ds\" (UID: \"89658cd8-0d1d-4a33-a913-add5cbd50df0\") " pod="kube-system/kube-proxy-zg8ds"
	Oct 27 19:42:34 newest-cni-677710 kubelet[1321]: I1027 19:42:34.316175    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e1b6e2a6-b271-4a01-8cfe-c10f73bd2f4d-xtables-lock\") pod \"kindnet-w6m47\" (UID: \"e1b6e2a6-b271-4a01-8cfe-c10f73bd2f4d\") " pod="kube-system/kindnet-w6m47"
	Oct 27 19:42:34 newest-cni-677710 kubelet[1321]: I1027 19:42:34.316268    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/89658cd8-0d1d-4a33-a913-add5cbd50df0-xtables-lock\") pod \"kube-proxy-zg8ds\" (UID: \"89658cd8-0d1d-4a33-a913-add5cbd50df0\") " pod="kube-system/kube-proxy-zg8ds"
	Oct 27 19:42:34 newest-cni-677710 kubelet[1321]: I1027 19:42:34.316334    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e1b6e2a6-b271-4a01-8cfe-c10f73bd2f4d-lib-modules\") pod \"kindnet-w6m47\" (UID: \"e1b6e2a6-b271-4a01-8cfe-c10f73bd2f4d\") " pod="kube-system/kindnet-w6m47"
	Oct 27 19:42:34 newest-cni-677710 kubelet[1321]: I1027 19:42:34.316401    1321 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/89658cd8-0d1d-4a33-a913-add5cbd50df0-lib-modules\") pod \"kube-proxy-zg8ds\" (UID: \"89658cd8-0d1d-4a33-a913-add5cbd50df0\") " pod="kube-system/kube-proxy-zg8ds"
	Oct 27 19:42:34 newest-cni-677710 kubelet[1321]: I1027 19:42:34.854659    1321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-w6m47" podStartSLOduration=0.854633859 podStartE2EDuration="854.633859ms" podCreationTimestamp="2025-10-27 19:42:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 19:42:34.85463015 +0000 UTC m=+6.148650270" watchObservedRunningTime="2025-10-27 19:42:34.854633859 +0000 UTC m=+6.148654031"
	Oct 27 19:42:34 newest-cni-677710 kubelet[1321]: I1027 19:42:34.854833    1321 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zg8ds" podStartSLOduration=0.854822836 podStartE2EDuration="854.822836ms" podCreationTimestamp="2025-10-27 19:42:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 19:42:34.841661998 +0000 UTC m=+6.135682117" watchObservedRunningTime="2025-10-27 19:42:34.854822836 +0000 UTC m=+6.148842955"
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-677710 -n newest-cni-677710
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-677710 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-rv72d storage-provisioner
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-677710 describe pod coredns-66bc5c9577-rv72d storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-677710 describe pod coredns-66bc5c9577-rv72d storage-provisioner: exit status 1 (77.82477ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-rv72d" not found
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:287: kubectl --context newest-cni-677710 describe pod coredns-66bc5c9577-rv72d storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.22s)
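The NotFound errors above mean the two pods returned by the field-selector query were deleted or replaced before the describe call ran. A minimal race-tolerant re-check, sketched under the assumption that the newest-cni-677710 context still exists (these commands are illustrative and not part of the test suite):

	# Re-list non-running pods; an empty result confirms the earlier entries were transient.
	kubectl --context newest-cni-677710 get po -A --field-selector=status.phase!=Running -o=jsonpath='{.items[*].metadata.name}'
	# --ignore-not-found makes the per-pod lookup exit 0 instead of erroring on deleted pods.
	kubectl --context newest-cni-677710 -n kube-system get po coredns-66bc5c9577-rv72d storage-provisioner --ignore-not-found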

x
+
TestStartStop/group/newest-cni/serial/Pause (6.34s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-677710 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p newest-cni-677710 --alsologtostderr -v=1: exit status 80 (1.828024513s)

-- stdout --
	* Pausing node newest-cni-677710 ... 
	
	

-- /stdout --
** stderr ** 
	I1027 19:42:59.610279  627432 out.go:360] Setting OutFile to fd 1 ...
	I1027 19:42:59.610453  627432 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:42:59.610468  627432 out.go:374] Setting ErrFile to fd 2...
	I1027 19:42:59.610474  627432 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:42:59.610732  627432 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-352833/.minikube/bin
	I1027 19:42:59.611075  627432 out.go:368] Setting JSON to false
	I1027 19:42:59.611193  627432 mustload.go:65] Loading cluster: newest-cni-677710
	I1027 19:42:59.611779  627432 config.go:182] Loaded profile config "newest-cni-677710": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:42:59.612435  627432 cli_runner.go:164] Run: docker container inspect newest-cni-677710 --format={{.State.Status}}
	I1027 19:42:59.633578  627432 host.go:66] Checking if "newest-cni-677710" exists ...
	I1027 19:42:59.634003  627432 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 19:42:59.703337  627432 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:88 SystemTime:2025-10-27 19:42:59.691603962 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 19:42:59.704187  627432 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-677710 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1027 19:42:59.706080  627432 out.go:179] * Pausing node newest-cni-677710 ... 
	I1027 19:42:59.707466  627432 host.go:66] Checking if "newest-cni-677710" exists ...
	I1027 19:42:59.707766  627432 ssh_runner.go:195] Run: systemctl --version
	I1027 19:42:59.707811  627432 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-677710
	I1027 19:42:59.730078  627432 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33470 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/newest-cni-677710/id_rsa Username:docker}
	I1027 19:42:59.837466  627432 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 19:42:59.853265  627432 pause.go:52] kubelet running: true
	I1027 19:42:59.853364  627432 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1027 19:42:59.999395  627432 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1027 19:42:59.999510  627432 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1027 19:43:00.078490  627432 cri.go:89] found id: "147a5a9fd3a0b246651bfeafac4f2aa71fa51126a3a0b5650e6b7fee3004479f"
	I1027 19:43:00.078517  627432 cri.go:89] found id: "f2220f3ce73097ad9f9c01a9488587842fecbdfe180097b0e67b5c96c7b7cb68"
	I1027 19:43:00.078520  627432 cri.go:89] found id: "44a615e284a4d0c1a9cd591789628eef5abb12b0322cda33a9c30c087dbfcc6c"
	I1027 19:43:00.078523  627432 cri.go:89] found id: "41f7a785712cd25ed1b323d20fb0cdc81e6e1275a58915470cc7154bf52a2176"
	I1027 19:43:00.078526  627432 cri.go:89] found id: "73167847edee577234af246f7849876030d523eb4d523a4c2b5bbd0694b79ad5"
	I1027 19:43:00.078530  627432 cri.go:89] found id: "cece14cf0a526d107de8be5cc2a837da1df540d883b98e8589946416af07067b"
	I1027 19:43:00.078532  627432 cri.go:89] found id: ""
	I1027 19:43:00.078570  627432 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 19:43:00.091303  627432 retry.go:31] will retry after 212.17523ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T19:43:00Z" level=error msg="open /run/runc: no such file or directory"
	I1027 19:43:00.303731  627432 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 19:43:00.319787  627432 pause.go:52] kubelet running: false
	I1027 19:43:00.319884  627432 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1027 19:43:00.470370  627432 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1027 19:43:00.470463  627432 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1027 19:43:00.551305  627432 cri.go:89] found id: "147a5a9fd3a0b246651bfeafac4f2aa71fa51126a3a0b5650e6b7fee3004479f"
	I1027 19:43:00.551330  627432 cri.go:89] found id: "f2220f3ce73097ad9f9c01a9488587842fecbdfe180097b0e67b5c96c7b7cb68"
	I1027 19:43:00.551334  627432 cri.go:89] found id: "44a615e284a4d0c1a9cd591789628eef5abb12b0322cda33a9c30c087dbfcc6c"
	I1027 19:43:00.551337  627432 cri.go:89] found id: "41f7a785712cd25ed1b323d20fb0cdc81e6e1275a58915470cc7154bf52a2176"
	I1027 19:43:00.551340  627432 cri.go:89] found id: "73167847edee577234af246f7849876030d523eb4d523a4c2b5bbd0694b79ad5"
	I1027 19:43:00.551345  627432 cri.go:89] found id: "cece14cf0a526d107de8be5cc2a837da1df540d883b98e8589946416af07067b"
	I1027 19:43:00.551348  627432 cri.go:89] found id: ""
	I1027 19:43:00.551406  627432 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 19:43:00.565104  627432 retry.go:31] will retry after 511.736316ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T19:43:00Z" level=error msg="open /run/runc: no such file or directory"
	I1027 19:43:01.077927  627432 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 19:43:01.093977  627432 pause.go:52] kubelet running: false
	I1027 19:43:01.094037  627432 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1027 19:43:01.256409  627432 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1027 19:43:01.256501  627432 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1027 19:43:01.333810  627432 cri.go:89] found id: "147a5a9fd3a0b246651bfeafac4f2aa71fa51126a3a0b5650e6b7fee3004479f"
	I1027 19:43:01.333853  627432 cri.go:89] found id: "f2220f3ce73097ad9f9c01a9488587842fecbdfe180097b0e67b5c96c7b7cb68"
	I1027 19:43:01.333859  627432 cri.go:89] found id: "44a615e284a4d0c1a9cd591789628eef5abb12b0322cda33a9c30c087dbfcc6c"
	I1027 19:43:01.333862  627432 cri.go:89] found id: "41f7a785712cd25ed1b323d20fb0cdc81e6e1275a58915470cc7154bf52a2176"
	I1027 19:43:01.333868  627432 cri.go:89] found id: "73167847edee577234af246f7849876030d523eb4d523a4c2b5bbd0694b79ad5"
	I1027 19:43:01.333872  627432 cri.go:89] found id: "cece14cf0a526d107de8be5cc2a837da1df540d883b98e8589946416af07067b"
	I1027 19:43:01.333876  627432 cri.go:89] found id: ""
	I1027 19:43:01.333923  627432 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 19:43:01.350196  627432 out.go:203] 
	W1027 19:43:01.351977  627432 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T19:43:01Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T19:43:01Z" level=error msg="open /run/runc: no such file or directory"
	
	W1027 19:43:01.352027  627432 out.go:285] * 
	* 
	W1027 19:43:01.357451  627432 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1027 19:43:01.359179  627432 out.go:203] 

** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p newest-cni-677710 --alsologtostderr -v=1 failed: exit status 80
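The exit-status-80 failure bottoms out in the retried `sudo runc list -f json` calls in the trace above: runc reads container state from /run/runc, that directory is absent on this CRI-O node, and every retry fails identically until minikube gives up with GUEST_PAUSE. A small diagnostic sketch, assuming the node is still reachable (illustrative only, not run by the suite):

	# Check whether the state directory runc expects actually exists on the node,
	# and confirm CRI-O itself still sees the running containers.
	minikube -p newest-cni-677710 ssh -- "ls -ld /run/runc; sudo crictl ps --state Running -q"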
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-677710
helpers_test.go:243: (dbg) docker inspect newest-cni-677710:

-- stdout --
	[
	    {
	        "Id": "62fa20be8510c822a3af68e19caac8c39efa6f456f35c096ab55d9be979a15a7",
	        "Created": "2025-10-27T19:42:13.174761527Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 623139,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-27T19:42:45.840955816Z",
	            "FinishedAt": "2025-10-27T19:42:44.813791073Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/62fa20be8510c822a3af68e19caac8c39efa6f456f35c096ab55d9be979a15a7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/62fa20be8510c822a3af68e19caac8c39efa6f456f35c096ab55d9be979a15a7/hostname",
	        "HostsPath": "/var/lib/docker/containers/62fa20be8510c822a3af68e19caac8c39efa6f456f35c096ab55d9be979a15a7/hosts",
	        "LogPath": "/var/lib/docker/containers/62fa20be8510c822a3af68e19caac8c39efa6f456f35c096ab55d9be979a15a7/62fa20be8510c822a3af68e19caac8c39efa6f456f35c096ab55d9be979a15a7-json.log",
	        "Name": "/newest-cni-677710",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-677710:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-677710",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "62fa20be8510c822a3af68e19caac8c39efa6f456f35c096ab55d9be979a15a7",
	                "LowerDir": "/var/lib/docker/overlay2/cb56fe71dd86daf61eed2c8feacba9932a7ceba7713d274439236e8bf12ab0c5-init/diff:/var/lib/docker/overlay2/71b61ec94610a35f2d924dec358052d4c154c36b3fe219802f60246ca2dc7f45/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cb56fe71dd86daf61eed2c8feacba9932a7ceba7713d274439236e8bf12ab0c5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cb56fe71dd86daf61eed2c8feacba9932a7ceba7713d274439236e8bf12ab0c5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cb56fe71dd86daf61eed2c8feacba9932a7ceba7713d274439236e8bf12ab0c5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-677710",
	                "Source": "/var/lib/docker/volumes/newest-cni-677710/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-677710",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-677710",
	                "name.minikube.sigs.k8s.io": "newest-cni-677710",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "95885c2842549bdbef9832614fc4a83000820e3bdc2178f50d16d47481af6228",
	            "SandboxKey": "/var/run/docker/netns/95885c284254",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33470"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33471"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33474"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33472"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33473"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-677710": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "d6:cc:97:e9:91:f5",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "de6d8a0af7735692f0f5ebcc3cc03e69c8662e213ca8fd268387cc9a0ddf92b8",
	                    "EndpointID": "67d640bb52a67e909b2d2906009f7a8a0cd1507c951892926b68c9e90e85f1fa",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-677710",
	                        "62fa20be8510"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
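The Ports block in the inspect output above is what the pause path reads to find the node's SSH endpoint; the same Go template appears in the cli_runner call in the stderr trace. Run by hand, assuming the container still exists, it resolves 22/tcp to host port 33470:

	# Extract the host port mapped to the container's SSH port (22/tcp).
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' newest-cni-677710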
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-677710 -n newest-cni-677710
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-677710 -n newest-cni-677710: exit status 2 (372.43555ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
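Exit status 2 here signals a degraded component even though the host itself prints Running; the --format flag accepts a Go template over minikube's status fields, so a broader query (hypothetical, not part of the harness) would show which component is down:

	# Host/Kubelet/APIServer are standard status fields; on a half-paused node this
	# typically prints something like: Running Stopped Paused
	out/minikube-linux-amd64 status -p newest-cni-677710 --format='{{.Host}} {{.Kubelet}} {{.APIServer}}'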
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-677710 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-677710 logs -n 25: (1.223243354s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p default-k8s-diff-port-813397 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-813397 │ jenkins │ v1.37.0 │ 27 Oct 25 19:41 UTC │ 27 Oct 25 19:42 UTC │
	│ addons  │ enable dashboard -p no-preload-095885 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-095885            │ jenkins │ v1.37.0 │ 27 Oct 25 19:41 UTC │ 27 Oct 25 19:41 UTC │
	│ start   │ -p no-preload-095885 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-095885            │ jenkins │ v1.37.0 │ 27 Oct 25 19:41 UTC │ 27 Oct 25 19:42 UTC │
	│ image   │ embed-certs-919237 image list --format=json                                                                                                                                                                                                   │ embed-certs-919237           │ jenkins │ v1.37.0 │ 27 Oct 25 19:41 UTC │ 27 Oct 25 19:41 UTC │
	│ pause   │ -p embed-certs-919237 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-919237           │ jenkins │ v1.37.0 │ 27 Oct 25 19:41 UTC │                     │
	│ delete  │ -p embed-certs-919237                                                                                                                                                                                                                         │ embed-certs-919237           │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │ 27 Oct 25 19:42 UTC │
	│ delete  │ -p embed-certs-919237                                                                                                                                                                                                                         │ embed-certs-919237           │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │ 27 Oct 25 19:42 UTC │
	│ start   │ -p newest-cni-677710 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-677710            │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │ 27 Oct 25 19:42 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-813397 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-813397 │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-813397 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-813397 │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │ 27 Oct 25 19:42 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-813397 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-813397 │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │ 27 Oct 25 19:42 UTC │
	│ start   │ -p default-k8s-diff-port-813397 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-813397 │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │                     │
	│ image   │ no-preload-095885 image list --format=json                                                                                                                                                                                                    │ no-preload-095885            │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │ 27 Oct 25 19:42 UTC │
	│ pause   │ -p no-preload-095885 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-095885            │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-677710 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-677710            │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │                     │
	│ stop    │ -p newest-cni-677710 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-677710            │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │ 27 Oct 25 19:42 UTC │
	│ delete  │ -p no-preload-095885                                                                                                                                                                                                                          │ no-preload-095885            │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │ 27 Oct 25 19:42 UTC │
	│ delete  │ -p no-preload-095885                                                                                                                                                                                                                          │ no-preload-095885            │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │ 27 Oct 25 19:42 UTC │
	│ start   │ -p auto-387383 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-387383                  │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │                     │
	│ addons  │ enable dashboard -p newest-cni-677710 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-677710            │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │ 27 Oct 25 19:42 UTC │
	│ start   │ -p newest-cni-677710 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-677710            │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │ 27 Oct 25 19:42 UTC │
	│ start   │ -p kubernetes-upgrade-360986 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-360986    │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │                     │
	│ image   │ newest-cni-677710 image list --format=json                                                                                                                                                                                                    │ newest-cni-677710            │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │ 27 Oct 25 19:42 UTC │
	│ start   │ -p kubernetes-upgrade-360986 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-360986    │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │                     │
	│ pause   │ -p newest-cni-677710 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-677710            │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 19:42:59
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 19:42:59.326467  627244 out.go:360] Setting OutFile to fd 1 ...
	I1027 19:42:59.326737  627244 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:42:59.326748  627244 out.go:374] Setting ErrFile to fd 2...
	I1027 19:42:59.326752  627244 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:42:59.326933  627244 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-352833/.minikube/bin
	I1027 19:42:59.327391  627244 out.go:368] Setting JSON to false
	I1027 19:42:59.328804  627244 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":8728,"bootTime":1761585451,"procs":443,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1027 19:42:59.328889  627244 start.go:141] virtualization: kvm guest
	I1027 19:42:59.331041  627244 out.go:179] * [kubernetes-upgrade-360986] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1027 19:42:59.332118  627244 out.go:179]   - MINIKUBE_LOCATION=21801
	I1027 19:42:59.332110  627244 notify.go:220] Checking for updates...
	I1027 19:42:59.334362  627244 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 19:42:59.335707  627244 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21801-352833/kubeconfig
	I1027 19:42:59.336856  627244 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21801-352833/.minikube
	I1027 19:42:59.337909  627244 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1027 19:42:59.338956  627244 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 19:42:59.340627  627244 config.go:182] Loaded profile config "kubernetes-upgrade-360986": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:42:59.341384  627244 driver.go:421] Setting default libvirt URI to qemu:///system
	I1027 19:42:59.372269  627244 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1027 19:42:59.372461  627244 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 19:42:59.450648  627244 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:87 SystemTime:2025-10-27 19:42:59.440108546 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 19:42:59.450765  627244 docker.go:318] overlay module found
	I1027 19:42:59.452587  627244 out.go:179] * Using the docker driver based on existing profile
	I1027 19:42:59.453958  627244 start.go:305] selected driver: docker
	I1027 19:42:59.453983  627244 start.go:925] validating driver "docker" against &{Name:kubernetes-upgrade-360986 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-360986 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 19:42:59.454091  627244 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 19:42:59.454814  627244 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 19:42:59.521437  627244 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:87 SystemTime:2025-10-27 19:42:59.508704904 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 19:42:59.521931  627244 cni.go:84] Creating CNI manager for ""
	I1027 19:42:59.522017  627244 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 19:42:59.522067  627244 start.go:349] cluster config:
	{Name:kubernetes-upgrade-360986 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-360986 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 19:42:59.524410  627244 out.go:179] * Starting "kubernetes-upgrade-360986" primary control-plane node in "kubernetes-upgrade-360986" cluster
	I1027 19:42:59.525658  627244 cache.go:123] Beginning downloading kic base image for docker with crio
	I1027 19:42:59.527002  627244 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1027 19:42:59.528409  627244 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 19:42:59.528467  627244 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21801-352833/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1027 19:42:59.528486  627244 cache.go:58] Caching tarball of preloaded images
	I1027 19:42:59.528542  627244 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1027 19:42:59.528608  627244 preload.go:233] Found /home/jenkins/minikube-integration/21801-352833/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1027 19:42:59.528624  627244 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1027 19:42:59.528735  627244 profile.go:143] Saving config to /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/kubernetes-upgrade-360986/config.json ...
	I1027 19:42:59.554899  627244 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1027 19:42:59.554924  627244 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1027 19:42:59.554945  627244 cache.go:232] Successfully downloaded all kic artifacts
	I1027 19:42:59.554975  627244 start.go:360] acquireMachinesLock for kubernetes-upgrade-360986: {Name:mkf8f27ced9f308ced512f76c3a6bd5971edf40f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 19:42:59.555049  627244 start.go:364] duration metric: took 48.863µs to acquireMachinesLock for "kubernetes-upgrade-360986"
	I1027 19:42:59.555074  627244 start.go:96] Skipping create...Using existing machine configuration
	I1027 19:42:59.555081  627244 fix.go:54] fixHost starting: 
	I1027 19:42:59.555388  627244 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-360986 --format={{.State.Status}}
	I1027 19:42:59.577702  627244 fix.go:112] recreateIfNeeded on kubernetes-upgrade-360986: state=Running err=<nil>
	W1027 19:42:59.577741  627244 fix.go:138] unexpected machine state, will restart: <nil>
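A note on the trace above: the fixHost path reuses the existing machine rather than recreating it, based on the state probe shown. That probe can be replayed by hand; a minimal sketch, using the profile name from this trace:

	docker container inspect kubernetes-upgrade-360986 --format={{.State.Status}}
	# prints "Running" for this run, matching state=Running in the trace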
	
	
	==> CRI-O <==
	Oct 27 19:42:58 newest-cni-677710 crio[518]: time="2025-10-27T19:42:58.007459313Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:42:58 newest-cni-677710 crio[518]: time="2025-10-27T19:42:58.011995812Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=ec6683b4-a830-487e-b027-1992b92a4851 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 27 19:42:58 newest-cni-677710 crio[518]: time="2025-10-27T19:42:58.013810264Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=2b1b7428-f72f-478a-9f72-9ea2e344429b name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 27 19:42:58 newest-cni-677710 crio[518]: time="2025-10-27T19:42:58.015051876Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 27 19:42:58 newest-cni-677710 crio[518]: time="2025-10-27T19:42:58.015769736Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 27 19:42:58 newest-cni-677710 crio[518]: time="2025-10-27T19:42:58.016027947Z" level=info msg="Ran pod sandbox 44a3d69e30e259441eb11a1733586eabfb1cda95d794a3e80b737cbf35c598e2 with infra container: kube-system/kube-proxy-zg8ds/POD" id=ec6683b4-a830-487e-b027-1992b92a4851 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 27 19:42:58 newest-cni-677710 crio[518]: time="2025-10-27T19:42:58.016652394Z" level=info msg="Ran pod sandbox d41433b7b4244e4fcc60f6aa5115561ee4e19f2e8bb290e903e2d494d7edef84 with infra container: kube-system/kindnet-w6m47/POD" id=2b1b7428-f72f-478a-9f72-9ea2e344429b name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 27 19:42:58 newest-cni-677710 crio[518]: time="2025-10-27T19:42:58.017505226Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=ed145495-c88e-468f-bee4-33245b4933c9 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 19:42:58 newest-cni-677710 crio[518]: time="2025-10-27T19:42:58.017716621Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=509388e4-762b-4e95-911f-5fde88445a72 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 19:42:58 newest-cni-677710 crio[518]: time="2025-10-27T19:42:58.018822249Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=1bb7b900-1e0c-40c1-975e-1233b2cfae5d name=/runtime.v1.ImageService/ImageStatus
	Oct 27 19:42:58 newest-cni-677710 crio[518]: time="2025-10-27T19:42:58.019914719Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=d94ecd65-5b80-41ba-a57e-027914d6c2f5 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 19:42:58 newest-cni-677710 crio[518]: time="2025-10-27T19:42:58.02009217Z" level=info msg="Creating container: kube-system/kube-proxy-zg8ds/kube-proxy" id=4653076f-3af6-4556-ad23-983ee8e6e04c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 19:42:58 newest-cni-677710 crio[518]: time="2025-10-27T19:42:58.020267429Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:42:58 newest-cni-677710 crio[518]: time="2025-10-27T19:42:58.021071851Z" level=info msg="Creating container: kube-system/kindnet-w6m47/kindnet-cni" id=962d7982-8851-4012-9513-f02c2f082338 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 19:42:58 newest-cni-677710 crio[518]: time="2025-10-27T19:42:58.021214114Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:42:58 newest-cni-677710 crio[518]: time="2025-10-27T19:42:58.027109233Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:42:58 newest-cni-677710 crio[518]: time="2025-10-27T19:42:58.027776371Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:42:58 newest-cni-677710 crio[518]: time="2025-10-27T19:42:58.029167224Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:42:58 newest-cni-677710 crio[518]: time="2025-10-27T19:42:58.029845076Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:42:58 newest-cni-677710 crio[518]: time="2025-10-27T19:42:58.064819577Z" level=info msg="Created container 147a5a9fd3a0b246651bfeafac4f2aa71fa51126a3a0b5650e6b7fee3004479f: kube-system/kindnet-w6m47/kindnet-cni" id=962d7982-8851-4012-9513-f02c2f082338 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 19:42:58 newest-cni-677710 crio[518]: time="2025-10-27T19:42:58.065824938Z" level=info msg="Starting container: 147a5a9fd3a0b246651bfeafac4f2aa71fa51126a3a0b5650e6b7fee3004479f" id=de439795-0464-468f-a2fc-c5b1a8a7cab5 name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 19:42:58 newest-cni-677710 crio[518]: time="2025-10-27T19:42:58.069756736Z" level=info msg="Created container f2220f3ce73097ad9f9c01a9488587842fecbdfe180097b0e67b5c96c7b7cb68: kube-system/kube-proxy-zg8ds/kube-proxy" id=4653076f-3af6-4556-ad23-983ee8e6e04c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 19:42:58 newest-cni-677710 crio[518]: time="2025-10-27T19:42:58.070633736Z" level=info msg="Starting container: f2220f3ce73097ad9f9c01a9488587842fecbdfe180097b0e67b5c96c7b7cb68" id=c05a4a3e-ec2a-4da8-a41a-b5ba9e87fe4c name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 19:42:58 newest-cni-677710 crio[518]: time="2025-10-27T19:42:58.071499819Z" level=info msg="Started container" PID=1035 containerID=147a5a9fd3a0b246651bfeafac4f2aa71fa51126a3a0b5650e6b7fee3004479f description=kube-system/kindnet-w6m47/kindnet-cni id=de439795-0464-468f-a2fc-c5b1a8a7cab5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d41433b7b4244e4fcc60f6aa5115561ee4e19f2e8bb290e903e2d494d7edef84
	Oct 27 19:42:58 newest-cni-677710 crio[518]: time="2025-10-27T19:42:58.074725104Z" level=info msg="Started container" PID=1034 containerID=f2220f3ce73097ad9f9c01a9488587842fecbdfe180097b0e67b5c96c7b7cb68 description=kube-system/kube-proxy-zg8ds/kube-proxy id=c05a4a3e-ec2a-4da8-a41a-b5ba9e87fe4c name=/runtime.v1.RuntimeService/StartContainer sandboxID=44a3d69e30e259441eb11a1733586eabfb1cda95d794a3e80b737cbf35c598e2
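The CRI-O entries above can also be tailed directly on the node; a sketch, assuming cri-o runs as a systemd unit as it does in the kicbase image:

	out/minikube-linux-amd64 -p newest-cni-677710 ssh -- sudo journalctl -u crio --no-pager -n 25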
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	147a5a9fd3a0b       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   4 seconds ago       Running             kindnet-cni               1                   d41433b7b4244       kindnet-w6m47                               kube-system
	f2220f3ce7309       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   4 seconds ago       Running             kube-proxy                1                   44a3d69e30e25       kube-proxy-zg8ds                            kube-system
	44a615e284a4d       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   8 seconds ago       Running             kube-apiserver            1                   65704888bea48       kube-apiserver-newest-cni-677710            kube-system
	41f7a785712cd       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   8 seconds ago       Running             etcd                      1                   2dbe909ae20fc       etcd-newest-cni-677710                      kube-system
	73167847edee5       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   8 seconds ago       Running             kube-controller-manager   1                   11f985a2eee8b       kube-controller-manager-newest-cni-677710   kube-system
	cece14cf0a526       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   8 seconds ago       Running             kube-scheduler            1                   f781a9623a17b       kube-scheduler-newest-cni-677710            kube-system
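The same listing can be pulled straight from the CRI endpoint; a sketch, assuming crictl is present on the node as it is in the kicbase image:

	out/minikube-linux-amd64 -p newest-cni-677710 ssh -- sudo crictl ps -a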
	
	
	==> describe nodes <==
	Name:               newest-cni-677710
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-677710
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a09c31caecd3126cc8337ca95e52380a56f5a0f
	                    minikube.k8s.io/name=newest-cni-677710
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_27T19_42_29_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Oct 2025 19:42:26 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-677710
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Oct 2025 19:42:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Oct 2025 19:42:57 +0000   Mon, 27 Oct 2025 19:42:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Oct 2025 19:42:57 +0000   Mon, 27 Oct 2025 19:42:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Oct 2025 19:42:57 +0000   Mon, 27 Oct 2025 19:42:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Mon, 27 Oct 2025 19:42:57 +0000   Mon, 27 Oct 2025 19:42:24 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    newest-cni-677710
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                0e5b8dde-ff0d-4017-8bb0-5ec4905459bd
	  Boot ID:                    811bd29c-e64e-4acc-9427-bab1f7caed93
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-677710                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         34s
	  kube-system                 kindnet-w6m47                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-newest-cni-677710             250m (3%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-controller-manager-newest-cni-677710    200m (2%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-proxy-zg8ds                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-newest-cni-677710             100m (1%)     0 (0%)      0 (0%)           0 (0%)         34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age              From             Message
	  ----    ------                   ----             ----             -------
	  Normal  Starting                 27s              kube-proxy       
	  Normal  Starting                 4s               kube-proxy       
	  Normal  Starting                 34s              kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  34s              kubelet          Node newest-cni-677710 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    34s              kubelet          Node newest-cni-677710 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     34s              kubelet          Node newest-cni-677710 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           29s              node-controller  Node newest-cni-677710 event: Registered Node newest-cni-677710 in Controller
	  Normal  Starting                 9s               kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9s (x8 over 9s)  kubelet          Node newest-cni-677710 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9s (x8 over 9s)  kubelet          Node newest-cni-677710 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9s (x8 over 9s)  kubelet          Node newest-cni-677710 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           2s               node-controller  Node newest-cni-677710 event: Registered Node newest-cni-677710 in Controller
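The Ready=False condition above blames a missing CNI config in /etc/cni/net.d/, which is plausible this soon after the restart: the kindnet container had been up for only about 4 seconds at this snapshot, and kindnet normally writes its conflist there shortly after starting. A quick check for whether it has landed; a sketch:

	out/minikube-linux-amd64 -p newest-cni-677710 ssh -- ls /etc/cni/net.d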
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5a 23 52 43 9a ba 08 06
	[  +0.000398] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 12 50 95 0e df 53 08 06
	[Oct27 18:59] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +1.017295] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +1.023893] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +1.023882] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +1.023891] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +1.023851] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +2.047849] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +4.031592] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +8.319143] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[ +16.382183] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[Oct27 19:00] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	
	
	==> etcd [41f7a785712cd25ed1b323d20fb0cdc81e6e1275a58915470cc7154bf52a2176] <==
	{"level":"warn","ts":"2025-10-27T19:42:56.123959Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:42:56.135353Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:42:56.157713Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51058","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:42:56.172698Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:42:56.187290Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:42:56.196406Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:42:56.205711Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:42:56.217762Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51142","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:42:56.222039Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:42:56.234217Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:42:56.252833Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:42:56.259522Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:42:56.270208Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51218","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:42:56.280609Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51240","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:42:56.297125Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:42:56.320060Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:42:56.326395Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:42:56.335589Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:42:56.356968Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:42:56.369742Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:42:56.385637Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:42:56.390946Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:42:56.399376Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:42:56.408713Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:42:56.491004Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51452","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 19:43:02 up  2:25,  0 user,  load average: 6.69, 4.07, 2.47
	Linux newest-cni-677710 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [147a5a9fd3a0b246651bfeafac4f2aa71fa51126a3a0b5650e6b7fee3004479f] <==
	I1027 19:42:58.248668       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1027 19:42:58.249107       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1027 19:42:58.249276       1 main.go:148] setting mtu 1500 for CNI 
	I1027 19:42:58.249294       1 main.go:178] kindnetd IP family: "ipv4"
	I1027 19:42:58.249313       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-27T19:42:58Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1027 19:42:58.556744       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1027 19:42:58.556773       1 controller.go:381] "Waiting for informer caches to sync"
	I1027 19:42:58.556784       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1027 19:42:58.643471       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1027 19:42:58.856896       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1027 19:42:58.856964       1 metrics.go:72] Registering metrics
	I1027 19:42:58.857082       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [44a615e284a4d0c1a9cd591789628eef5abb12b0322cda33a9c30c087dbfcc6c] <==
	I1027 19:42:57.100584       1 aggregator.go:171] initial CRD sync complete...
	I1027 19:42:57.100595       1 autoregister_controller.go:144] Starting autoregister controller
	I1027 19:42:57.100603       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1027 19:42:57.100610       1 cache.go:39] Caches are synced for autoregister controller
	I1027 19:42:57.101066       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1027 19:42:57.101287       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 19:42:57.102214       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1027 19:42:57.104598       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1027 19:42:57.107797       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1027 19:42:57.107824       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1027 19:42:57.129338       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1027 19:42:57.132782       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1027 19:42:57.133370       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1027 19:42:57.133432       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1027 19:42:57.503271       1 controller.go:667] quota admission added evaluator for: namespaces
	I1027 19:42:57.537249       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1027 19:42:57.560385       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1027 19:42:57.570708       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1027 19:42:57.578224       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1027 19:42:57.618927       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.110.109.175"}
	I1027 19:42:57.630683       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.139.166"}
	I1027 19:42:57.999646       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1027 19:43:00.767437       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1027 19:43:00.817401       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1027 19:43:00.917050       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [73167847edee577234af246f7849876030d523eb4d523a4c2b5bbd0694b79ad5] <==
	I1027 19:43:00.390933       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1027 19:43:00.390963       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1027 19:43:00.392061       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1027 19:43:00.393406       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1027 19:43:00.396422       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1027 19:43:00.399107       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1027 19:43:00.413110       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 19:43:00.413150       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1027 19:43:00.413159       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1027 19:43:00.413807       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1027 19:43:00.413907       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1027 19:43:00.414187       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1027 19:43:00.414191       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1027 19:43:00.414626       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1027 19:43:00.415036       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1027 19:43:00.415059       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1027 19:43:00.415077       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1027 19:43:00.421433       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 19:43:00.422597       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 19:43:00.426467       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1027 19:43:00.429631       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1027 19:43:00.433334       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1027 19:43:00.436631       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1027 19:43:00.441913       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1027 19:43:00.492407       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [f2220f3ce73097ad9f9c01a9488587842fecbdfe180097b0e67b5c96c7b7cb68] <==
	I1027 19:42:58.122634       1 server_linux.go:53] "Using iptables proxy"
	I1027 19:42:58.200564       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1027 19:42:58.301479       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1027 19:42:58.301522       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1027 19:42:58.301636       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1027 19:42:58.324903       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1027 19:42:58.324973       1 server_linux.go:132] "Using iptables Proxier"
	I1027 19:42:58.331152       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1027 19:42:58.331655       1 server.go:527] "Version info" version="v1.34.1"
	I1027 19:42:58.331702       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 19:42:58.333115       1 config.go:200] "Starting service config controller"
	I1027 19:42:58.333143       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1027 19:42:58.333199       1 config.go:403] "Starting serviceCIDR config controller"
	I1027 19:42:58.333212       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1027 19:42:58.333240       1 config.go:106] "Starting endpoint slice config controller"
	I1027 19:42:58.333245       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1027 19:42:58.333312       1 config.go:309] "Starting node config controller"
	I1027 19:42:58.333323       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1027 19:42:58.433286       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1027 19:42:58.433313       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1027 19:42:58.433402       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1027 19:42:58.433568       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [cece14cf0a526d107de8be5cc2a837da1df540d883b98e8589946416af07067b] <==
	I1027 19:42:55.870990       1 serving.go:386] Generated self-signed cert in-memory
	I1027 19:42:57.069483       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1027 19:42:57.069517       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 19:42:57.077276       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1027 19:42:57.077534       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1027 19:42:57.078129       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1027 19:42:57.077417       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 19:42:57.078249       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1027 19:42:57.078255       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 19:42:57.077454       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1027 19:42:57.078448       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1027 19:42:57.178833       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1027 19:42:57.178924       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1027 19:42:57.184912       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 27 19:42:55 newest-cni-677710 kubelet[661]: E1027 19:42:55.814194     661 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-677710\" not found" node="newest-cni-677710"
	Oct 27 19:42:55 newest-cni-677710 kubelet[661]: E1027 19:42:55.816241     661 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-677710\" not found" node="newest-cni-677710"
	Oct 27 19:42:55 newest-cni-677710 kubelet[661]: E1027 19:42:55.817112     661 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-677710\" not found" node="newest-cni-677710"
	Oct 27 19:42:57 newest-cni-677710 kubelet[661]: I1027 19:42:57.102331     661 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-677710"
	Oct 27 19:42:57 newest-cni-677710 kubelet[661]: I1027 19:42:57.150621     661 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-677710"
	Oct 27 19:42:57 newest-cni-677710 kubelet[661]: I1027 19:42:57.150739     661 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-677710"
	Oct 27 19:42:57 newest-cni-677710 kubelet[661]: I1027 19:42:57.150779     661 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 27 19:42:57 newest-cni-677710 kubelet[661]: I1027 19:42:57.151675     661 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 27 19:42:57 newest-cni-677710 kubelet[661]: E1027 19:42:57.161106     661 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-677710\" already exists" pod="kube-system/kube-scheduler-newest-cni-677710"
	Oct 27 19:42:57 newest-cni-677710 kubelet[661]: I1027 19:42:57.161510     661 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-677710"
	Oct 27 19:42:57 newest-cni-677710 kubelet[661]: E1027 19:42:57.180626     661 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-677710\" already exists" pod="kube-system/etcd-newest-cni-677710"
	Oct 27 19:42:57 newest-cni-677710 kubelet[661]: I1027 19:42:57.180674     661 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-677710"
	Oct 27 19:42:57 newest-cni-677710 kubelet[661]: E1027 19:42:57.192422     661 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-677710\" already exists" pod="kube-system/kube-apiserver-newest-cni-677710"
	Oct 27 19:42:57 newest-cni-677710 kubelet[661]: I1027 19:42:57.192645     661 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-677710"
	Oct 27 19:42:57 newest-cni-677710 kubelet[661]: E1027 19:42:57.200794     661 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-677710\" already exists" pod="kube-system/kube-controller-manager-newest-cni-677710"
	Oct 27 19:42:57 newest-cni-677710 kubelet[661]: I1027 19:42:57.699111     661 apiserver.go:52] "Watching apiserver"
	Oct 27 19:42:57 newest-cni-677710 kubelet[661]: I1027 19:42:57.801619     661 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 27 19:42:57 newest-cni-677710 kubelet[661]: I1027 19:42:57.827595     661 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e1b6e2a6-b271-4a01-8cfe-c10f73bd2f4d-xtables-lock\") pod \"kindnet-w6m47\" (UID: \"e1b6e2a6-b271-4a01-8cfe-c10f73bd2f4d\") " pod="kube-system/kindnet-w6m47"
	Oct 27 19:42:57 newest-cni-677710 kubelet[661]: I1027 19:42:57.827674     661 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/89658cd8-0d1d-4a33-a913-add5cbd50df0-lib-modules\") pod \"kube-proxy-zg8ds\" (UID: \"89658cd8-0d1d-4a33-a913-add5cbd50df0\") " pod="kube-system/kube-proxy-zg8ds"
	Oct 27 19:42:57 newest-cni-677710 kubelet[661]: I1027 19:42:57.827723     661 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/89658cd8-0d1d-4a33-a913-add5cbd50df0-xtables-lock\") pod \"kube-proxy-zg8ds\" (UID: \"89658cd8-0d1d-4a33-a913-add5cbd50df0\") " pod="kube-system/kube-proxy-zg8ds"
	Oct 27 19:42:57 newest-cni-677710 kubelet[661]: I1027 19:42:57.827747     661 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/e1b6e2a6-b271-4a01-8cfe-c10f73bd2f4d-cni-cfg\") pod \"kindnet-w6m47\" (UID: \"e1b6e2a6-b271-4a01-8cfe-c10f73bd2f4d\") " pod="kube-system/kindnet-w6m47"
	Oct 27 19:42:57 newest-cni-677710 kubelet[661]: I1027 19:42:57.827777     661 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e1b6e2a6-b271-4a01-8cfe-c10f73bd2f4d-lib-modules\") pod \"kindnet-w6m47\" (UID: \"e1b6e2a6-b271-4a01-8cfe-c10f73bd2f4d\") " pod="kube-system/kindnet-w6m47"
	Oct 27 19:42:59 newest-cni-677710 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 27 19:42:59 newest-cni-677710 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 27 19:42:59 newest-cni-677710 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
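The closing entries show systemd stopping kubelet, consistent with the pause under test (minikube pause stops the kubelet unit). Whether the stop stuck can be checked directly; a sketch:

	out/minikube-linux-amd64 -p newest-cni-677710 ssh -- sudo systemctl is-active kubelet
	# reports "inactive" once the pause has taken effect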
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-677710 -n newest-cni-677710
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-677710 -n newest-cni-677710: exit status 2 (455.973388ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-677710 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-rv72d storage-provisioner dashboard-metrics-scraper-6ffb444bf9-rh7rf kubernetes-dashboard-855c9754f9-lflsk
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-677710 describe pod coredns-66bc5c9577-rv72d storage-provisioner dashboard-metrics-scraper-6ffb444bf9-rh7rf kubernetes-dashboard-855c9754f9-lflsk
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-677710 describe pod coredns-66bc5c9577-rv72d storage-provisioner dashboard-metrics-scraper-6ffb444bf9-rh7rf kubernetes-dashboard-855c9754f9-lflsk: exit status 1 (91.547499ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-rv72d" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-rh7rf" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-lflsk" not found

** /stderr **
helpers_test.go:287: kubectl --context newest-cni-677710 describe pod coredns-66bc5c9577-rv72d storage-provisioner dashboard-metrics-scraper-6ffb444bf9-rh7rf kubernetes-dashboard-855c9754f9-lflsk: exit status 1
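The non-running-pod sweep the post-mortem ran above is useful on its own; restated as a standalone sketch (same flags as the run logged at helpers_test.go:269):

	kubectl --context newest-cni-677710 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
	# for this run: the four pod names listed above, none of which still existed by describe time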
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-677710
helpers_test.go:243: (dbg) docker inspect newest-cni-677710:

-- stdout --
	[
	    {
	        "Id": "62fa20be8510c822a3af68e19caac8c39efa6f456f35c096ab55d9be979a15a7",
	        "Created": "2025-10-27T19:42:13.174761527Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 623139,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-27T19:42:45.840955816Z",
	            "FinishedAt": "2025-10-27T19:42:44.813791073Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/62fa20be8510c822a3af68e19caac8c39efa6f456f35c096ab55d9be979a15a7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/62fa20be8510c822a3af68e19caac8c39efa6f456f35c096ab55d9be979a15a7/hostname",
	        "HostsPath": "/var/lib/docker/containers/62fa20be8510c822a3af68e19caac8c39efa6f456f35c096ab55d9be979a15a7/hosts",
	        "LogPath": "/var/lib/docker/containers/62fa20be8510c822a3af68e19caac8c39efa6f456f35c096ab55d9be979a15a7/62fa20be8510c822a3af68e19caac8c39efa6f456f35c096ab55d9be979a15a7-json.log",
	        "Name": "/newest-cni-677710",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-677710:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-677710",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "62fa20be8510c822a3af68e19caac8c39efa6f456f35c096ab55d9be979a15a7",
	                "LowerDir": "/var/lib/docker/overlay2/cb56fe71dd86daf61eed2c8feacba9932a7ceba7713d274439236e8bf12ab0c5-init/diff:/var/lib/docker/overlay2/71b61ec94610a35f2d924dec358052d4c154c36b3fe219802f60246ca2dc7f45/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cb56fe71dd86daf61eed2c8feacba9932a7ceba7713d274439236e8bf12ab0c5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cb56fe71dd86daf61eed2c8feacba9932a7ceba7713d274439236e8bf12ab0c5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cb56fe71dd86daf61eed2c8feacba9932a7ceba7713d274439236e8bf12ab0c5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-677710",
	                "Source": "/var/lib/docker/volumes/newest-cni-677710/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-677710",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-677710",
	                "name.minikube.sigs.k8s.io": "newest-cni-677710",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "95885c2842549bdbef9832614fc4a83000820e3bdc2178f50d16d47481af6228",
	            "SandboxKey": "/var/run/docker/netns/95885c284254",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33470"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33471"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33474"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33472"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33473"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-677710": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "d6:cc:97:e9:91:f5",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "de6d8a0af7735692f0f5ebcc3cc03e69c8662e213ca8fd268387cc9a0ddf92b8",
	                    "EndpointID": "67d640bb52a67e909b2d2906009f7a8a0cd1507c951892926b68c9e90e85f1fa",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-677710",
	                        "62fa20be8510"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
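Most of the inspect dump above is noise when the question is simply where a guest port is published; docker can answer that directly. A sketch for the API server port:

	docker port newest-cni-677710 8443
	# 127.0.0.1:33473 for this run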
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-677710 -n newest-cni-677710
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-677710 -n newest-cni-677710: exit status 2 (411.315753ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-677710 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-677710 logs -n 25: (1.247566764s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p default-k8s-diff-port-813397 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-813397 │ jenkins │ v1.37.0 │ 27 Oct 25 19:41 UTC │ 27 Oct 25 19:42 UTC │
	│ addons  │ enable dashboard -p no-preload-095885 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-095885            │ jenkins │ v1.37.0 │ 27 Oct 25 19:41 UTC │ 27 Oct 25 19:41 UTC │
	│ start   │ -p no-preload-095885 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-095885            │ jenkins │ v1.37.0 │ 27 Oct 25 19:41 UTC │ 27 Oct 25 19:42 UTC │
	│ image   │ embed-certs-919237 image list --format=json                                                                                                                                                                                                   │ embed-certs-919237           │ jenkins │ v1.37.0 │ 27 Oct 25 19:41 UTC │ 27 Oct 25 19:41 UTC │
	│ pause   │ -p embed-certs-919237 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-919237           │ jenkins │ v1.37.0 │ 27 Oct 25 19:41 UTC │                     │
	│ delete  │ -p embed-certs-919237                                                                                                                                                                                                                         │ embed-certs-919237           │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │ 27 Oct 25 19:42 UTC │
	│ delete  │ -p embed-certs-919237                                                                                                                                                                                                                         │ embed-certs-919237           │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │ 27 Oct 25 19:42 UTC │
	│ start   │ -p newest-cni-677710 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-677710            │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │ 27 Oct 25 19:42 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-813397 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-813397 │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-813397 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-813397 │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │ 27 Oct 25 19:42 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-813397 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-813397 │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │ 27 Oct 25 19:42 UTC │
	│ start   │ -p default-k8s-diff-port-813397 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-813397 │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │                     │
	│ image   │ no-preload-095885 image list --format=json                                                                                                                                                                                                    │ no-preload-095885            │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │ 27 Oct 25 19:42 UTC │
	│ pause   │ -p no-preload-095885 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-095885            │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-677710 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-677710            │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │                     │
	│ stop    │ -p newest-cni-677710 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-677710            │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │ 27 Oct 25 19:42 UTC │
	│ delete  │ -p no-preload-095885                                                                                                                                                                                                                          │ no-preload-095885            │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │ 27 Oct 25 19:42 UTC │
	│ delete  │ -p no-preload-095885                                                                                                                                                                                                                          │ no-preload-095885            │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │ 27 Oct 25 19:42 UTC │
	│ start   │ -p auto-387383 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-387383                  │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │                     │
	│ addons  │ enable dashboard -p newest-cni-677710 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-677710            │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │ 27 Oct 25 19:42 UTC │
	│ start   │ -p newest-cni-677710 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-677710            │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │ 27 Oct 25 19:42 UTC │
	│ start   │ -p kubernetes-upgrade-360986 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-360986    │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │                     │
	│ image   │ newest-cni-677710 image list --format=json                                                                                                                                                                                                    │ newest-cni-677710            │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │ 27 Oct 25 19:42 UTC │
	│ start   │ -p kubernetes-upgrade-360986 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-360986    │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │                     │
	│ pause   │ -p newest-cni-677710 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-677710            │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 19:42:59
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
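For example, the first entry below, `I1027 19:42:59.326467  627244 out.go:360]`, decodes under that format as severity I (info), date 10-27 (mmdd), time 19:42:59.326467, thread id 627244, and source location out.go:360.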
	I1027 19:42:59.326467  627244 out.go:360] Setting OutFile to fd 1 ...
	I1027 19:42:59.326737  627244 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:42:59.326748  627244 out.go:374] Setting ErrFile to fd 2...
	I1027 19:42:59.326752  627244 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:42:59.326933  627244 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-352833/.minikube/bin
	I1027 19:42:59.327391  627244 out.go:368] Setting JSON to false
	I1027 19:42:59.328804  627244 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":8728,"bootTime":1761585451,"procs":443,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1027 19:42:59.328889  627244 start.go:141] virtualization: kvm guest
	I1027 19:42:59.331041  627244 out.go:179] * [kubernetes-upgrade-360986] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1027 19:42:59.332118  627244 out.go:179]   - MINIKUBE_LOCATION=21801
	I1027 19:42:59.332110  627244 notify.go:220] Checking for updates...
	I1027 19:42:59.334362  627244 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 19:42:59.335707  627244 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21801-352833/kubeconfig
	I1027 19:42:59.336856  627244 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21801-352833/.minikube
	I1027 19:42:59.337909  627244 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1027 19:42:59.338956  627244 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 19:42:59.340627  627244 config.go:182] Loaded profile config "kubernetes-upgrade-360986": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:42:59.341384  627244 driver.go:421] Setting default libvirt URI to qemu:///system
	I1027 19:42:59.372269  627244 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1027 19:42:59.372461  627244 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 19:42:59.450648  627244 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:87 SystemTime:2025-10-27 19:42:59.440108546 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 19:42:59.450765  627244 docker.go:318] overlay module found
	I1027 19:42:59.452587  627244 out.go:179] * Using the docker driver based on existing profile
	I1027 19:42:59.453958  627244 start.go:305] selected driver: docker
	I1027 19:42:59.453983  627244 start.go:925] validating driver "docker" against &{Name:kubernetes-upgrade-360986 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-360986 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 19:42:59.454091  627244 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 19:42:59.454814  627244 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 19:42:59.521437  627244 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:87 SystemTime:2025-10-27 19:42:59.508704904 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 19:42:59.521931  627244 cni.go:84] Creating CNI manager for ""
	I1027 19:42:59.522017  627244 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 19:42:59.522067  627244 start.go:349] cluster config:
	{Name:kubernetes-upgrade-360986 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-360986 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 19:42:59.524410  627244 out.go:179] * Starting "kubernetes-upgrade-360986" primary control-plane node in "kubernetes-upgrade-360986" cluster
	I1027 19:42:59.525658  627244 cache.go:123] Beginning downloading kic base image for docker with crio
	I1027 19:42:59.527002  627244 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1027 19:42:59.528409  627244 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 19:42:59.528467  627244 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21801-352833/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1027 19:42:59.528486  627244 cache.go:58] Caching tarball of preloaded images
	I1027 19:42:59.528542  627244 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1027 19:42:59.528608  627244 preload.go:233] Found /home/jenkins/minikube-integration/21801-352833/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1027 19:42:59.528624  627244 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1027 19:42:59.528735  627244 profile.go:143] Saving config to /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/kubernetes-upgrade-360986/config.json ...
	I1027 19:42:59.554899  627244 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1027 19:42:59.554924  627244 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1027 19:42:59.554945  627244 cache.go:232] Successfully downloaded all kic artifacts
	I1027 19:42:59.554975  627244 start.go:360] acquireMachinesLock for kubernetes-upgrade-360986: {Name:mkf8f27ced9f308ced512f76c3a6bd5971edf40f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 19:42:59.555049  627244 start.go:364] duration metric: took 48.863µs to acquireMachinesLock for "kubernetes-upgrade-360986"
	I1027 19:42:59.555074  627244 start.go:96] Skipping create...Using existing machine configuration
	I1027 19:42:59.555081  627244 fix.go:54] fixHost starting: 
	I1027 19:42:59.555388  627244 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-360986 --format={{.State.Status}}
	I1027 19:42:59.577702  627244 fix.go:112] recreateIfNeeded on kubernetes-upgrade-360986: state=Running err=<nil>
	W1027 19:42:59.577741  627244 fix.go:138] unexpected machine state, will restart: <nil>
	W1027 19:42:59.419824  616341 pod_ready.go:104] pod "coredns-66bc5c9577-d2trp" is not "Ready", error: <nil>
	W1027 19:43:01.924820  616341 pod_ready.go:104] pod "coredns-66bc5c9577-d2trp" is not "Ready", error: <nil>
	I1027 19:42:59.579400  627244 out.go:252] * Updating the running docker "kubernetes-upgrade-360986" container ...
	I1027 19:42:59.579437  627244 machine.go:93] provisionDockerMachine start ...
	I1027 19:42:59.579519  627244 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-360986
	I1027 19:42:59.604411  627244 main.go:141] libmachine: Using SSH client type: native
	I1027 19:42:59.604785  627244 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33410 <nil> <nil>}
	I1027 19:42:59.604801  627244 main.go:141] libmachine: About to run SSH command:
	hostname
	I1027 19:42:59.765877  627244 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-360986
	
	I1027 19:42:59.765928  627244 ubuntu.go:182] provisioning hostname "kubernetes-upgrade-360986"
	I1027 19:42:59.766013  627244 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-360986
	I1027 19:42:59.787786  627244 main.go:141] libmachine: Using SSH client type: native
	I1027 19:42:59.788067  627244 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33410 <nil> <nil>}
	I1027 19:42:59.788082  627244 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-360986 && echo "kubernetes-upgrade-360986" | sudo tee /etc/hostname
	I1027 19:42:59.950833  627244 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-360986
	
	I1027 19:42:59.950916  627244 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-360986
	I1027 19:42:59.971702  627244 main.go:141] libmachine: Using SSH client type: native
	I1027 19:42:59.971938  627244 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33410 <nil> <nil>}
	I1027 19:42:59.971958  627244 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-360986' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-360986/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-360986' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1027 19:43:00.122633  627244 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1027 19:43:00.122669  627244 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21801-352833/.minikube CaCertPath:/home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21801-352833/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21801-352833/.minikube}
	I1027 19:43:00.122710  627244 ubuntu.go:190] setting up certificates
	I1027 19:43:00.122733  627244 provision.go:84] configureAuth start
	I1027 19:43:00.122797  627244 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-360986
	I1027 19:43:00.142941  627244 provision.go:143] copyHostCerts
	I1027 19:43:00.143028  627244 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-352833/.minikube/ca.pem, removing ...
	I1027 19:43:00.143046  627244 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-352833/.minikube/ca.pem
	I1027 19:43:00.143115  627244 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21801-352833/.minikube/ca.pem (1078 bytes)
	I1027 19:43:00.143259  627244 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-352833/.minikube/cert.pem, removing ...
	I1027 19:43:00.143272  627244 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-352833/.minikube/cert.pem
	I1027 19:43:00.143302  627244 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21801-352833/.minikube/cert.pem (1123 bytes)
	I1027 19:43:00.143383  627244 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-352833/.minikube/key.pem, removing ...
	I1027 19:43:00.143392  627244 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-352833/.minikube/key.pem
	I1027 19:43:00.143418  627244 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21801-352833/.minikube/key.pem (1679 bytes)
	I1027 19:43:00.143491  627244 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21801-352833/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-360986 san=[127.0.0.1 192.168.103.2 kubernetes-upgrade-360986 localhost minikube]
	I1027 19:43:00.464265  627244 provision.go:177] copyRemoteCerts
	I1027 19:43:00.464328  627244 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1027 19:43:00.464367  627244 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-360986
	I1027 19:43:00.486881  627244 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33410 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/kubernetes-upgrade-360986/id_rsa Username:docker}
	I1027 19:43:00.594704  627244 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1027 19:43:00.615867  627244 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1027 19:43:00.637159  627244 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1027 19:43:00.656440  627244 provision.go:87] duration metric: took 533.690253ms to configureAuth
	I1027 19:43:00.656479  627244 ubuntu.go:206] setting minikube options for container-runtime
	I1027 19:43:00.656678  627244 config.go:182] Loaded profile config "kubernetes-upgrade-360986": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:43:00.656801  627244 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-360986
	I1027 19:43:00.676596  627244 main.go:141] libmachine: Using SSH client type: native
	I1027 19:43:00.676894  627244 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33410 <nil> <nil>}
	I1027 19:43:00.676924  627244 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1027 19:43:01.569923  627244 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1027 19:43:01.569953  627244 machine.go:96] duration metric: took 1.990506703s to provisionDockerMachine
	I1027 19:43:01.569985  627244 start.go:293] postStartSetup for "kubernetes-upgrade-360986" (driver="docker")
	I1027 19:43:01.569999  627244 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1027 19:43:01.570072  627244 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1027 19:43:01.570127  627244 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-360986
	I1027 19:43:01.593728  627244 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33410 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/kubernetes-upgrade-360986/id_rsa Username:docker}
	I1027 19:43:01.700935  627244 ssh_runner.go:195] Run: cat /etc/os-release
	I1027 19:43:01.707551  627244 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1027 19:43:01.707593  627244 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1027 19:43:01.707608  627244 filesync.go:126] Scanning /home/jenkins/minikube-integration/21801-352833/.minikube/addons for local assets ...
	I1027 19:43:01.707678  627244 filesync.go:126] Scanning /home/jenkins/minikube-integration/21801-352833/.minikube/files for local assets ...
	I1027 19:43:01.707788  627244 filesync.go:149] local asset: /home/jenkins/minikube-integration/21801-352833/.minikube/files/etc/ssl/certs/3564152.pem -> 3564152.pem in /etc/ssl/certs
	I1027 19:43:01.707936  627244 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1027 19:43:01.718358  627244 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/files/etc/ssl/certs/3564152.pem --> /etc/ssl/certs/3564152.pem (1708 bytes)
	I1027 19:43:01.739670  627244 start.go:296] duration metric: took 169.666644ms for postStartSetup
	I1027 19:43:01.739759  627244 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 19:43:01.739813  627244 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-360986
	I1027 19:43:01.762237  627244 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33410 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/kubernetes-upgrade-360986/id_rsa Username:docker}
	I1027 19:43:01.868119  627244 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1027 19:43:01.874256  627244 fix.go:56] duration metric: took 2.319167622s for fixHost
	I1027 19:43:01.874293  627244 start.go:83] releasing machines lock for "kubernetes-upgrade-360986", held for 2.319229618s
	I1027 19:43:01.874365  627244 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-360986
	I1027 19:43:01.898721  627244 ssh_runner.go:195] Run: cat /version.json
	I1027 19:43:01.898784  627244 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-360986
	I1027 19:43:01.898804  627244 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1027 19:43:01.898883  627244 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-360986
	I1027 19:43:01.927534  627244 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33410 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/kubernetes-upgrade-360986/id_rsa Username:docker}
	I1027 19:43:01.930654  627244 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33410 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/kubernetes-upgrade-360986/id_rsa Username:docker}
	I1027 19:43:02.089559  627244 ssh_runner.go:195] Run: systemctl --version
	I1027 19:43:02.097331  627244 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1027 19:43:02.143850  627244 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1027 19:43:02.148997  627244 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1027 19:43:02.149079  627244 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1027 19:43:02.157982  627244 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1027 19:43:02.158009  627244 start.go:495] detecting cgroup driver to use...
	I1027 19:43:02.158048  627244 detect.go:190] detected "systemd" cgroup driver on host os
	I1027 19:43:02.158098  627244 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1027 19:43:02.177265  627244 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1027 19:43:02.193291  627244 docker.go:218] disabling cri-docker service (if available) ...
	I1027 19:43:02.193368  627244 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1027 19:43:02.211905  627244 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1027 19:43:02.228831  627244 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1027 19:43:02.374228  627244 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1027 19:43:02.504966  627244 docker.go:234] disabling docker service ...
	I1027 19:43:02.505043  627244 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1027 19:43:02.523052  627244 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1027 19:43:02.540081  627244 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1027 19:43:02.664476  627244 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1027 19:43:02.808634  627244 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1027 19:43:02.831649  627244 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1027 19:43:02.862127  627244 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1027 19:43:02.862267  627244 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:43:02.878942  627244 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1027 19:43:02.879088  627244 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:43:02.899657  627244 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:43:02.918703  627244 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:43:02.936073  627244 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1027 19:43:02.948555  627244 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:43:02.962554  627244 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:43:02.975318  627244 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:43:02.986827  627244 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1027 19:43:02.998831  627244 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1027 19:43:03.013307  627244 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 19:43:03.171841  627244 ssh_runner.go:195] Run: sudo systemctl restart crio
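The tee and sed sequence above rewrites the CRI runtime config before the restart. A reconstruction of the resulting state, assembled from the commands themselves rather than dumped from the node (the [crio.runtime]/[crio.image] section headers are assumed from CRI-O's standard config layout):

    # /etc/crictl.yaml (written by the tee above)
    runtime-endpoint: unix:///var/run/crio/crio.sock

    # /etc/crio/crio.conf.d/02-crio.conf, keys touched by the sed edits
    [crio.runtime]
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"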
	I1027 19:43:03.384015  627244 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1027 19:43:03.384105  627244 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1027 19:43:03.390192  627244 start.go:563] Will wait 60s for crictl version
	I1027 19:43:03.390272  627244 ssh_runner.go:195] Run: which crictl
	I1027 19:43:03.395679  627244 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1027 19:43:03.432747  627244 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1027 19:43:03.432828  627244 ssh_runner.go:195] Run: crio --version
	I1027 19:43:03.482383  627244 ssh_runner.go:195] Run: crio --version
	I1027 19:43:03.527630  627244 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	
	
	==> CRI-O <==
	Oct 27 19:42:58 newest-cni-677710 crio[518]: time="2025-10-27T19:42:58.007459313Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:42:58 newest-cni-677710 crio[518]: time="2025-10-27T19:42:58.011995812Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=ec6683b4-a830-487e-b027-1992b92a4851 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 27 19:42:58 newest-cni-677710 crio[518]: time="2025-10-27T19:42:58.013810264Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=2b1b7428-f72f-478a-9f72-9ea2e344429b name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 27 19:42:58 newest-cni-677710 crio[518]: time="2025-10-27T19:42:58.015051876Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 27 19:42:58 newest-cni-677710 crio[518]: time="2025-10-27T19:42:58.015769736Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 27 19:42:58 newest-cni-677710 crio[518]: time="2025-10-27T19:42:58.016027947Z" level=info msg="Ran pod sandbox 44a3d69e30e259441eb11a1733586eabfb1cda95d794a3e80b737cbf35c598e2 with infra container: kube-system/kube-proxy-zg8ds/POD" id=ec6683b4-a830-487e-b027-1992b92a4851 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 27 19:42:58 newest-cni-677710 crio[518]: time="2025-10-27T19:42:58.016652394Z" level=info msg="Ran pod sandbox d41433b7b4244e4fcc60f6aa5115561ee4e19f2e8bb290e903e2d494d7edef84 with infra container: kube-system/kindnet-w6m47/POD" id=2b1b7428-f72f-478a-9f72-9ea2e344429b name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 27 19:42:58 newest-cni-677710 crio[518]: time="2025-10-27T19:42:58.017505226Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=ed145495-c88e-468f-bee4-33245b4933c9 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 19:42:58 newest-cni-677710 crio[518]: time="2025-10-27T19:42:58.017716621Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=509388e4-762b-4e95-911f-5fde88445a72 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 19:42:58 newest-cni-677710 crio[518]: time="2025-10-27T19:42:58.018822249Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=1bb7b900-1e0c-40c1-975e-1233b2cfae5d name=/runtime.v1.ImageService/ImageStatus
	Oct 27 19:42:58 newest-cni-677710 crio[518]: time="2025-10-27T19:42:58.019914719Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=d94ecd65-5b80-41ba-a57e-027914d6c2f5 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 19:42:58 newest-cni-677710 crio[518]: time="2025-10-27T19:42:58.02009217Z" level=info msg="Creating container: kube-system/kube-proxy-zg8ds/kube-proxy" id=4653076f-3af6-4556-ad23-983ee8e6e04c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 19:42:58 newest-cni-677710 crio[518]: time="2025-10-27T19:42:58.020267429Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:42:58 newest-cni-677710 crio[518]: time="2025-10-27T19:42:58.021071851Z" level=info msg="Creating container: kube-system/kindnet-w6m47/kindnet-cni" id=962d7982-8851-4012-9513-f02c2f082338 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 19:42:58 newest-cni-677710 crio[518]: time="2025-10-27T19:42:58.021214114Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:42:58 newest-cni-677710 crio[518]: time="2025-10-27T19:42:58.027109233Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:42:58 newest-cni-677710 crio[518]: time="2025-10-27T19:42:58.027776371Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:42:58 newest-cni-677710 crio[518]: time="2025-10-27T19:42:58.029167224Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:42:58 newest-cni-677710 crio[518]: time="2025-10-27T19:42:58.029845076Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:42:58 newest-cni-677710 crio[518]: time="2025-10-27T19:42:58.064819577Z" level=info msg="Created container 147a5a9fd3a0b246651bfeafac4f2aa71fa51126a3a0b5650e6b7fee3004479f: kube-system/kindnet-w6m47/kindnet-cni" id=962d7982-8851-4012-9513-f02c2f082338 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 19:42:58 newest-cni-677710 crio[518]: time="2025-10-27T19:42:58.065824938Z" level=info msg="Starting container: 147a5a9fd3a0b246651bfeafac4f2aa71fa51126a3a0b5650e6b7fee3004479f" id=de439795-0464-468f-a2fc-c5b1a8a7cab5 name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 19:42:58 newest-cni-677710 crio[518]: time="2025-10-27T19:42:58.069756736Z" level=info msg="Created container f2220f3ce73097ad9f9c01a9488587842fecbdfe180097b0e67b5c96c7b7cb68: kube-system/kube-proxy-zg8ds/kube-proxy" id=4653076f-3af6-4556-ad23-983ee8e6e04c name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 19:42:58 newest-cni-677710 crio[518]: time="2025-10-27T19:42:58.070633736Z" level=info msg="Starting container: f2220f3ce73097ad9f9c01a9488587842fecbdfe180097b0e67b5c96c7b7cb68" id=c05a4a3e-ec2a-4da8-a41a-b5ba9e87fe4c name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 19:42:58 newest-cni-677710 crio[518]: time="2025-10-27T19:42:58.071499819Z" level=info msg="Started container" PID=1035 containerID=147a5a9fd3a0b246651bfeafac4f2aa71fa51126a3a0b5650e6b7fee3004479f description=kube-system/kindnet-w6m47/kindnet-cni id=de439795-0464-468f-a2fc-c5b1a8a7cab5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d41433b7b4244e4fcc60f6aa5115561ee4e19f2e8bb290e903e2d494d7edef84
	Oct 27 19:42:58 newest-cni-677710 crio[518]: time="2025-10-27T19:42:58.074725104Z" level=info msg="Started container" PID=1034 containerID=f2220f3ce73097ad9f9c01a9488587842fecbdfe180097b0e67b5c96c7b7cb68 description=kube-system/kube-proxy-zg8ds/kube-proxy id=c05a4a3e-ec2a-4da8-a41a-b5ba9e87fe4c name=/runtime.v1.RuntimeService/StartContainer sandboxID=44a3d69e30e259441eb11a1733586eabfb1cda95d794a3e80b737cbf35c598e2
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	147a5a9fd3a0b       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   6 seconds ago       Running             kindnet-cni               1                   d41433b7b4244       kindnet-w6m47                               kube-system
	f2220f3ce7309       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   6 seconds ago       Running             kube-proxy                1                   44a3d69e30e25       kube-proxy-zg8ds                            kube-system
	44a615e284a4d       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   10 seconds ago      Running             kube-apiserver            1                   65704888bea48       kube-apiserver-newest-cni-677710            kube-system
	41f7a785712cd       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   10 seconds ago      Running             etcd                      1                   2dbe909ae20fc       etcd-newest-cni-677710                      kube-system
	73167847edee5       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   10 seconds ago      Running             kube-controller-manager   1                   11f985a2eee8b       kube-controller-manager-newest-cni-677710   kube-system
	cece14cf0a526       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   10 seconds ago      Running             kube-scheduler            1                   f781a9623a17b       kube-scheduler-newest-cni-677710            kube-system
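The table above is the runtime's own container listing; on a live node the equivalent view comes from crictl over the socket configured earlier. A sketch, assuming shell access to the node (e.g. via minikube ssh):

    # list all CRI-O containers, running and exited
    sudo crictl ps -a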
	
	
	==> describe nodes <==
	Name:               newest-cni-677710
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-677710
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a09c31caecd3126cc8337ca95e52380a56f5a0f
	                    minikube.k8s.io/name=newest-cni-677710
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_27T19_42_29_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Oct 2025 19:42:26 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-677710
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Oct 2025 19:42:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Oct 2025 19:42:57 +0000   Mon, 27 Oct 2025 19:42:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Oct 2025 19:42:57 +0000   Mon, 27 Oct 2025 19:42:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Oct 2025 19:42:57 +0000   Mon, 27 Oct 2025 19:42:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Mon, 27 Oct 2025 19:42:57 +0000   Mon, 27 Oct 2025 19:42:24 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    newest-cni-677710
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                0e5b8dde-ff0d-4017-8bb0-5ec4905459bd
	  Boot ID:                    811bd29c-e64e-4acc-9427-bab1f7caed93
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-677710                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         36s
	  kube-system                 kindnet-w6m47                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      30s
	  kube-system                 kube-apiserver-newest-cni-677710             250m (3%)     0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-controller-manager-newest-cni-677710    200m (2%)     0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-proxy-zg8ds                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-scheduler-newest-cni-677710             100m (1%)     0 (0%)      0 (0%)           0 (0%)         36s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 30s                kube-proxy       
	  Normal  Starting                 6s                 kube-proxy       
	  Normal  Starting                 36s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  36s                kubelet          Node newest-cni-677710 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    36s                kubelet          Node newest-cni-677710 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     36s                kubelet          Node newest-cni-677710 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           31s                node-controller  Node newest-cni-677710 event: Registered Node newest-cni-677710 in Controller
	  Normal  Starting                 11s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  11s (x8 over 11s)  kubelet          Node newest-cni-677710 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11s (x8 over 11s)  kubelet          Node newest-cni-677710 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11s (x8 over 11s)  kubelet          Node newest-cni-677710 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4s                 node-controller  Node newest-cni-677710 event: Registered Node newest-cni-677710 in Controller
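The Ready=False condition earlier in this dump blames a missing CNI config ("no CNI configuration file in /etc/cni/net.d/") at the instant of capture, just after the kindnet pod restarted. A sketch of checking that claim directly, assuming the profile is still running:

    # the kubelet flips the node to Ready once a CNI config lands here
    minikube -p newest-cni-677710 ssh "ls /etc/cni/net.d"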
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5a 23 52 43 9a ba 08 06
	[  +0.000398] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 12 50 95 0e df 53 08 06
	[Oct27 18:59] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +1.017295] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +1.023893] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +1.023882] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +1.023891] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +1.023851] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +2.047849] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +4.031592] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +8.319143] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[ +16.382183] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[Oct27 19:00] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
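
The repeated "martian source" entries above are the kernel flagging packets whose source address fails reverse-path checks (here 127.0.0.1 arriving on dev eth0). Whether the host logs these is governed by standard Linux sysctls; a small diagnostic sketch (sysctl names are standard kernel knobs, not taken from this run):

	# Hedged check: strict reverse-path filtering plus martian logging
	# (both set to 1) is what produces the dmesg lines above.
	sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.all.log_martians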
	
	
	==> etcd [41f7a785712cd25ed1b323d20fb0cdc81e6e1275a58915470cc7154bf52a2176] <==
	{"level":"warn","ts":"2025-10-27T19:42:56.123959Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:42:56.135353Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:42:56.157713Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51058","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:42:56.172698Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:42:56.187290Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:42:56.196406Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:42:56.205711Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:42:56.217762Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51142","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:42:56.222039Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:42:56.234217Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:42:56.252833Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:42:56.259522Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:42:56.270208Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51218","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:42:56.280609Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51240","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:42:56.297125Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:42:56.320060Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:42:56.326395Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:42:56.335589Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:42:56.356968Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51344","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:42:56.369742Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:42:56.385637Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:42:56.390946Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:42:56.399376Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:42:56.408713Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:42:56.491004Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51452","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 19:43:05 up  2:25,  0 user,  load average: 7.36, 4.25, 2.54
	Linux newest-cni-677710 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [147a5a9fd3a0b246651bfeafac4f2aa71fa51126a3a0b5650e6b7fee3004479f] <==
	I1027 19:42:58.248668       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1027 19:42:58.249107       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1027 19:42:58.249276       1 main.go:148] setting mtu 1500 for CNI 
	I1027 19:42:58.249294       1 main.go:178] kindnetd IP family: "ipv4"
	I1027 19:42:58.249313       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-27T19:42:58Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1027 19:42:58.556744       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1027 19:42:58.556773       1 controller.go:381] "Waiting for informer caches to sync"
	I1027 19:42:58.556784       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1027 19:42:58.643471       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1027 19:42:58.856896       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1027 19:42:58.856964       1 metrics.go:72] Registering metrics
	I1027 19:42:58.857082       1 controller.go:711] "Syncing nftables rules"
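
The kindnet line "nri plugin exited: failed to connect to NRI service" only means the container runtime exposes no NRI socket; kindnet carries on without it, as the subsequent "Caches are synced" lines show. A one-line check for the socket (diagnostic sketch):

	# If this prints "absent", it matches the "nri plugin exited" message above.
	test -S /var/run/nri/nri.sock && echo "NRI socket present" || echo "NRI socket absent"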
	
	
	==> kube-apiserver [44a615e284a4d0c1a9cd591789628eef5abb12b0322cda33a9c30c087dbfcc6c] <==
	I1027 19:42:57.100584       1 aggregator.go:171] initial CRD sync complete...
	I1027 19:42:57.100595       1 autoregister_controller.go:144] Starting autoregister controller
	I1027 19:42:57.100603       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1027 19:42:57.100610       1 cache.go:39] Caches are synced for autoregister controller
	I1027 19:42:57.101066       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1027 19:42:57.101287       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1027 19:42:57.102214       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1027 19:42:57.104598       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1027 19:42:57.107797       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1027 19:42:57.107824       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1027 19:42:57.129338       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1027 19:42:57.132782       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1027 19:42:57.133370       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1027 19:42:57.133432       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1027 19:42:57.503271       1 controller.go:667] quota admission added evaluator for: namespaces
	I1027 19:42:57.537249       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1027 19:42:57.560385       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1027 19:42:57.570708       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1027 19:42:57.578224       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1027 19:42:57.618927       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.110.109.175"}
	I1027 19:42:57.630683       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.139.166"}
	I1027 19:42:57.999646       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1027 19:43:00.767437       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1027 19:43:00.817401       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1027 19:43:00.917050       1 controller.go:667] quota admission added evaluator for: endpoints
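
The two "allocated clusterIPs" lines above can be cross-checked against the live Service objects; a hedged example (kubectl's custom-columns output is standard, the context name is this test's profile):

	# Confirm the dashboard services carry the clusterIPs the apiserver allocated.
	kubectl --context newest-cni-677710 -n kubernetes-dashboard get svc \
	  kubernetes-dashboard dashboard-metrics-scraper \
	  -o custom-columns=NAME:.metadata.name,CLUSTER-IP:.spec.clusterIP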
	
	
	==> kube-controller-manager [73167847edee577234af246f7849876030d523eb4d523a4c2b5bbd0694b79ad5] <==
	I1027 19:43:00.390933       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1027 19:43:00.390963       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1027 19:43:00.392061       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1027 19:43:00.393406       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1027 19:43:00.396422       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1027 19:43:00.399107       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1027 19:43:00.413110       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 19:43:00.413150       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1027 19:43:00.413159       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1027 19:43:00.413807       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1027 19:43:00.413907       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1027 19:43:00.414187       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1027 19:43:00.414191       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1027 19:43:00.414626       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1027 19:43:00.415036       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1027 19:43:00.415059       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1027 19:43:00.415077       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1027 19:43:00.421433       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 19:43:00.422597       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 19:43:00.426467       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1027 19:43:00.429631       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1027 19:43:00.433334       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1027 19:43:00.436631       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1027 19:43:00.441913       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1027 19:43:00.492407       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [f2220f3ce73097ad9f9c01a9488587842fecbdfe180097b0e67b5c96c7b7cb68] <==
	I1027 19:42:58.122634       1 server_linux.go:53] "Using iptables proxy"
	I1027 19:42:58.200564       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1027 19:42:58.301479       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1027 19:42:58.301522       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1027 19:42:58.301636       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1027 19:42:58.324903       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1027 19:42:58.324973       1 server_linux.go:132] "Using iptables Proxier"
	I1027 19:42:58.331152       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1027 19:42:58.331655       1 server.go:527] "Version info" version="v1.34.1"
	I1027 19:42:58.331702       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 19:42:58.333115       1 config.go:200] "Starting service config controller"
	I1027 19:42:58.333143       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1027 19:42:58.333199       1 config.go:403] "Starting serviceCIDR config controller"
	I1027 19:42:58.333212       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1027 19:42:58.333240       1 config.go:106] "Starting endpoint slice config controller"
	I1027 19:42:58.333245       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1027 19:42:58.333312       1 config.go:309] "Starting node config controller"
	I1027 19:42:58.333323       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1027 19:42:58.433286       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1027 19:42:58.433313       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1027 19:42:58.433402       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1027 19:42:58.433568       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [cece14cf0a526d107de8be5cc2a837da1df540d883b98e8589946416af07067b] <==
	I1027 19:42:55.870990       1 serving.go:386] Generated self-signed cert in-memory
	I1027 19:42:57.069483       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1027 19:42:57.069517       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 19:42:57.077276       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1027 19:42:57.077534       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1027 19:42:57.078129       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1027 19:42:57.077417       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 19:42:57.078249       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1027 19:42:57.078255       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 19:42:57.077454       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1027 19:42:57.078448       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1027 19:42:57.178833       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1027 19:42:57.178924       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1027 19:42:57.184912       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 27 19:42:55 newest-cni-677710 kubelet[661]: E1027 19:42:55.814194     661 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-677710\" not found" node="newest-cni-677710"
	Oct 27 19:42:55 newest-cni-677710 kubelet[661]: E1027 19:42:55.816241     661 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-677710\" not found" node="newest-cni-677710"
	Oct 27 19:42:55 newest-cni-677710 kubelet[661]: E1027 19:42:55.817112     661 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-677710\" not found" node="newest-cni-677710"
	Oct 27 19:42:57 newest-cni-677710 kubelet[661]: I1027 19:42:57.102331     661 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-677710"
	Oct 27 19:42:57 newest-cni-677710 kubelet[661]: I1027 19:42:57.150621     661 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-677710"
	Oct 27 19:42:57 newest-cni-677710 kubelet[661]: I1027 19:42:57.150739     661 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-677710"
	Oct 27 19:42:57 newest-cni-677710 kubelet[661]: I1027 19:42:57.150779     661 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 27 19:42:57 newest-cni-677710 kubelet[661]: I1027 19:42:57.151675     661 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 27 19:42:57 newest-cni-677710 kubelet[661]: E1027 19:42:57.161106     661 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-677710\" already exists" pod="kube-system/kube-scheduler-newest-cni-677710"
	Oct 27 19:42:57 newest-cni-677710 kubelet[661]: I1027 19:42:57.161510     661 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-677710"
	Oct 27 19:42:57 newest-cni-677710 kubelet[661]: E1027 19:42:57.180626     661 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-677710\" already exists" pod="kube-system/etcd-newest-cni-677710"
	Oct 27 19:42:57 newest-cni-677710 kubelet[661]: I1027 19:42:57.180674     661 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-677710"
	Oct 27 19:42:57 newest-cni-677710 kubelet[661]: E1027 19:42:57.192422     661 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-677710\" already exists" pod="kube-system/kube-apiserver-newest-cni-677710"
	Oct 27 19:42:57 newest-cni-677710 kubelet[661]: I1027 19:42:57.192645     661 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-677710"
	Oct 27 19:42:57 newest-cni-677710 kubelet[661]: E1027 19:42:57.200794     661 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-677710\" already exists" pod="kube-system/kube-controller-manager-newest-cni-677710"
	Oct 27 19:42:57 newest-cni-677710 kubelet[661]: I1027 19:42:57.699111     661 apiserver.go:52] "Watching apiserver"
	Oct 27 19:42:57 newest-cni-677710 kubelet[661]: I1027 19:42:57.801619     661 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 27 19:42:57 newest-cni-677710 kubelet[661]: I1027 19:42:57.827595     661 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e1b6e2a6-b271-4a01-8cfe-c10f73bd2f4d-xtables-lock\") pod \"kindnet-w6m47\" (UID: \"e1b6e2a6-b271-4a01-8cfe-c10f73bd2f4d\") " pod="kube-system/kindnet-w6m47"
	Oct 27 19:42:57 newest-cni-677710 kubelet[661]: I1027 19:42:57.827674     661 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/89658cd8-0d1d-4a33-a913-add5cbd50df0-lib-modules\") pod \"kube-proxy-zg8ds\" (UID: \"89658cd8-0d1d-4a33-a913-add5cbd50df0\") " pod="kube-system/kube-proxy-zg8ds"
	Oct 27 19:42:57 newest-cni-677710 kubelet[661]: I1027 19:42:57.827723     661 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/89658cd8-0d1d-4a33-a913-add5cbd50df0-xtables-lock\") pod \"kube-proxy-zg8ds\" (UID: \"89658cd8-0d1d-4a33-a913-add5cbd50df0\") " pod="kube-system/kube-proxy-zg8ds"
	Oct 27 19:42:57 newest-cni-677710 kubelet[661]: I1027 19:42:57.827747     661 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/e1b6e2a6-b271-4a01-8cfe-c10f73bd2f4d-cni-cfg\") pod \"kindnet-w6m47\" (UID: \"e1b6e2a6-b271-4a01-8cfe-c10f73bd2f4d\") " pod="kube-system/kindnet-w6m47"
	Oct 27 19:42:57 newest-cni-677710 kubelet[661]: I1027 19:42:57.827777     661 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e1b6e2a6-b271-4a01-8cfe-c10f73bd2f4d-lib-modules\") pod \"kindnet-w6m47\" (UID: \"e1b6e2a6-b271-4a01-8cfe-c10f73bd2f4d\") " pod="kube-system/kindnet-w6m47"
	Oct 27 19:42:59 newest-cni-677710 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 27 19:42:59 newest-cni-677710 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 27 19:42:59 newest-cni-677710 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-677710 -n newest-cni-677710
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-677710 -n newest-cni-677710: exit status 2 (388.107067ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-677710 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-rv72d storage-provisioner dashboard-metrics-scraper-6ffb444bf9-rh7rf kubernetes-dashboard-855c9754f9-lflsk
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-677710 describe pod coredns-66bc5c9577-rv72d storage-provisioner dashboard-metrics-scraper-6ffb444bf9-rh7rf kubernetes-dashboard-855c9754f9-lflsk
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-677710 describe pod coredns-66bc5c9577-rv72d storage-provisioner dashboard-metrics-scraper-6ffb444bf9-rh7rf kubernetes-dashboard-855c9754f9-lflsk: exit status 1 (70.895747ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-rv72d" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-rh7rf" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-lflsk" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-677710 describe pod coredns-66bc5c9577-rv72d storage-provisioner dashboard-metrics-scraper-6ffb444bf9-rh7rf kubernetes-dashboard-855c9754f9-lflsk: exit status 1
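
The NotFound errors above are likely an artifact of how the post-mortem helper queries: the non-running pods were gathered with -A (all namespaces), but kubectl describe pod is then run without a namespace, so kubectl searches default and finds nothing. A hedged re-run targeting the probable namespaces (inferred from the pod name prefixes, not stated in this report):

	# coredns and storage-provisioner normally live in kube-system ...
	kubectl --context newest-cni-677710 -n kube-system describe pod \
	  coredns-66bc5c9577-rv72d storage-provisioner
	# ... and the dashboard pods in kubernetes-dashboard.
	kubectl --context newest-cni-677710 -n kubernetes-dashboard describe pod \
	  dashboard-metrics-scraper-6ffb444bf9-rh7rf kubernetes-dashboard-855c9754f9-lflsk
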
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (6.34s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (7.64s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-813397 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p default-k8s-diff-port-813397 --alsologtostderr -v=1: exit status 80 (2.60911624s)

-- stdout --
	* Pausing node default-k8s-diff-port-813397 ... 

-- /stdout --
** stderr ** 
	I1027 19:43:35.710757  636282 out.go:360] Setting OutFile to fd 1 ...
	I1027 19:43:35.710891  636282 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:43:35.710903  636282 out.go:374] Setting ErrFile to fd 2...
	I1027 19:43:35.710907  636282 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:43:35.711155  636282 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-352833/.minikube/bin
	I1027 19:43:35.711513  636282 out.go:368] Setting JSON to false
	I1027 19:43:35.711554  636282 mustload.go:65] Loading cluster: default-k8s-diff-port-813397
	I1027 19:43:35.712749  636282 config.go:182] Loaded profile config "default-k8s-diff-port-813397": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:43:35.714160  636282 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-813397 --format={{.State.Status}}
	I1027 19:43:35.734415  636282 host.go:66] Checking if "default-k8s-diff-port-813397" exists ...
	I1027 19:43:35.734689  636282 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 19:43:35.799444  636282 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:82 OomKillDisable:false NGoroutines:87 SystemTime:2025-10-27 19:43:35.786951021 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 19:43:35.800215  636282 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-813397 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1027 19:43:35.802227  636282 out.go:179] * Pausing node default-k8s-diff-port-813397 ... 
	I1027 19:43:35.803745  636282 host.go:66] Checking if "default-k8s-diff-port-813397" exists ...
	I1027 19:43:35.804025  636282 ssh_runner.go:195] Run: systemctl --version
	I1027 19:43:35.804068  636282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-813397
	I1027 19:43:35.823710  636282 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33465 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/default-k8s-diff-port-813397/id_rsa Username:docker}
	I1027 19:43:35.929552  636282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 19:43:35.953106  636282 pause.go:52] kubelet running: true
	I1027 19:43:35.953205  636282 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1027 19:43:36.130762  636282 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1027 19:43:36.130871  636282 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1027 19:43:36.209110  636282 cri.go:89] found id: "aac23a7766ba54465e8372369b0736fdbf5d9242a8ef9f2ac26eedc0aad943f4"
	I1027 19:43:36.209161  636282 cri.go:89] found id: "6352f76b57f5e0e0deff0e7dcd3aff94c185f37edfe63b6b2f233017bcc7468d"
	I1027 19:43:36.209171  636282 cri.go:89] found id: "7c615af71a1328ed761f08f1b576963f0b4af669a2d38d4c04dcbc67befffac1"
	I1027 19:43:36.209177  636282 cri.go:89] found id: "a99b69df126644d4ba34b740a14a250d74ff8e1c6a80b438411dfe1669fada08"
	I1027 19:43:36.209181  636282 cri.go:89] found id: "2ad23fa6ba06688254490ad382551b5850d3c01b455056ac3570cd76e67f3b13"
	I1027 19:43:36.209186  636282 cri.go:89] found id: "d6d42a747447887cf7cfddbb910c2d92aff06ed6741847fd2f5efa19ba0e6533"
	I1027 19:43:36.209190  636282 cri.go:89] found id: "0ef2559af1f1081ff5b055e5ba9d447a5c678b0a1ce12c6cb5f29cf71d5078e4"
	I1027 19:43:36.209194  636282 cri.go:89] found id: "9780797653aab1b99e5b8a7975532cff7b3a72af97330b8012e4e50b4dadbfde"
	I1027 19:43:36.209198  636282 cri.go:89] found id: "71bc91522e0a38092dcf74ebe27051d01aa77c65b02d1f845740c5a57c74c29b"
	I1027 19:43:36.209208  636282 cri.go:89] found id: "73ec8a85e99a5706793ba06e7c17f5889883af7a6fba00f94e2367ec548fda2f"
	I1027 19:43:36.209212  636282 cri.go:89] found id: "e3cb093a1aa0f1c554cd5ee66a4a34809e2ef72e9a8a48c1a6c6e48763472af4"
	I1027 19:43:36.209216  636282 cri.go:89] found id: ""
	I1027 19:43:36.209272  636282 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 19:43:36.223018  636282 retry.go:31] will retry after 200.183392ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T19:43:36Z" level=error msg="open /run/runc: no such file or directory"
	I1027 19:43:36.423459  636282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 19:43:36.437873  636282 pause.go:52] kubelet running: false
	I1027 19:43:36.437930  636282 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1027 19:43:36.616261  636282 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1027 19:43:36.616367  636282 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1027 19:43:36.702057  636282 cri.go:89] found id: "aac23a7766ba54465e8372369b0736fdbf5d9242a8ef9f2ac26eedc0aad943f4"
	I1027 19:43:36.702086  636282 cri.go:89] found id: "6352f76b57f5e0e0deff0e7dcd3aff94c185f37edfe63b6b2f233017bcc7468d"
	I1027 19:43:36.702092  636282 cri.go:89] found id: "7c615af71a1328ed761f08f1b576963f0b4af669a2d38d4c04dcbc67befffac1"
	I1027 19:43:36.702096  636282 cri.go:89] found id: "a99b69df126644d4ba34b740a14a250d74ff8e1c6a80b438411dfe1669fada08"
	I1027 19:43:36.702100  636282 cri.go:89] found id: "2ad23fa6ba06688254490ad382551b5850d3c01b455056ac3570cd76e67f3b13"
	I1027 19:43:36.702104  636282 cri.go:89] found id: "d6d42a747447887cf7cfddbb910c2d92aff06ed6741847fd2f5efa19ba0e6533"
	I1027 19:43:36.702108  636282 cri.go:89] found id: "0ef2559af1f1081ff5b055e5ba9d447a5c678b0a1ce12c6cb5f29cf71d5078e4"
	I1027 19:43:36.702111  636282 cri.go:89] found id: "9780797653aab1b99e5b8a7975532cff7b3a72af97330b8012e4e50b4dadbfde"
	I1027 19:43:36.702128  636282 cri.go:89] found id: "71bc91522e0a38092dcf74ebe27051d01aa77c65b02d1f845740c5a57c74c29b"
	I1027 19:43:36.702171  636282 cri.go:89] found id: "73ec8a85e99a5706793ba06e7c17f5889883af7a6fba00f94e2367ec548fda2f"
	I1027 19:43:36.702180  636282 cri.go:89] found id: "e3cb093a1aa0f1c554cd5ee66a4a34809e2ef72e9a8a48c1a6c6e48763472af4"
	I1027 19:43:36.702184  636282 cri.go:89] found id: ""
	I1027 19:43:36.702235  636282 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 19:43:36.717628  636282 retry.go:31] will retry after 362.702506ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T19:43:36Z" level=error msg="open /run/runc: no such file or directory"
	I1027 19:43:37.081358  636282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 19:43:37.097129  636282 pause.go:52] kubelet running: false
	I1027 19:43:37.097235  636282 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1027 19:43:37.259026  636282 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1027 19:43:37.259120  636282 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1027 19:43:37.335072  636282 cri.go:89] found id: "aac23a7766ba54465e8372369b0736fdbf5d9242a8ef9f2ac26eedc0aad943f4"
	I1027 19:43:37.335106  636282 cri.go:89] found id: "6352f76b57f5e0e0deff0e7dcd3aff94c185f37edfe63b6b2f233017bcc7468d"
	I1027 19:43:37.335112  636282 cri.go:89] found id: "7c615af71a1328ed761f08f1b576963f0b4af669a2d38d4c04dcbc67befffac1"
	I1027 19:43:37.335117  636282 cri.go:89] found id: "a99b69df126644d4ba34b740a14a250d74ff8e1c6a80b438411dfe1669fada08"
	I1027 19:43:37.335121  636282 cri.go:89] found id: "2ad23fa6ba06688254490ad382551b5850d3c01b455056ac3570cd76e67f3b13"
	I1027 19:43:37.335172  636282 cri.go:89] found id: "d6d42a747447887cf7cfddbb910c2d92aff06ed6741847fd2f5efa19ba0e6533"
	I1027 19:43:37.335179  636282 cri.go:89] found id: "0ef2559af1f1081ff5b055e5ba9d447a5c678b0a1ce12c6cb5f29cf71d5078e4"
	I1027 19:43:37.335184  636282 cri.go:89] found id: "9780797653aab1b99e5b8a7975532cff7b3a72af97330b8012e4e50b4dadbfde"
	I1027 19:43:37.335188  636282 cri.go:89] found id: "71bc91522e0a38092dcf74ebe27051d01aa77c65b02d1f845740c5a57c74c29b"
	I1027 19:43:37.335199  636282 cri.go:89] found id: "73ec8a85e99a5706793ba06e7c17f5889883af7a6fba00f94e2367ec548fda2f"
	I1027 19:43:37.335205  636282 cri.go:89] found id: "e3cb093a1aa0f1c554cd5ee66a4a34809e2ef72e9a8a48c1a6c6e48763472af4"
	I1027 19:43:37.335208  636282 cri.go:89] found id: ""
	I1027 19:43:37.335299  636282 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 19:43:37.351208  636282 retry.go:31] will retry after 557.232627ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T19:43:37Z" level=error msg="open /run/runc: no such file or directory"
	I1027 19:43:37.909533  636282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 19:43:37.926538  636282 pause.go:52] kubelet running: false
	I1027 19:43:37.926607  636282 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1027 19:43:38.125508  636282 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1027 19:43:38.125605  636282 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1027 19:43:38.214855  636282 cri.go:89] found id: "aac23a7766ba54465e8372369b0736fdbf5d9242a8ef9f2ac26eedc0aad943f4"
	I1027 19:43:38.214889  636282 cri.go:89] found id: "6352f76b57f5e0e0deff0e7dcd3aff94c185f37edfe63b6b2f233017bcc7468d"
	I1027 19:43:38.214894  636282 cri.go:89] found id: "7c615af71a1328ed761f08f1b576963f0b4af669a2d38d4c04dcbc67befffac1"
	I1027 19:43:38.214899  636282 cri.go:89] found id: "a99b69df126644d4ba34b740a14a250d74ff8e1c6a80b438411dfe1669fada08"
	I1027 19:43:38.214904  636282 cri.go:89] found id: "2ad23fa6ba06688254490ad382551b5850d3c01b455056ac3570cd76e67f3b13"
	I1027 19:43:38.214911  636282 cri.go:89] found id: "d6d42a747447887cf7cfddbb910c2d92aff06ed6741847fd2f5efa19ba0e6533"
	I1027 19:43:38.214915  636282 cri.go:89] found id: "0ef2559af1f1081ff5b055e5ba9d447a5c678b0a1ce12c6cb5f29cf71d5078e4"
	I1027 19:43:38.214918  636282 cri.go:89] found id: "9780797653aab1b99e5b8a7975532cff7b3a72af97330b8012e4e50b4dadbfde"
	I1027 19:43:38.214922  636282 cri.go:89] found id: "71bc91522e0a38092dcf74ebe27051d01aa77c65b02d1f845740c5a57c74c29b"
	I1027 19:43:38.214937  636282 cri.go:89] found id: "73ec8a85e99a5706793ba06e7c17f5889883af7a6fba00f94e2367ec548fda2f"
	I1027 19:43:38.214941  636282 cri.go:89] found id: "e3cb093a1aa0f1c554cd5ee66a4a34809e2ef72e9a8a48c1a6c6e48763472af4"
	I1027 19:43:38.214945  636282 cri.go:89] found id: ""
	I1027 19:43:38.215001  636282 ssh_runner.go:195] Run: sudo runc list -f json
	I1027 19:43:38.232689  636282 out.go:203] 
	W1027 19:43:38.234265  636282 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T19:43:38Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T19:43:38Z" level=error msg="open /run/runc: no such file or directory"
	
	W1027 19:43:38.234287  636282 out.go:285] * 
	* 
	W1027 19:43:38.240017  636282 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1027 19:43:38.244743  636282 out.go:203] 

** /stderr **
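
The stderr trace above lays out the pause sequence: disable the kubelet, list CRI containers with crictl, then ask runc for running containers; the runc step fails because /run/runc does not exist on this cri-o node, and after three backoff retries minikube exits with GUEST_PAUSE. The failing probe is easy to reproduce by hand, and runc's global --root flag selects an alternative state directory if cri-o keeps its runc state elsewhere (the directory below is hypothetical, shown only to illustrate the flag):

	# The probe exactly as the trace ran it (fails on this node):
	sudo runc list -f json
	# runc accepts a global --root for its state directory; a hypothetical
	# cri-o state root, purely for illustration:
	sudo runc --root /run/crio/runc list -f json
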
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p default-k8s-diff-port-813397 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-813397
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-813397:

-- stdout --
	[
	    {
	        "Id": "5e2892d7a5b76c311108e309f0c5e79b46c633c41881cd99a81040580e9d6de8",
	        "Created": "2025-10-27T19:41:28.530867062Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 616539,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-27T19:42:33.304221255Z",
	            "FinishedAt": "2025-10-27T19:42:32.338526273Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/5e2892d7a5b76c311108e309f0c5e79b46c633c41881cd99a81040580e9d6de8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5e2892d7a5b76c311108e309f0c5e79b46c633c41881cd99a81040580e9d6de8/hostname",
	        "HostsPath": "/var/lib/docker/containers/5e2892d7a5b76c311108e309f0c5e79b46c633c41881cd99a81040580e9d6de8/hosts",
	        "LogPath": "/var/lib/docker/containers/5e2892d7a5b76c311108e309f0c5e79b46c633c41881cd99a81040580e9d6de8/5e2892d7a5b76c311108e309f0c5e79b46c633c41881cd99a81040580e9d6de8-json.log",
	        "Name": "/default-k8s-diff-port-813397",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-813397:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-813397",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5e2892d7a5b76c311108e309f0c5e79b46c633c41881cd99a81040580e9d6de8",
	                "LowerDir": "/var/lib/docker/overlay2/9c29b2ca181e37783386969900349b6f8ee825583f284e5f7ca2046e8e79ccce-init/diff:/var/lib/docker/overlay2/71b61ec94610a35f2d924dec358052d4c154c36b3fe219802f60246ca2dc7f45/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9c29b2ca181e37783386969900349b6f8ee825583f284e5f7ca2046e8e79ccce/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9c29b2ca181e37783386969900349b6f8ee825583f284e5f7ca2046e8e79ccce/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9c29b2ca181e37783386969900349b6f8ee825583f284e5f7ca2046e8e79ccce/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-813397",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-813397/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-813397",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-813397",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-813397",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "567b0cbd081b2e0d2b2d47ab8f135996ad55d4b1699c1507ee06fc68e4766c6d",
	            "SandboxKey": "/var/run/docker/netns/567b0cbd081b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33465"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33466"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33469"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33467"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33468"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-813397": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "0a:9a:ad:0c:6e:6e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e5c60f1f40aedba9b9761254cb4dc4ea11830e317d7c1ef05baf77a39a5733c7",
	                    "EndpointID": "9830f48004fd4be26a7e2a151d943b78fa6929c3fc664fdeb23e9dca31037e85",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-813397",
	                        "5e2892d7a5b7"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
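The Ports map in the inspect output above records how the container's service ports (22, 2376, 5000, 8444, 32443) are published to ephemeral host ports on 127.0.0.1. A single mapping can be pulled out of the same data with a Go template over docker inspect, in the style the harness itself uses for the SSH port later in this log (a minimal sketch; the profile name is the container under test here):

	docker inspect -f '{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}' default-k8s-diff-port-813397
	# prints the host port bound to apiserver port 8444 inside the container; 33468 per the output above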
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-813397 -n default-k8s-diff-port-813397
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-813397 -n default-k8s-diff-port-813397: exit status 2 (380.604888ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
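A non-zero exit from "minikube status" while the Host field still reads Running suggests one of the other components did not report the expected state, which is why the helper flags it as "may be ok" and continues into post-mortem collection. The --format template used above can be widened to show the individual component fields (a sketch; these are the field names the status command's Go-template support exposes):

	out/minikube-linux-amd64 status -p default-k8s-diff-port-813397 --format '{{.Host}} {{.Kubelet}} {{.APIServer}} {{.Kubeconfig}}'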
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-813397 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-813397 logs -n 25: (1.654159488s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p newest-cni-677710 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-677710            │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │ 27 Oct 25 19:42 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-813397 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-813397 │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-813397 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-813397 │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │ 27 Oct 25 19:42 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-813397 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-813397 │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │ 27 Oct 25 19:42 UTC │
	│ start   │ -p default-k8s-diff-port-813397 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-813397 │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │ 27 Oct 25 19:43 UTC │
	│ image   │ no-preload-095885 image list --format=json                                                                                                                                                                                                    │ no-preload-095885            │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │ 27 Oct 25 19:42 UTC │
	│ pause   │ -p no-preload-095885 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-095885            │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-677710 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-677710            │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │                     │
	│ stop    │ -p newest-cni-677710 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-677710            │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │ 27 Oct 25 19:42 UTC │
	│ delete  │ -p no-preload-095885                                                                                                                                                                                                                          │ no-preload-095885            │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │ 27 Oct 25 19:42 UTC │
	│ delete  │ -p no-preload-095885                                                                                                                                                                                                                          │ no-preload-095885            │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │ 27 Oct 25 19:42 UTC │
	│ start   │ -p auto-387383 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-387383                  │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │                     │
	│ addons  │ enable dashboard -p newest-cni-677710 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-677710            │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │ 27 Oct 25 19:42 UTC │
	│ start   │ -p newest-cni-677710 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-677710            │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │ 27 Oct 25 19:42 UTC │
	│ start   │ -p kubernetes-upgrade-360986 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-360986    │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │                     │
	│ image   │ newest-cni-677710 image list --format=json                                                                                                                                                                                                    │ newest-cni-677710            │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │ 27 Oct 25 19:42 UTC │
	│ start   │ -p kubernetes-upgrade-360986 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-360986    │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │ 27 Oct 25 19:43 UTC │
	│ pause   │ -p newest-cni-677710 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-677710            │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │                     │
	│ delete  │ -p kubernetes-upgrade-360986                                                                                                                                                                                                                  │ kubernetes-upgrade-360986    │ jenkins │ v1.37.0 │ 27 Oct 25 19:43 UTC │ 27 Oct 25 19:43 UTC │
	│ delete  │ -p newest-cni-677710                                                                                                                                                                                                                          │ newest-cni-677710            │ jenkins │ v1.37.0 │ 27 Oct 25 19:43 UTC │ 27 Oct 25 19:43 UTC │
	│ start   │ -p kindnet-387383 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio                                                                                                      │ kindnet-387383               │ jenkins │ v1.37.0 │ 27 Oct 25 19:43 UTC │                     │
	│ delete  │ -p newest-cni-677710                                                                                                                                                                                                                          │ newest-cni-677710            │ jenkins │ v1.37.0 │ 27 Oct 25 19:43 UTC │ 27 Oct 25 19:43 UTC │
	│ start   │ -p calico-387383 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio                                                                                                        │ calico-387383                │ jenkins │ v1.37.0 │ 27 Oct 25 19:43 UTC │                     │
	│ image   │ default-k8s-diff-port-813397 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-813397 │ jenkins │ v1.37.0 │ 27 Oct 25 19:43 UTC │ 27 Oct 25 19:43 UTC │
	│ pause   │ -p default-k8s-diff-port-813397 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-813397 │ jenkins │ v1.37.0 │ 27 Oct 25 19:43 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 19:43:09
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 19:43:09.098655  631152 out.go:360] Setting OutFile to fd 1 ...
	I1027 19:43:09.099062  631152 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:43:09.099083  631152 out.go:374] Setting ErrFile to fd 2...
	I1027 19:43:09.099092  631152 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:43:09.099941  631152 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-352833/.minikube/bin
	I1027 19:43:09.100698  631152 out.go:368] Setting JSON to false
	I1027 19:43:09.102496  631152 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":8738,"bootTime":1761585451,"procs":412,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1027 19:43:09.102649  631152 start.go:141] virtualization: kvm guest
	I1027 19:43:09.105368  631152 out.go:179] * [calico-387383] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1027 19:43:09.106947  631152 out.go:179]   - MINIKUBE_LOCATION=21801
	I1027 19:43:09.106951  631152 notify.go:220] Checking for updates...
	I1027 19:43:09.108399  631152 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 19:43:09.109941  631152 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21801-352833/kubeconfig
	I1027 19:43:09.111708  631152 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21801-352833/.minikube
	I1027 19:43:09.113045  631152 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1027 19:43:09.114409  631152 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 19:43:09.116649  631152 config.go:182] Loaded profile config "auto-387383": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:43:09.116917  631152 config.go:182] Loaded profile config "default-k8s-diff-port-813397": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:43:09.117178  631152 config.go:182] Loaded profile config "kindnet-387383": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:43:09.117406  631152 driver.go:421] Setting default libvirt URI to qemu:///system
	I1027 19:43:09.144130  631152 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1027 19:43:09.144274  631152 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 19:43:09.219804  631152 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:57 OomKillDisable:false NGoroutines:69 SystemTime:2025-10-27 19:43:09.208452606 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 19:43:09.219968  631152 docker.go:318] overlay module found
	I1027 19:43:09.224829  631152 out.go:179] * Using the docker driver based on user configuration
	I1027 19:43:09.229063  631152 start.go:305] selected driver: docker
	I1027 19:43:09.229087  631152 start.go:925] validating driver "docker" against <nil>
	I1027 19:43:09.229099  631152 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 19:43:09.229761  631152 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 19:43:09.297708  631152 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:61 OomKillDisable:false NGoroutines:87 SystemTime:2025-10-27 19:43:09.284768991 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 19:43:09.297923  631152 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1027 19:43:09.298177  631152 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 19:43:09.300298  631152 out.go:179] * Using Docker driver with root privileges
	I1027 19:43:09.301552  631152 cni.go:84] Creating CNI manager for "calico"
	I1027 19:43:09.301572  631152 start_flags.go:336] Found "Calico" CNI - setting NetworkPlugin=cni
	I1027 19:43:09.301666  631152 start.go:349] cluster config:
	{Name:calico-387383 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-387383 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 19:43:09.303049  631152 out.go:179] * Starting "calico-387383" primary control-plane node in "calico-387383" cluster
	I1027 19:43:09.304322  631152 cache.go:123] Beginning downloading kic base image for docker with crio
	I1027 19:43:09.305655  631152 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1027 19:43:09.307008  631152 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 19:43:09.307040  631152 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1027 19:43:09.307072  631152 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21801-352833/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1027 19:43:09.307087  631152 cache.go:58] Caching tarball of preloaded images
	I1027 19:43:09.307227  631152 preload.go:233] Found /home/jenkins/minikube-integration/21801-352833/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1027 19:43:09.307243  631152 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1027 19:43:09.307348  631152 profile.go:143] Saving config to /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/calico-387383/config.json ...
	I1027 19:43:09.307379  631152 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/calico-387383/config.json: {Name:mk43f6d9384d0a21bf6f72b0ca8f08435e9c8cc2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:43:09.330570  631152 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1027 19:43:09.330592  631152 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1027 19:43:09.330613  631152 cache.go:232] Successfully downloaded all kic artifacts
	I1027 19:43:09.330651  631152 start.go:360] acquireMachinesLock for calico-387383: {Name:mka12b625ec8304f9dc2737a01f90cd5d174feff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 19:43:09.330765  631152 start.go:364] duration metric: took 95.18µs to acquireMachinesLock for "calico-387383"
	I1027 19:43:09.330796  631152 start.go:93] Provisioning new machine with config: &{Name:calico-387383 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-387383 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 19:43:09.330874  631152 start.go:125] createHost starting for "" (driver="docker")
	W1027 19:43:08.920465  616341 pod_ready.go:104] pod "coredns-66bc5c9577-d2trp" is not "Ready", error: <nil>
	W1027 19:43:11.417729  616341 pod_ready.go:104] pod "coredns-66bc5c9577-d2trp" is not "Ready", error: <nil>
	I1027 19:43:08.974203  622136 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1027 19:43:08.974311  622136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:43:08.974368  622136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-387383 minikube.k8s.io/updated_at=2025_10_27T19_43_08_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6a09c31caecd3126cc8337ca95e52380a56f5a0f minikube.k8s.io/name=auto-387383 minikube.k8s.io/primary=true
	I1027 19:43:09.080630  622136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:43:09.080651  622136 ops.go:34] apiserver oom_adj: -16
	I1027 19:43:09.580959  622136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:43:10.081633  622136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:43:10.581241  622136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:43:11.081076  622136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:43:11.580724  622136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:43:12.081063  622136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:43:12.580913  622136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:43:13.081600  622136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:43:13.188668  622136 kubeadm.go:1113] duration metric: took 4.214426883s to wait for elevateKubeSystemPrivileges
	I1027 19:43:13.188707  622136 kubeadm.go:402] duration metric: took 16.879812759s to StartCluster
	I1027 19:43:13.188734  622136 settings.go:142] acquiring lock: {Name:mk8304c2106bf310642e0949fc0266ccb50f2f95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:43:13.188808  622136 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21801-352833/kubeconfig
	I1027 19:43:13.190211  622136 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-352833/kubeconfig: {Name:mk24cbe512a6907c874f3fb7a85390a8f9fd2b31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:43:13.265755  622136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1027 19:43:13.265780  622136 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1027 19:43:13.265878  622136 addons.go:69] Setting storage-provisioner=true in profile "auto-387383"
	I1027 19:43:13.265904  622136 addons.go:238] Setting addon storage-provisioner=true in "auto-387383"
	I1027 19:43:13.265743  622136 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 19:43:13.265914  622136 addons.go:69] Setting default-storageclass=true in profile "auto-387383"
	I1027 19:43:13.265937  622136 host.go:66] Checking if "auto-387383" exists ...
	I1027 19:43:13.265950  622136 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-387383"
	I1027 19:43:13.266526  622136 cli_runner.go:164] Run: docker container inspect auto-387383 --format={{.State.Status}}
	I1027 19:43:13.266557  622136 cli_runner.go:164] Run: docker container inspect auto-387383 --format={{.State.Status}}
	I1027 19:43:13.266820  622136 config.go:182] Loaded profile config "auto-387383": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:43:13.291397  622136 addons.go:238] Setting addon default-storageclass=true in "auto-387383"
	I1027 19:43:13.291451  622136 host.go:66] Checking if "auto-387383" exists ...
	I1027 19:43:13.291952  622136 cli_runner.go:164] Run: docker container inspect auto-387383 --format={{.State.Status}}
	I1027 19:43:13.315427  622136 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1027 19:43:13.315461  622136 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1027 19:43:13.315573  622136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-387383
	I1027 19:43:13.338266  622136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33475 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/auto-387383/id_rsa Username:docker}
	I1027 19:43:13.392825  622136 out.go:179] * Verifying Kubernetes components...
	I1027 19:43:13.392855  622136 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 19:43:08.817185  630779 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1027 19:43:08.817491  630779 start.go:159] libmachine.API.Create for "kindnet-387383" (driver="docker")
	I1027 19:43:08.817535  630779 client.go:168] LocalClient.Create starting
	I1027 19:43:08.817647  630779 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca.pem
	I1027 19:43:08.817695  630779 main.go:141] libmachine: Decoding PEM data...
	I1027 19:43:08.817721  630779 main.go:141] libmachine: Parsing certificate...
	I1027 19:43:08.817804  630779 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21801-352833/.minikube/certs/cert.pem
	I1027 19:43:08.817841  630779 main.go:141] libmachine: Decoding PEM data...
	I1027 19:43:08.817856  630779 main.go:141] libmachine: Parsing certificate...
	I1027 19:43:08.818316  630779 cli_runner.go:164] Run: docker network inspect kindnet-387383 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1027 19:43:08.839177  630779 cli_runner.go:211] docker network inspect kindnet-387383 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1027 19:43:08.839282  630779 network_create.go:284] running [docker network inspect kindnet-387383] to gather additional debugging logs...
	I1027 19:43:08.839312  630779 cli_runner.go:164] Run: docker network inspect kindnet-387383
	W1027 19:43:08.862708  630779 cli_runner.go:211] docker network inspect kindnet-387383 returned with exit code 1
	I1027 19:43:08.862748  630779 network_create.go:287] error running [docker network inspect kindnet-387383]: docker network inspect kindnet-387383: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kindnet-387383 not found
	I1027 19:43:08.862766  630779 network_create.go:289] output of [docker network inspect kindnet-387383]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kindnet-387383 not found
	
	** /stderr **
	I1027 19:43:08.862951  630779 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 19:43:08.887850  630779 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-04e197bde7e8 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6a:8c:cb:7c:68:31} reservation:<nil>}
	I1027 19:43:08.888795  630779 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-e37fd2b092bc IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:26:98:e3:c0:d9:8a} reservation:<nil>}
	I1027 19:43:08.889481  630779 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-bbd9ae70d20d IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ea:7f:4f:eb:e4:a1} reservation:<nil>}
	I1027 19:43:08.890205  630779 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-20cd7dbe58eb IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:96:9c:3e:02:15:d8} reservation:<nil>}
	I1027 19:43:08.890784  630779 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-e5c60f1f40ae IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:6a:1e:24:48:2b:2f} reservation:<nil>}
	I1027 19:43:08.891510  630779 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f1e860}
	I1027 19:43:08.891543  630779 network_create.go:124] attempt to create docker network kindnet-387383 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1027 19:43:08.891607  630779 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-387383 kindnet-387383
	I1027 19:43:08.982459  630779 network_create.go:108] docker network kindnet-387383 192.168.94.0/24 created
	I1027 19:43:08.982496  630779 kic.go:121] calculated static IP "192.168.94.2" for the "kindnet-387383" container
	I1027 19:43:08.982584  630779 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1027 19:43:09.009857  630779 cli_runner.go:164] Run: docker volume create kindnet-387383 --label name.minikube.sigs.k8s.io=kindnet-387383 --label created_by.minikube.sigs.k8s.io=true
	I1027 19:43:09.037352  630779 oci.go:103] Successfully created a docker volume kindnet-387383
	I1027 19:43:09.037457  630779 cli_runner.go:164] Run: docker run --rm --name kindnet-387383-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-387383 --entrypoint /usr/bin/test -v kindnet-387383:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1027 19:43:09.547006  630779 oci.go:107] Successfully prepared a docker volume kindnet-387383
	I1027 19:43:09.547067  630779 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 19:43:09.547097  630779 kic.go:194] Starting extracting preloaded images to volume ...
	I1027 19:43:09.547228  630779 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21801-352833/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-387383:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1027 19:43:13.465237  622136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1027 19:43:13.527655  622136 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 19:43:13.527685  622136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1027 19:43:13.527747  622136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-387383
	I1027 19:43:13.527661  622136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 19:43:13.548661  622136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33475 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/auto-387383/id_rsa Username:docker}
	I1027 19:43:13.613018  622136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1027 19:43:13.674620  622136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 19:43:09.332868  631152 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1027 19:43:09.333169  631152 start.go:159] libmachine.API.Create for "calico-387383" (driver="docker")
	I1027 19:43:09.333207  631152 client.go:168] LocalClient.Create starting
	I1027 19:43:09.333292  631152 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca.pem
	I1027 19:43:09.333346  631152 main.go:141] libmachine: Decoding PEM data...
	I1027 19:43:09.333372  631152 main.go:141] libmachine: Parsing certificate...
	I1027 19:43:09.333459  631152 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21801-352833/.minikube/certs/cert.pem
	I1027 19:43:09.333491  631152 main.go:141] libmachine: Decoding PEM data...
	I1027 19:43:09.333503  631152 main.go:141] libmachine: Parsing certificate...
	I1027 19:43:09.333943  631152 cli_runner.go:164] Run: docker network inspect calico-387383 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1027 19:43:09.356477  631152 cli_runner.go:211] docker network inspect calico-387383 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1027 19:43:09.356594  631152 network_create.go:284] running [docker network inspect calico-387383] to gather additional debugging logs...
	I1027 19:43:09.356624  631152 cli_runner.go:164] Run: docker network inspect calico-387383
	W1027 19:43:09.377833  631152 cli_runner.go:211] docker network inspect calico-387383 returned with exit code 1
	I1027 19:43:09.377885  631152 network_create.go:287] error running [docker network inspect calico-387383]: docker network inspect calico-387383: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network calico-387383 not found
	I1027 19:43:09.377909  631152 network_create.go:289] output of [docker network inspect calico-387383]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network calico-387383 not found
	
	** /stderr **
	I1027 19:43:09.378072  631152 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 19:43:09.399669  631152 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-04e197bde7e8 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6a:8c:cb:7c:68:31} reservation:<nil>}
	I1027 19:43:09.400481  631152 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-e37fd2b092bc IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:26:98:e3:c0:d9:8a} reservation:<nil>}
	I1027 19:43:09.400979  631152 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-bbd9ae70d20d IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ea:7f:4f:eb:e4:a1} reservation:<nil>}
	I1027 19:43:09.401653  631152 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-20cd7dbe58eb IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:96:9c:3e:02:15:d8} reservation:<nil>}
	I1027 19:43:09.402241  631152 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-e5c60f1f40ae IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:6a:1e:24:48:2b:2f} reservation:<nil>}
	I1027 19:43:09.402949  631152 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-9609e5410315 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:8a:7e:34:6e:27:1e} reservation:<nil>}
	I1027 19:43:09.403864  631152 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f08a50}
	I1027 19:43:09.403887  631152 network_create.go:124] attempt to create docker network calico-387383 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1027 19:43:09.403942  631152 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-387383 calico-387383
	I1027 19:43:09.475070  631152 network_create.go:108] docker network calico-387383 192.168.103.0/24 created
	I1027 19:43:09.475112  631152 kic.go:121] calculated static IP "192.168.103.2" for the "calico-387383" container
	I1027 19:43:09.475213  631152 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1027 19:43:09.495750  631152 cli_runner.go:164] Run: docker volume create calico-387383 --label name.minikube.sigs.k8s.io=calico-387383 --label created_by.minikube.sigs.k8s.io=true
	I1027 19:43:09.517894  631152 oci.go:103] Successfully created a docker volume calico-387383
	I1027 19:43:09.518011  631152 cli_runner.go:164] Run: docker run --rm --name calico-387383-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-387383 --entrypoint /usr/bin/test -v calico-387383:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1027 19:43:10.423477  631152 oci.go:107] Successfully prepared a docker volume calico-387383
	I1027 19:43:10.423540  631152 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 19:43:10.423567  631152 kic.go:194] Starting extracting preloaded images to volume ...
	I1027 19:43:10.423658  631152 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21801-352833/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v calico-387383:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1027 19:43:13.945644  622136 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 19:43:14.246692  622136 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1027 19:43:15.037751  622136 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-387383" context rescaled to 1 replicas
	I1027 19:43:15.674041  622136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.999376115s)
	I1027 19:43:15.674102  622136 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.7284134s)
	I1027 19:43:15.675048  622136 node_ready.go:35] waiting up to 15m0s for node "auto-387383" to be "Ready" ...
	I1027 19:43:15.804289  622136 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	W1027 19:43:13.465928  616341 pod_ready.go:104] pod "coredns-66bc5c9577-d2trp" is not "Ready", error: <nil>
	W1027 19:43:15.917385  616341 pod_ready.go:104] pod "coredns-66bc5c9577-d2trp" is not "Ready", error: <nil>
	I1027 19:43:15.836937  630779 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21801-352833/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-387383:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (6.289650323s)
	I1027 19:43:15.836977  630779 kic.go:203] duration metric: took 6.289877797s to extract preloaded images to volume ...
	W1027 19:43:15.837071  630779 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1027 19:43:15.837105  630779 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1027 19:43:15.837173  630779 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1027 19:43:15.914749  630779 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-387383 --name kindnet-387383 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-387383 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-387383 --network kindnet-387383 --ip 192.168.94.2 --volume kindnet-387383:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1027 19:43:16.230686  630779 cli_runner.go:164] Run: docker container inspect kindnet-387383 --format={{.State.Running}}
	I1027 19:43:16.251881  630779 cli_runner.go:164] Run: docker container inspect kindnet-387383 --format={{.State.Status}}
	I1027 19:43:16.272639  630779 cli_runner.go:164] Run: docker exec kindnet-387383 stat /var/lib/dpkg/alternatives/iptables
	I1027 19:43:16.325967  630779 oci.go:144] the created container "kindnet-387383" has a running status.
	I1027 19:43:16.326030  630779 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21801-352833/.minikube/machines/kindnet-387383/id_rsa...
	I1027 19:43:16.397472  630779 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21801-352833/.minikube/machines/kindnet-387383/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1027 19:43:16.434338  630779 cli_runner.go:164] Run: docker container inspect kindnet-387383 --format={{.State.Status}}
	I1027 19:43:16.455490  630779 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1027 19:43:16.455511  630779 kic_runner.go:114] Args: [docker exec --privileged kindnet-387383 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1027 19:43:16.523822  630779 cli_runner.go:164] Run: docker container inspect kindnet-387383 --format={{.State.Status}}
	I1027 19:43:16.550257  630779 machine.go:93] provisionDockerMachine start ...
	I1027 19:43:16.550373  630779 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-387383
	I1027 19:43:16.573820  630779 main.go:141] libmachine: Using SSH client type: native
	I1027 19:43:16.574147  630779 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33480 <nil> <nil>}
	I1027 19:43:16.574170  630779 main.go:141] libmachine: About to run SSH command:
	hostname
	I1027 19:43:16.575088  630779 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34346->127.0.0.1:33480: read: connection reset by peer
	I1027 19:43:15.806705  622136 addons.go:514] duration metric: took 2.540909161s for enable addons: enabled=[default-storageclass storage-provisioner]
	W1027 19:43:17.678675  622136 node_ready.go:57] node "auto-387383" has "Ready":"False" status (will retry)
	I1027 19:43:15.934880  631152 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21801-352833/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v calico-387383:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (5.511161254s)
	I1027 19:43:15.934918  631152 kic.go:203] duration metric: took 5.511345731s to extract preloaded images to volume ...
	W1027 19:43:15.935080  631152 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1027 19:43:15.935124  631152 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1027 19:43:15.935199  631152 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1027 19:43:16.003629  631152 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-387383 --name calico-387383 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-387383 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-387383 --network calico-387383 --ip 192.168.103.2 --volume calico-387383:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1027 19:43:16.352100  631152 cli_runner.go:164] Run: docker container inspect calico-387383 --format={{.State.Running}}
	I1027 19:43:16.375495  631152 cli_runner.go:164] Run: docker container inspect calico-387383 --format={{.State.Status}}
	I1027 19:43:16.403289  631152 cli_runner.go:164] Run: docker exec calico-387383 stat /var/lib/dpkg/alternatives/iptables
	I1027 19:43:16.456486  631152 oci.go:144] the created container "calico-387383" has a running status.
	I1027 19:43:16.456566  631152 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21801-352833/.minikube/machines/calico-387383/id_rsa...
	I1027 19:43:16.537928  631152 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21801-352833/.minikube/machines/calico-387383/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1027 19:43:16.568927  631152 cli_runner.go:164] Run: docker container inspect calico-387383 --format={{.State.Status}}
	I1027 19:43:16.594442  631152 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1027 19:43:16.594469  631152 kic_runner.go:114] Args: [docker exec --privileged calico-387383 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1027 19:43:16.654339  631152 cli_runner.go:164] Run: docker container inspect calico-387383 --format={{.State.Status}}
	I1027 19:43:16.679232  631152 machine.go:93] provisionDockerMachine start ...
	I1027 19:43:16.679342  631152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-387383
	I1027 19:43:16.706550  631152 main.go:141] libmachine: Using SSH client type: native
	I1027 19:43:16.706929  631152 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33485 <nil> <nil>}
	I1027 19:43:16.706951  631152 main.go:141] libmachine: About to run SSH command:
	hostname
	I1027 19:43:16.707765  631152 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44622->127.0.0.1:33485: read: connection reset by peer
	I1027 19:43:19.856595  631152 main.go:141] libmachine: SSH cmd err, output: <nil>: calico-387383
	
	I1027 19:43:19.856635  631152 ubuntu.go:182] provisioning hostname "calico-387383"
	I1027 19:43:19.856738  631152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-387383
	I1027 19:43:19.877187  631152 main.go:141] libmachine: Using SSH client type: native
	I1027 19:43:19.877424  631152 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33485 <nil> <nil>}
	I1027 19:43:19.877441  631152 main.go:141] libmachine: About to run SSH command:
	sudo hostname calico-387383 && echo "calico-387383" | sudo tee /etc/hostname
	I1027 19:43:20.032968  631152 main.go:141] libmachine: SSH cmd err, output: <nil>: calico-387383
	
	I1027 19:43:20.033057  631152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-387383
	I1027 19:43:20.053188  631152 main.go:141] libmachine: Using SSH client type: native
	I1027 19:43:20.053429  631152 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33485 <nil> <nil>}
	I1027 19:43:20.053446  631152 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-387383' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-387383/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-387383' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1027 19:43:20.197291  631152 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1027 19:43:20.197325  631152 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21801-352833/.minikube CaCertPath:/home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21801-352833/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21801-352833/.minikube}
	I1027 19:43:20.197353  631152 ubuntu.go:190] setting up certificates
	I1027 19:43:20.197364  631152 provision.go:84] configureAuth start
	I1027 19:43:20.197433  631152 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-387383
	I1027 19:43:20.217002  631152 provision.go:143] copyHostCerts
	I1027 19:43:20.217072  631152 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-352833/.minikube/ca.pem, removing ...
	I1027 19:43:20.217087  631152 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-352833/.minikube/ca.pem
	I1027 19:43:20.217193  631152 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21801-352833/.minikube/ca.pem (1078 bytes)
	I1027 19:43:20.217313  631152 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-352833/.minikube/cert.pem, removing ...
	I1027 19:43:20.217330  631152 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-352833/.minikube/cert.pem
	I1027 19:43:20.217354  631152 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21801-352833/.minikube/cert.pem (1123 bytes)
	I1027 19:43:20.217425  631152 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-352833/.minikube/key.pem, removing ...
	I1027 19:43:20.217432  631152 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-352833/.minikube/key.pem
	I1027 19:43:20.217450  631152 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21801-352833/.minikube/key.pem (1679 bytes)
	I1027 19:43:20.217563  631152 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21801-352833/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca-key.pem org=jenkins.calico-387383 san=[127.0.0.1 192.168.103.2 calico-387383 localhost minikube]
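	Note: the server cert generated above carries the SANs listed in the log line (127.0.0.1, 192.168.103.2, calico-387383, localhost, minikube). As a quick after-the-fact check (a sketch; this openssl step is not part of minikube's own flow):
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/21801-352833/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'    # should list the DNS names and IPs above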
	I1027 19:43:20.418403  631152 provision.go:177] copyRemoteCerts
	I1027 19:43:20.418458  631152 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1027 19:43:20.418511  631152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-387383
	I1027 19:43:20.439233  631152 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33485 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/calico-387383/id_rsa Username:docker}
	I1027 19:43:20.542376  631152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1027 19:43:20.564091  631152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1027 19:43:20.584859  631152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1027 19:43:20.606040  631152 provision.go:87] duration metric: took 408.66026ms to configureAuth
	I1027 19:43:20.606083  631152 ubuntu.go:206] setting minikube options for container-runtime
	I1027 19:43:20.606356  631152 config.go:182] Loaded profile config "calico-387383": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:43:20.606475  631152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-387383
	I1027 19:43:20.626268  631152 main.go:141] libmachine: Using SSH client type: native
	I1027 19:43:20.626568  631152 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33485 <nil> <nil>}
	I1027 19:43:20.626604  631152 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1027 19:43:20.890051  631152 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1027 19:43:20.890095  631152 machine.go:96] duration metric: took 4.210822487s to provisionDockerMachine
	I1027 19:43:20.890106  631152 client.go:171] duration metric: took 11.556890851s to LocalClient.Create
	I1027 19:43:20.890127  631152 start.go:167] duration metric: took 11.556960745s to libmachine.API.Create "calico-387383"
	I1027 19:43:20.890154  631152 start.go:293] postStartSetup for "calico-387383" (driver="docker")
	I1027 19:43:20.890168  631152 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1027 19:43:20.890231  631152 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1027 19:43:20.890284  631152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-387383
	I1027 19:43:20.910483  631152 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33485 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/calico-387383/id_rsa Username:docker}
	I1027 19:43:21.018526  631152 ssh_runner.go:195] Run: cat /etc/os-release
	I1027 19:43:21.022867  631152 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1027 19:43:21.022904  631152 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1027 19:43:21.022917  631152 filesync.go:126] Scanning /home/jenkins/minikube-integration/21801-352833/.minikube/addons for local assets ...
	I1027 19:43:21.022985  631152 filesync.go:126] Scanning /home/jenkins/minikube-integration/21801-352833/.minikube/files for local assets ...
	I1027 19:43:21.023107  631152 filesync.go:149] local asset: /home/jenkins/minikube-integration/21801-352833/.minikube/files/etc/ssl/certs/3564152.pem -> 3564152.pem in /etc/ssl/certs
	I1027 19:43:21.023265  631152 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1027 19:43:21.032414  631152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/files/etc/ssl/certs/3564152.pem --> /etc/ssl/certs/3564152.pem (1708 bytes)
	I1027 19:43:21.055283  631152 start.go:296] duration metric: took 165.110581ms for postStartSetup
	I1027 19:43:21.055681  631152 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-387383
	I1027 19:43:21.076627  631152 profile.go:143] Saving config to /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/calico-387383/config.json ...
	I1027 19:43:21.076926  631152 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 19:43:21.076972  631152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-387383
	I1027 19:43:21.097385  631152 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33485 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/calico-387383/id_rsa Username:docker}
	I1027 19:43:21.197560  631152 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1027 19:43:21.202675  631152 start.go:128] duration metric: took 11.871778512s to createHost
	I1027 19:43:21.202706  631152 start.go:83] releasing machines lock for "calico-387383", held for 11.871926694s
	I1027 19:43:21.202790  631152 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-387383
	I1027 19:43:21.223949  631152 ssh_runner.go:195] Run: cat /version.json
	I1027 19:43:21.224039  631152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-387383
	I1027 19:43:21.224042  631152 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1027 19:43:21.224129  631152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-387383
	I1027 19:43:21.244834  631152 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33485 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/calico-387383/id_rsa Username:docker}
	I1027 19:43:21.246461  631152 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33485 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/calico-387383/id_rsa Username:docker}
	I1027 19:43:21.412070  631152 ssh_runner.go:195] Run: systemctl --version
	I1027 19:43:21.420361  631152 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1027 19:43:21.459800  631152 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1027 19:43:21.464983  631152 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1027 19:43:21.465041  631152 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1027 19:43:21.494149  631152 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
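	Note: the find invocation two lines above is logged after shell parsing, so its quoting is lost and it is not copy-pasteable as printed. An equivalent, properly escaped form of the same "disable bridge/podman CNI configs" step would be (a sketch):
	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	  -printf '%p, ' -exec sh -c 'sudo mv "$1" "$1.mk_disabled"' _ {} \;
	The sh -c '... "$1" ...' _ {} form passes each path as a positional argument, the quoting-safe variant of the inline {} substitution shown in the log.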
	I1027 19:43:21.494179  631152 start.go:495] detecting cgroup driver to use...
	I1027 19:43:21.494213  631152 detect.go:190] detected "systemd" cgroup driver on host os
	I1027 19:43:21.494255  631152 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1027 19:43:21.511679  631152 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1027 19:43:21.524918  631152 docker.go:218] disabling cri-docker service (if available) ...
	I1027 19:43:21.524971  631152 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1027 19:43:21.542994  631152 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1027 19:43:21.563418  631152 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1027 19:43:21.659259  631152 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1027 19:43:21.758619  631152 docker.go:234] disabling docker service ...
	I1027 19:43:21.758694  631152 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1027 19:43:21.783969  631152 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1027 19:43:21.798452  631152 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1027 19:43:21.898798  631152 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1027 19:43:21.998106  631152 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1027 19:43:22.015909  631152 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1027 19:43:22.031653  631152 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1027 19:43:22.031720  631152 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:43:22.043307  631152 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1027 19:43:22.043374  631152 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:43:22.054367  631152 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:43:22.064498  631152 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:43:22.076286  631152 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1027 19:43:22.085897  631152 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:43:22.099405  631152 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:43:22.116718  631152 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:43:22.127488  631152 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1027 19:43:22.136493  631152 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1027 19:43:22.145795  631152 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 19:43:22.236425  631152 ssh_runner.go:195] Run: sudo systemctl restart crio
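	Note: piecing together the sed edits above, /etc/crio/crio.conf.d/02-crio.conf plausibly ends up containing a fragment like the following before crio is restarted (a reconstruction from the commands, not captured output):
	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]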
	I1027 19:43:22.354757  631152 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1027 19:43:22.354829  631152 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1027 19:43:22.359412  631152 start.go:563] Will wait 60s for crictl version
	I1027 19:43:22.359472  631152 ssh_runner.go:195] Run: which crictl
	I1027 19:43:22.363706  631152 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1027 19:43:22.395607  631152 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1027 19:43:22.395703  631152 ssh_runner.go:195] Run: crio --version
	I1027 19:43:22.431702  631152 ssh_runner.go:195] Run: crio --version
	I1027 19:43:22.469338  631152 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1027 19:43:19.719593  630779 main.go:141] libmachine: SSH cmd err, output: <nil>: kindnet-387383
	
	I1027 19:43:19.719650  630779 ubuntu.go:182] provisioning hostname "kindnet-387383"
	I1027 19:43:19.719742  630779 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-387383
	I1027 19:43:19.741526  630779 main.go:141] libmachine: Using SSH client type: native
	I1027 19:43:19.741757  630779 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33480 <nil> <nil>}
	I1027 19:43:19.741771  630779 main.go:141] libmachine: About to run SSH command:
	sudo hostname kindnet-387383 && echo "kindnet-387383" | sudo tee /etc/hostname
	I1027 19:43:19.898294  630779 main.go:141] libmachine: SSH cmd err, output: <nil>: kindnet-387383
	
	I1027 19:43:19.898383  630779 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-387383
	I1027 19:43:19.919072  630779 main.go:141] libmachine: Using SSH client type: native
	I1027 19:43:19.919376  630779 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33480 <nil> <nil>}
	I1027 19:43:19.919408  630779 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-387383' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-387383/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-387383' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1027 19:43:20.066062  630779 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1027 19:43:20.066095  630779 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21801-352833/.minikube CaCertPath:/home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21801-352833/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21801-352833/.minikube}
	I1027 19:43:20.066120  630779 ubuntu.go:190] setting up certificates
	I1027 19:43:20.066146  630779 provision.go:84] configureAuth start
	I1027 19:43:20.066215  630779 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-387383
	I1027 19:43:20.086947  630779 provision.go:143] copyHostCerts
	I1027 19:43:20.087022  630779 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-352833/.minikube/ca.pem, removing ...
	I1027 19:43:20.087035  630779 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-352833/.minikube/ca.pem
	I1027 19:43:20.087103  630779 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21801-352833/.minikube/ca.pem (1078 bytes)
	I1027 19:43:20.087314  630779 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-352833/.minikube/cert.pem, removing ...
	I1027 19:43:20.087331  630779 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-352833/.minikube/cert.pem
	I1027 19:43:20.087367  630779 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21801-352833/.minikube/cert.pem (1123 bytes)
	I1027 19:43:20.087431  630779 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-352833/.minikube/key.pem, removing ...
	I1027 19:43:20.087438  630779 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-352833/.minikube/key.pem
	I1027 19:43:20.087462  630779 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21801-352833/.minikube/key.pem (1679 bytes)
	I1027 19:43:20.087521  630779 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21801-352833/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca-key.pem org=jenkins.kindnet-387383 san=[127.0.0.1 192.168.94.2 kindnet-387383 localhost minikube]
	I1027 19:43:20.557101  630779 provision.go:177] copyRemoteCerts
	I1027 19:43:20.557203  630779 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1027 19:43:20.557252  630779 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-387383
	I1027 19:43:20.577798  630779 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33480 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/kindnet-387383/id_rsa Username:docker}
	I1027 19:43:20.682080  630779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1027 19:43:20.703570  630779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1027 19:43:20.723214  630779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1027 19:43:20.743034  630779 provision.go:87] duration metric: took 676.870448ms to configureAuth
	I1027 19:43:20.743071  630779 ubuntu.go:206] setting minikube options for container-runtime
	I1027 19:43:20.743290  630779 config.go:182] Loaded profile config "kindnet-387383": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:43:20.743410  630779 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-387383
	I1027 19:43:20.762593  630779 main.go:141] libmachine: Using SSH client type: native
	I1027 19:43:20.762878  630779 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33480 <nil> <nil>}
	I1027 19:43:20.762899  630779 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1027 19:43:21.030290  630779 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1027 19:43:21.030324  630779 machine.go:96] duration metric: took 4.480039911s to provisionDockerMachine
	I1027 19:43:21.030338  630779 client.go:171] duration metric: took 12.212791881s to LocalClient.Create
	I1027 19:43:21.030362  630779 start.go:167] duration metric: took 12.212872727s to libmachine.API.Create "kindnet-387383"
	I1027 19:43:21.030372  630779 start.go:293] postStartSetup for "kindnet-387383" (driver="docker")
	I1027 19:43:21.030384  630779 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1027 19:43:21.030460  630779 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1027 19:43:21.030523  630779 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-387383
	I1027 19:43:21.050743  630779 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33480 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/kindnet-387383/id_rsa Username:docker}
	I1027 19:43:21.155355  630779 ssh_runner.go:195] Run: cat /etc/os-release
	I1027 19:43:21.159584  630779 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1027 19:43:21.159624  630779 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1027 19:43:21.159637  630779 filesync.go:126] Scanning /home/jenkins/minikube-integration/21801-352833/.minikube/addons for local assets ...
	I1027 19:43:21.159704  630779 filesync.go:126] Scanning /home/jenkins/minikube-integration/21801-352833/.minikube/files for local assets ...
	I1027 19:43:21.159819  630779 filesync.go:149] local asset: /home/jenkins/minikube-integration/21801-352833/.minikube/files/etc/ssl/certs/3564152.pem -> 3564152.pem in /etc/ssl/certs
	I1027 19:43:21.159979  630779 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1027 19:43:21.168867  630779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/files/etc/ssl/certs/3564152.pem --> /etc/ssl/certs/3564152.pem (1708 bytes)
	I1027 19:43:21.191858  630779 start.go:296] duration metric: took 161.468893ms for postStartSetup
	I1027 19:43:21.192229  630779 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-387383
	I1027 19:43:21.211761  630779 profile.go:143] Saving config to /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/kindnet-387383/config.json ...
	I1027 19:43:21.212170  630779 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 19:43:21.212235  630779 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-387383
	I1027 19:43:21.236005  630779 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33480 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/kindnet-387383/id_rsa Username:docker}
	I1027 19:43:21.339702  630779 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1027 19:43:21.345006  630779 start.go:128] duration metric: took 12.530050109s to createHost
	I1027 19:43:21.345038  630779 start.go:83] releasing machines lock for "kindnet-387383", held for 12.530215173s
	I1027 19:43:21.345121  630779 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-387383
	I1027 19:43:21.365270  630779 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1027 19:43:21.365326  630779 ssh_runner.go:195] Run: cat /version.json
	I1027 19:43:21.365378  630779 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-387383
	I1027 19:43:21.365426  630779 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-387383
	I1027 19:43:21.386361  630779 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33480 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/kindnet-387383/id_rsa Username:docker}
	I1027 19:43:21.386733  630779 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33480 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/kindnet-387383/id_rsa Username:docker}
	I1027 19:43:21.563887  630779 ssh_runner.go:195] Run: systemctl --version
	I1027 19:43:21.570989  630779 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1027 19:43:21.615251  630779 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1027 19:43:21.620437  630779 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1027 19:43:21.620514  630779 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1027 19:43:21.647793  630779 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1027 19:43:21.647833  630779 start.go:495] detecting cgroup driver to use...
	I1027 19:43:21.647874  630779 detect.go:190] detected "systemd" cgroup driver on host os
	I1027 19:43:21.647939  630779 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1027 19:43:21.668017  630779 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1027 19:43:21.682051  630779 docker.go:218] disabling cri-docker service (if available) ...
	I1027 19:43:21.682119  630779 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1027 19:43:21.705209  630779 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1027 19:43:21.724729  630779 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1027 19:43:21.814826  630779 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1027 19:43:21.923398  630779 docker.go:234] disabling docker service ...
	I1027 19:43:21.923478  630779 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1027 19:43:21.948096  630779 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1027 19:43:21.963361  630779 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1027 19:43:22.059636  630779 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1027 19:43:22.155384  630779 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1027 19:43:22.170522  630779 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1027 19:43:22.191386  630779 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1027 19:43:22.191444  630779 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:43:22.203419  630779 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1027 19:43:22.203497  630779 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:43:22.214478  630779 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:43:22.224940  630779 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:43:22.235818  630779 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1027 19:43:22.245339  630779 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:43:22.256385  630779 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:43:22.272844  630779 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:43:22.283854  630779 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1027 19:43:22.293236  630779 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1027 19:43:22.302285  630779 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 19:43:22.400841  630779 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1027 19:43:22.517558  630779 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1027 19:43:22.517637  630779 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1027 19:43:22.522369  630779 start.go:563] Will wait 60s for crictl version
	I1027 19:43:22.522437  630779 ssh_runner.go:195] Run: which crictl
	I1027 19:43:22.526820  630779 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1027 19:43:22.554707  630779 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1027 19:43:22.554779  630779 ssh_runner.go:195] Run: crio --version
	I1027 19:43:22.586801  630779 ssh_runner.go:195] Run: crio --version
	I1027 19:43:22.623795  630779 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1027 19:43:18.417602  616341 pod_ready.go:104] pod "coredns-66bc5c9577-d2trp" is not "Ready", error: <nil>
	W1027 19:43:20.418375  616341 pod_ready.go:104] pod "coredns-66bc5c9577-d2trp" is not "Ready", error: <nil>
	I1027 19:43:22.418998  616341 pod_ready.go:94] pod "coredns-66bc5c9577-d2trp" is "Ready"
	I1027 19:43:22.419035  616341 pod_ready.go:86] duration metric: took 38.507791483s for pod "coredns-66bc5c9577-d2trp" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:43:22.422396  616341 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-813397" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:43:22.427833  616341 pod_ready.go:94] pod "etcd-default-k8s-diff-port-813397" is "Ready"
	I1027 19:43:22.427863  616341 pod_ready.go:86] duration metric: took 5.434462ms for pod "etcd-default-k8s-diff-port-813397" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:43:22.430801  616341 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-813397" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:43:22.435963  616341 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-813397" is "Ready"
	I1027 19:43:22.435999  616341 pod_ready.go:86] duration metric: took 5.170955ms for pod "kube-apiserver-default-k8s-diff-port-813397" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:43:22.438570  616341 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-813397" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:43:22.615605  616341 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-813397" is "Ready"
	I1027 19:43:22.615650  616341 pod_ready.go:86] duration metric: took 177.051825ms for pod "kube-controller-manager-default-k8s-diff-port-813397" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:43:22.816514  616341 pod_ready.go:83] waiting for pod "kube-proxy-bldc8" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:43:22.625216  630779 cli_runner.go:164] Run: docker network inspect kindnet-387383 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 19:43:22.644772  630779 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1027 19:43:22.648923  630779 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
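	Note: the bash one-liner above is an idempotent /etc/hosts update: strip any stale host.minikube.internal entry, then append the current gateway mapping. Spelled out step by step (an equivalent sketch):
	grep -v $'\thost.minikube.internal$' /etc/hosts > /tmp/hosts.new    # drop any previous entry
	printf '192.168.94.1\thost.minikube.internal\n' >> /tmp/hosts.new   # append the current mapping
	sudo cp /tmp/hosts.new /etc/hosts                                   # install the rebuilt file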
	I1027 19:43:22.660098  630779 kubeadm.go:883] updating cluster {Name:kindnet-387383 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-387383 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1027 19:43:22.660242  630779 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 19:43:22.660286  630779 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 19:43:22.698154  630779 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 19:43:22.698177  630779 crio.go:433] Images already preloaded, skipping extraction
	I1027 19:43:22.698224  630779 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 19:43:22.726913  630779 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 19:43:22.726943  630779 cache_images.go:85] Images are preloaded, skipping loading
	I1027 19:43:22.726954  630779 kubeadm.go:934] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1027 19:43:22.727065  630779 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kindnet-387383 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:kindnet-387383 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
	I1027 19:43:22.727175  630779 ssh_runner.go:195] Run: crio config
	I1027 19:43:22.792710  630779 cni.go:84] Creating CNI manager for "kindnet"
	I1027 19:43:22.792744  630779 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1027 19:43:22.792778  630779 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-387383 NodeName:kindnet-387383 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1027 19:43:22.792916  630779 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kindnet-387383"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1027 19:43:22.793026  630779 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1027 19:43:22.802393  630779 binaries.go:44] Found k8s binaries, skipping transfer
	I1027 19:43:22.802458  630779 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1027 19:43:22.812098  630779 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (364 bytes)
	I1027 19:43:22.826897  630779 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1027 19:43:22.846034  630779 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
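	Note: the 2210-byte payload written above is the kubeadm config printed earlier, staged as kubeadm.yaml.new. The init invocation itself falls outside this excerpt; on the assumption that the staged file is moved into place first, a config like this is typically consumed with:
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml   # path assumed; --config is standard kubeadm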
	I1027 19:43:22.861978  630779 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1027 19:43:22.867028  630779 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 19:43:22.880030  630779 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 19:43:22.980911  630779 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 19:43:23.008304  630779 certs.go:69] Setting up /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/kindnet-387383 for IP: 192.168.94.2
	I1027 19:43:23.008329  630779 certs.go:195] generating shared ca certs ...
	I1027 19:43:23.008352  630779 certs.go:227] acquiring lock for ca certs: {Name:mk4bdbca32068f6f817fc35fdc496e961dc3e0d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:43:23.008530  630779 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21801-352833/.minikube/ca.key
	I1027 19:43:23.008591  630779 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21801-352833/.minikube/proxy-client-ca.key
	I1027 19:43:23.008612  630779 certs.go:257] generating profile certs ...
	I1027 19:43:23.008682  630779 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/kindnet-387383/client.key
	I1027 19:43:23.008700  630779 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/kindnet-387383/client.crt with IP's: []
	I1027 19:43:23.280372  630779 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/kindnet-387383/client.crt ...
	I1027 19:43:23.280468  630779 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/kindnet-387383/client.crt: {Name:mkc5cdc763554b6306b0c8faa7cf27304253c7b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:43:23.280651  630779 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/kindnet-387383/client.key ...
	I1027 19:43:23.280668  630779 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/kindnet-387383/client.key: {Name:mk883d68ae1f564089d4a6589f22eb59db09b659 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:43:23.280775  630779 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/kindnet-387383/apiserver.key.035362ec
	I1027 19:43:23.280800  630779 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/kindnet-387383/apiserver.crt.035362ec with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
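	Note: the SAN list above includes 10.96.0.1, the first address of the 10.96.0.0/12 service CIDR, which is the ClusterIP the in-cluster kubernetes.default Service resolves to, so the apiserver cert must cover it. A quick in-cluster confirmation (a sketch, once the cluster is up):
	kubectl get svc kubernetes -n default -o jsonpath='{.spec.clusterIP}'   # expected here: 10.96.0.1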
	I1027 19:43:23.215971  616341 pod_ready.go:94] pod "kube-proxy-bldc8" is "Ready"
	I1027 19:43:23.216004  616341 pod_ready.go:86] duration metric: took 399.460648ms for pod "kube-proxy-bldc8" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:43:23.417054  616341 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-813397" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:43:23.815597  616341 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-813397" is "Ready"
	I1027 19:43:23.815631  616341 pod_ready.go:86] duration metric: took 398.552014ms for pod "kube-scheduler-default-k8s-diff-port-813397" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:43:23.815644  616341 pod_ready.go:40] duration metric: took 39.910056182s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 19:43:23.867820  616341 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1027 19:43:23.870183  616341 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-813397" cluster and "default" namespace by default
	W1027 19:43:20.179062  622136 node_ready.go:57] node "auto-387383" has "Ready":"False" status (will retry)
	W1027 19:43:22.183338  622136 node_ready.go:57] node "auto-387383" has "Ready":"False" status (will retry)
	I1027 19:43:22.470830  631152 cli_runner.go:164] Run: docker network inspect calico-387383 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 19:43:22.490027  631152 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1027 19:43:22.495175  631152 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 19:43:22.507697  631152 kubeadm.go:883] updating cluster {Name:calico-387383 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-387383 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1027 19:43:22.507836  631152 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 19:43:22.507879  631152 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 19:43:22.543147  631152 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 19:43:22.543171  631152 crio.go:433] Images already preloaded, skipping extraction
	I1027 19:43:22.543232  631152 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 19:43:22.573557  631152 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 19:43:22.573587  631152 cache_images.go:85] Images are preloaded, skipping loading
	I1027 19:43:22.573597  631152 kubeadm.go:934] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1027 19:43:22.573717  631152 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=calico-387383 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:calico-387383 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico}
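The empty ExecStart= line in the unit above is the standard systemd drop-in idiom: a non-oneshot service may declare only one ExecStart, so a drop-in must blank the inherited value before overriding it. A sketch of writing such a drop-in by hand (the file path matches the scp target a few lines below; the flag set is abbreviated):

	# Override kubelet's ExecStart via a drop-in rather than editing the unit file.
	sudo mkdir -p /etc/systemd/system/kubelet.service.d
	sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml
	EOF
	sudo systemctl daemon-reload && sudo systemctl restart kubelet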
	I1027 19:43:22.573817  631152 ssh_runner.go:195] Run: crio config
	I1027 19:43:22.625217  631152 cni.go:84] Creating CNI manager for "calico"
	I1027 19:43:22.625247  631152 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1027 19:43:22.625276  631152 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-387383 NodeName:calico-387383 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1027 19:43:22.625434  631152 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "calico-387383"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1027 19:43:22.625495  631152 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1027 19:43:22.634278  631152 binaries.go:44] Found k8s binaries, skipping transfer
	I1027 19:43:22.634350  631152 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1027 19:43:22.643250  631152 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (364 bytes)
	I1027 19:43:22.657812  631152 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1027 19:43:22.676259  631152 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
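At this point the rendered kubeadm config is staged at /var/tmp/minikube/kubeadm.yaml.new. If a run later fails at kubeadm init, the staged file can be checked directly; recent kubeadm releases (v1.26 and later) ship a validate subcommand, so a plausible manual check looks like:

	# Validate the staged config against the kubeadm API types (no node changes).
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new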
	I1027 19:43:22.692916  631152 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1027 19:43:22.697371  631152 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 19:43:22.709020  631152 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 19:43:22.809735  631152 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 19:43:22.839913  631152 certs.go:69] Setting up /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/calico-387383 for IP: 192.168.103.2
	I1027 19:43:22.839941  631152 certs.go:195] generating shared ca certs ...
	I1027 19:43:22.839964  631152 certs.go:227] acquiring lock for ca certs: {Name:mk4bdbca32068f6f817fc35fdc496e961dc3e0d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:43:22.840124  631152 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21801-352833/.minikube/ca.key
	I1027 19:43:22.840199  631152 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21801-352833/.minikube/proxy-client-ca.key
	I1027 19:43:22.840212  631152 certs.go:257] generating profile certs ...
	I1027 19:43:22.840278  631152 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/calico-387383/client.key
	I1027 19:43:22.840315  631152 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/calico-387383/client.crt with IP's: []
	I1027 19:43:23.067367  631152 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/calico-387383/client.crt ...
	I1027 19:43:23.067406  631152 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/calico-387383/client.crt: {Name:mk11afdb6b68f3344d9356c14824a16d6455b940 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:43:23.067634  631152 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/calico-387383/client.key ...
	I1027 19:43:23.067649  631152 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/calico-387383/client.key: {Name:mk49ed5c3ce77620018f632b1ea9e8ac53ba2830 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:43:23.067758  631152 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/calico-387383/apiserver.key.ba71f923
	I1027 19:43:23.067779  631152 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/calico-387383/apiserver.crt.ba71f923 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1027 19:43:23.279834  631152 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/calico-387383/apiserver.crt.ba71f923 ...
	I1027 19:43:23.279866  631152 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/calico-387383/apiserver.crt.ba71f923: {Name:mke7b18960a6ef12bc322a1683e081c39a475326 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:43:23.280083  631152 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/calico-387383/apiserver.key.ba71f923 ...
	I1027 19:43:23.280148  631152 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/calico-387383/apiserver.key.ba71f923: {Name:mk74cdf59cd5a60002af90895cc36d350b8e8acb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:43:23.280279  631152 certs.go:382] copying /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/calico-387383/apiserver.crt.ba71f923 -> /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/calico-387383/apiserver.crt
	I1027 19:43:23.280399  631152 certs.go:386] copying /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/calico-387383/apiserver.key.ba71f923 -> /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/calico-387383/apiserver.key
	I1027 19:43:23.280653  631152 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/calico-387383/proxy-client.key
	I1027 19:43:23.280678  631152 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/calico-387383/proxy-client.crt with IP's: []
	I1027 19:43:23.394828  631152 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/calico-387383/proxy-client.crt ...
	I1027 19:43:23.394864  631152 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/calico-387383/proxy-client.crt: {Name:mke44f35ad317a0aae3a2a25c289c25d96b92520 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:43:23.395097  631152 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/calico-387383/proxy-client.key ...
	I1027 19:43:23.395119  631152 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/calico-387383/proxy-client.key: {Name:mka46a31bf5c94f35a4cbf64d912bf69a96af663 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:43:23.395399  631152 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/356415.pem (1338 bytes)
	W1027 19:43:23.395447  631152 certs.go:480] ignoring /home/jenkins/minikube-integration/21801-352833/.minikube/certs/356415_empty.pem, impossibly tiny 0 bytes
	I1027 19:43:23.395464  631152 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca-key.pem (1675 bytes)
	I1027 19:43:23.395496  631152 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca.pem (1078 bytes)
	I1027 19:43:23.395525  631152 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/cert.pem (1123 bytes)
	I1027 19:43:23.395562  631152 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/key.pem (1679 bytes)
	I1027 19:43:23.395619  631152 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-352833/.minikube/files/etc/ssl/certs/3564152.pem (1708 bytes)
	I1027 19:43:23.396252  631152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1027 19:43:23.420838  631152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1027 19:43:23.440802  631152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1027 19:43:23.459990  631152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1027 19:43:23.479913  631152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/calico-387383/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1027 19:43:23.500072  631152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/calico-387383/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1027 19:43:23.521515  631152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/calico-387383/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1027 19:43:23.542836  631152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/calico-387383/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1027 19:43:23.563387  631152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/files/etc/ssl/certs/3564152.pem --> /usr/share/ca-certificates/3564152.pem (1708 bytes)
	I1027 19:43:23.586867  631152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1027 19:43:23.606875  631152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/certs/356415.pem --> /usr/share/ca-certificates/356415.pem (1338 bytes)
	I1027 19:43:23.627985  631152 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1027 19:43:23.642297  631152 ssh_runner.go:195] Run: openssl version
	I1027 19:43:23.649658  631152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3564152.pem && ln -fs /usr/share/ca-certificates/3564152.pem /etc/ssl/certs/3564152.pem"
	I1027 19:43:23.659825  631152 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3564152.pem
	I1027 19:43:23.664435  631152 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 27 19:02 /usr/share/ca-certificates/3564152.pem
	I1027 19:43:23.664511  631152 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3564152.pem
	I1027 19:43:23.702799  631152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3564152.pem /etc/ssl/certs/3ec20f2e.0"
	I1027 19:43:23.714511  631152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1027 19:43:23.725054  631152 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1027 19:43:23.730727  631152 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 27 18:56 /usr/share/ca-certificates/minikubeCA.pem
	I1027 19:43:23.730797  631152 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1027 19:43:23.771650  631152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1027 19:43:23.781460  631152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/356415.pem && ln -fs /usr/share/ca-certificates/356415.pem /etc/ssl/certs/356415.pem"
	I1027 19:43:23.791062  631152 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/356415.pem
	I1027 19:43:23.795616  631152 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 27 19:02 /usr/share/ca-certificates/356415.pem
	I1027 19:43:23.795680  631152 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/356415.pem
	I1027 19:43:23.839576  631152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/356415.pem /etc/ssl/certs/51391683.0"
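The openssl x509 -hash calls above compute the subject-name hash that OpenSSL's CApath lookup expects: each CA must be reachable in /etc/ssl/certs as <hash>.0, which is exactly what the ln -fs commands create (b5213941.0 is the minikubeCA hash). A sketch of the same convention done manually, with paths from the log:

	# Link a CA into the OpenSSL hash directory and verify a leaf cert against it.
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
	openssl verify -CApath /etc/ssl/certs /var/lib/minikube/certs/apiserver.crt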
	I1027 19:43:23.849374  631152 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1027 19:43:23.854051  631152 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1027 19:43:23.854113  631152 kubeadm.go:400] StartCluster: {Name:calico-387383 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-387383 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 19:43:23.854219  631152 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 19:43:23.854290  631152 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 19:43:23.887737  631152 cri.go:89] found id: ""
	I1027 19:43:23.887817  631152 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1027 19:43:23.903148  631152 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1027 19:43:23.911879  631152 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1027 19:43:23.911956  631152 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1027 19:43:23.921562  631152 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1027 19:43:23.921580  631152 kubeadm.go:157] found existing configuration files:
	
	I1027 19:43:23.921629  631152 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1027 19:43:23.931181  631152 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1027 19:43:23.931235  631152 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1027 19:43:23.940349  631152 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1027 19:43:23.949821  631152 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1027 19:43:23.949872  631152 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1027 19:43:23.958805  631152 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1027 19:43:23.969243  631152 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1027 19:43:23.969315  631152 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1027 19:43:23.978576  631152 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1027 19:43:23.988621  631152 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1027 19:43:23.988685  631152 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1027 19:43:23.998961  631152 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1027 19:43:24.070980  631152 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1042-gcp\n", err: exit status 1
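The SystemVerification warning above comes from kubeadm trying to load the "configs" kernel module to read the running kernel's configuration; on cloud kernels like 6.8.0-1042-gcp, which ship the config as a /boot file instead of a module, it is typically harmless. A quick manual check (paths vary by distro):

	# kubeadm looks for the kernel config here; cloud images usually only have the /boot copy.
	ls /proc/config.gz /boot/config-$(uname -r) 2>/dev/null
	grep -E 'CONFIG_CGROUPS=|CONFIG_NAMESPACES=' /boot/config-$(uname -r)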
	I1027 19:43:23.684121  630779 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/kindnet-387383/apiserver.crt.035362ec ...
	I1027 19:43:23.684160  630779 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/kindnet-387383/apiserver.crt.035362ec: {Name:mkac5cc64507c3ad048c7d49e398887e77ecec0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:43:23.684404  630779 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/kindnet-387383/apiserver.key.035362ec ...
	I1027 19:43:23.684425  630779 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/kindnet-387383/apiserver.key.035362ec: {Name:mk60c42bc6264d59b9a6ac8cbee89223248b7d7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:43:23.684537  630779 certs.go:382] copying /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/kindnet-387383/apiserver.crt.035362ec -> /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/kindnet-387383/apiserver.crt
	I1027 19:43:23.684646  630779 certs.go:386] copying /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/kindnet-387383/apiserver.key.035362ec -> /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/kindnet-387383/apiserver.key
	I1027 19:43:23.684731  630779 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/kindnet-387383/proxy-client.key
	I1027 19:43:23.684751  630779 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/kindnet-387383/proxy-client.crt with IP's: []
	I1027 19:43:23.830266  630779 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/kindnet-387383/proxy-client.crt ...
	I1027 19:43:23.830310  630779 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/kindnet-387383/proxy-client.crt: {Name:mkef0d2b3404a5e128d7881dec6d699ea82a73c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:43:23.830539  630779 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/kindnet-387383/proxy-client.key ...
	I1027 19:43:23.830565  630779 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/kindnet-387383/proxy-client.key: {Name:mk4633005cf6602ccf6ea736710ef9b598373d05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:43:23.830836  630779 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/356415.pem (1338 bytes)
	W1027 19:43:23.830923  630779 certs.go:480] ignoring /home/jenkins/minikube-integration/21801-352833/.minikube/certs/356415_empty.pem, impossibly tiny 0 bytes
	I1027 19:43:23.830938  630779 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca-key.pem (1675 bytes)
	I1027 19:43:23.830974  630779 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca.pem (1078 bytes)
	I1027 19:43:23.831019  630779 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/cert.pem (1123 bytes)
	I1027 19:43:23.831052  630779 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/key.pem (1679 bytes)
	I1027 19:43:23.831111  630779 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-352833/.minikube/files/etc/ssl/certs/3564152.pem (1708 bytes)
	I1027 19:43:23.831781  630779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1027 19:43:23.853509  630779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1027 19:43:23.874759  630779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1027 19:43:23.899954  630779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1027 19:43:23.921532  630779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/kindnet-387383/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1027 19:43:23.942346  630779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/kindnet-387383/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1027 19:43:23.965787  630779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/kindnet-387383/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1027 19:43:23.988343  630779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/kindnet-387383/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1027 19:43:24.011636  630779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/files/etc/ssl/certs/3564152.pem --> /usr/share/ca-certificates/3564152.pem (1708 bytes)
	I1027 19:43:24.035513  630779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1027 19:43:24.058580  630779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/certs/356415.pem --> /usr/share/ca-certificates/356415.pem (1338 bytes)
	I1027 19:43:24.080867  630779 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1027 19:43:24.096288  630779 ssh_runner.go:195] Run: openssl version
	I1027 19:43:24.103716  630779 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3564152.pem && ln -fs /usr/share/ca-certificates/3564152.pem /etc/ssl/certs/3564152.pem"
	I1027 19:43:24.114015  630779 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3564152.pem
	I1027 19:43:24.119280  630779 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 27 19:02 /usr/share/ca-certificates/3564152.pem
	I1027 19:43:24.119352  630779 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3564152.pem
	I1027 19:43:24.161780  630779 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3564152.pem /etc/ssl/certs/3ec20f2e.0"
	I1027 19:43:24.172593  630779 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1027 19:43:24.186031  630779 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1027 19:43:24.192119  630779 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 27 18:56 /usr/share/ca-certificates/minikubeCA.pem
	I1027 19:43:24.192268  630779 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1027 19:43:24.237686  630779 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1027 19:43:24.247867  630779 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/356415.pem && ln -fs /usr/share/ca-certificates/356415.pem /etc/ssl/certs/356415.pem"
	I1027 19:43:24.257623  630779 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/356415.pem
	I1027 19:43:24.262356  630779 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 27 19:02 /usr/share/ca-certificates/356415.pem
	I1027 19:43:24.262454  630779 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/356415.pem
	I1027 19:43:24.302894  630779 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/356415.pem /etc/ssl/certs/51391683.0"
	I1027 19:43:24.312688  630779 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1027 19:43:24.316629  630779 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1027 19:43:24.316699  630779 kubeadm.go:400] StartCluster: {Name:kindnet-387383 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-387383 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 19:43:24.316792  630779 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 19:43:24.316845  630779 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 19:43:24.348336  630779 cri.go:89] found id: ""
	I1027 19:43:24.348416  630779 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1027 19:43:24.357107  630779 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1027 19:43:24.366006  630779 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1027 19:43:24.366071  630779 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1027 19:43:24.374833  630779 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1027 19:43:24.374851  630779 kubeadm.go:157] found existing configuration files:
	
	I1027 19:43:24.374900  630779 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1027 19:43:24.384062  630779 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1027 19:43:24.384115  630779 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1027 19:43:24.391972  630779 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1027 19:43:24.399932  630779 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1027 19:43:24.400003  630779 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1027 19:43:24.407817  630779 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1027 19:43:24.416219  630779 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1027 19:43:24.416289  630779 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1027 19:43:24.424921  630779 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1027 19:43:24.433294  630779 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1027 19:43:24.433363  630779 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1027 19:43:24.440923  630779 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1027 19:43:24.488625  630779 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1027 19:43:24.489602  630779 kubeadm.go:318] [preflight] Running pre-flight checks
	I1027 19:43:24.513519  630779 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1027 19:43:24.513660  630779 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1042-gcp
	I1027 19:43:24.513734  630779 kubeadm.go:318] OS: Linux
	I1027 19:43:24.513811  630779 kubeadm.go:318] CGROUPS_CPU: enabled
	I1027 19:43:24.513877  630779 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1027 19:43:24.513944  630779 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1027 19:43:24.514035  630779 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1027 19:43:24.514119  630779 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1027 19:43:24.514219  630779 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1027 19:43:24.514296  630779 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1027 19:43:24.514358  630779 kubeadm.go:318] CGROUPS_IO: enabled
	I1027 19:43:24.578725  630779 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1027 19:43:24.578876  630779 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1027 19:43:24.579032  630779 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1027 19:43:24.586697  630779 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1027 19:43:24.589023  630779 out.go:252]   - Generating certificates and keys ...
	I1027 19:43:24.589103  630779 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1027 19:43:24.589211  630779 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1027 19:43:25.473644  630779 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1027 19:43:25.757796  630779 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1027 19:43:26.369204  630779 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1027 19:43:26.581906  630779 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1027 19:43:26.870740  630779 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1027 19:43:26.871075  630779 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [kindnet-387383 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1027 19:43:26.937178  630779 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1027 19:43:26.937370  630779 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [kindnet-387383 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1027 19:43:26.999332  630779 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1027 19:43:27.140747  630779 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1027 19:43:27.194455  630779 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1027 19:43:27.194562  630779 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1027 19:43:27.340769  630779 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1027 19:43:27.579588  630779 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1027 19:43:27.655006  630779 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1027 19:43:27.979679  630779 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1027 19:43:28.187021  630779 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1027 19:43:28.187775  630779 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1027 19:43:28.191794  630779 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1027 19:43:28.194236  630779 out.go:252]   - Booting up control plane ...
	I1027 19:43:28.194401  630779 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1027 19:43:28.194509  630779 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1027 19:43:28.194613  630779 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1027 19:43:28.213178  630779 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1027 19:43:28.213342  630779 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1027 19:43:28.220911  630779 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1027 19:43:28.221093  630779 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1027 19:43:28.221163  630779 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1027 19:43:28.337627  630779 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1027 19:43:28.337810  630779 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
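The kubelet-check and control-plane-check phases poll plain health endpoints, so the same probes can be run by hand when a bootstrap stalls here; the ports are the ones kubeadm prints in these logs:

	# kubelet serves an unauthenticated healthz on localhost:10248.
	curl -sf http://127.0.0.1:10248/healthz && echo kubelet healthy
	# controller-manager and scheduler answer on their secure ports.
	curl -skf https://127.0.0.1:10257/healthz    # kube-controller-manager
	curl -skf https://127.0.0.1:10259/livez      # kube-scheduler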
	W1027 19:43:24.680549  622136 node_ready.go:57] node "auto-387383" has "Ready":"False" status (will retry)
	W1027 19:43:27.178626  622136 node_ready.go:57] node "auto-387383" has "Ready":"False" status (will retry)
	I1027 19:43:24.137512  631152 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1027 19:43:33.245284  631152 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1027 19:43:33.245384  631152 kubeadm.go:318] [preflight] Running pre-flight checks
	I1027 19:43:33.245500  631152 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1027 19:43:33.245576  631152 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1042-gcp
	I1027 19:43:33.245636  631152 kubeadm.go:318] OS: Linux
	I1027 19:43:33.245688  631152 kubeadm.go:318] CGROUPS_CPU: enabled
	I1027 19:43:33.245759  631152 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1027 19:43:33.245834  631152 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1027 19:43:33.245928  631152 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1027 19:43:33.246014  631152 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1027 19:43:33.246095  631152 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1027 19:43:33.246203  631152 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1027 19:43:33.246276  631152 kubeadm.go:318] CGROUPS_IO: enabled
	I1027 19:43:33.246378  631152 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1027 19:43:33.246497  631152 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1027 19:43:33.246607  631152 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1027 19:43:33.246697  631152 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1027 19:43:33.248145  631152 out.go:252]   - Generating certificates and keys ...
	I1027 19:43:33.248251  631152 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1027 19:43:33.248374  631152 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1027 19:43:33.248477  631152 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1027 19:43:33.248570  631152 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1027 19:43:33.248653  631152 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1027 19:43:33.248747  631152 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1027 19:43:33.248847  631152 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1027 19:43:33.249022  631152 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [calico-387383 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1027 19:43:33.249113  631152 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1027 19:43:33.249309  631152 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [calico-387383 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1027 19:43:33.249404  631152 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1027 19:43:33.249487  631152 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1027 19:43:33.249550  631152 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1027 19:43:33.249617  631152 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1027 19:43:33.249676  631152 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1027 19:43:33.249746  631152 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1027 19:43:33.249814  631152 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1027 19:43:33.249912  631152 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1027 19:43:33.249971  631152 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1027 19:43:33.250057  631152 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1027 19:43:33.250188  631152 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1027 19:43:33.252877  631152 out.go:252]   - Booting up control plane ...
	I1027 19:43:33.253014  631152 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1027 19:43:33.253146  631152 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1027 19:43:33.253243  631152 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1027 19:43:33.253381  631152 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1027 19:43:33.253507  631152 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1027 19:43:33.253680  631152 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1027 19:43:33.253789  631152 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1027 19:43:33.253824  631152 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1027 19:43:33.253927  631152 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1027 19:43:33.254039  631152 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1027 19:43:33.254117  631152 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001003431s
	I1027 19:43:33.254296  631152 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1027 19:43:33.254391  631152 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1027 19:43:33.254479  631152 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1027 19:43:33.254590  631152 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1027 19:43:33.254679  631152 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 1.906067512s
	I1027 19:43:33.254772  631152 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.802875251s
	I1027 19:43:33.254860  631152 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.00302778s
	I1027 19:43:33.255044  631152 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1027 19:43:33.255242  631152 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1027 19:43:33.255316  631152 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1027 19:43:33.255542  631152 kubeadm.go:318] [mark-control-plane] Marking the node calico-387383 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1027 19:43:33.255616  631152 kubeadm.go:318] [bootstrap-token] Using token: uf0sfr.1hmb9njll2ht9b28
	I1027 19:43:33.258025  631152 out.go:252]   - Configuring RBAC rules ...
	I1027 19:43:33.258199  631152 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1027 19:43:33.258344  631152 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1027 19:43:33.258479  631152 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1027 19:43:33.258603  631152 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1027 19:43:33.258724  631152 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1027 19:43:33.258833  631152 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1027 19:43:33.258966  631152 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1027 19:43:33.259037  631152 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1027 19:43:33.259106  631152 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1027 19:43:33.259116  631152 kubeadm.go:318] 
	I1027 19:43:33.259249  631152 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1027 19:43:33.259263  631152 kubeadm.go:318] 
	I1027 19:43:33.259333  631152 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1027 19:43:33.259341  631152 kubeadm.go:318] 
	I1027 19:43:33.259363  631152 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1027 19:43:33.259413  631152 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1027 19:43:33.259457  631152 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1027 19:43:33.259463  631152 kubeadm.go:318] 
	I1027 19:43:33.259529  631152 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1027 19:43:33.259537  631152 kubeadm.go:318] 
	I1027 19:43:33.259589  631152 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1027 19:43:33.259599  631152 kubeadm.go:318] 
	I1027 19:43:33.259654  631152 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1027 19:43:33.259722  631152 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1027 19:43:33.259781  631152 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1027 19:43:33.259793  631152 kubeadm.go:318] 
	I1027 19:43:33.259880  631152 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1027 19:43:33.259964  631152 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1027 19:43:33.259972  631152 kubeadm.go:318] 
	I1027 19:43:33.260065  631152 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token uf0sfr.1hmb9njll2ht9b28 \
	I1027 19:43:33.260233  631152 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:ab29e81999671591f366788f5ae9ffb132789ebc71f7c0efdaecd38575a5ab6a \
	I1027 19:43:33.260269  631152 kubeadm.go:318] 	--control-plane 
	I1027 19:43:33.260278  631152 kubeadm.go:318] 
	I1027 19:43:33.260386  631152 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1027 19:43:33.260395  631152 kubeadm.go:318] 
	I1027 19:43:33.260496  631152 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token uf0sfr.1hmb9njll2ht9b28 \
	I1027 19:43:33.260655  631152 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:ab29e81999671591f366788f5ae9ffb132789ebc71f7c0efdaecd38575a5ab6a 
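The --discovery-token-ca-cert-hash in the join command above is a SHA-256 digest over the DER-encoded public key of the cluster CA. It can be recomputed on the control plane to verify a join command out of band; this is the standard openssl pipeline from the Kubernetes docs, pointed at the CA path minikube uses:

	# Recompute the discovery hash from the cluster CA certificate.
	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* /sha256:/'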
	I1027 19:43:33.260673  631152 cni.go:84] Creating CNI manager for "calico"
	I1027 19:43:33.262573  631152 out.go:179] * Configuring Calico (Container Networking Interface) ...
	I1027 19:43:29.338563  630779 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001046759s
	I1027 19:43:29.344620  630779 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1027 19:43:29.344779  630779 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1027 19:43:29.344916  630779 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1027 19:43:29.345049  630779 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1027 19:43:30.782547  630779 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.437855772s
	I1027 19:43:32.612043  630779 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 3.267336098s
	I1027 19:43:33.849120  630779 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.5043915s
	I1027 19:43:33.877834  630779 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1027 19:43:33.892099  630779 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1027 19:43:33.907259  630779 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1027 19:43:33.907522  630779 kubeadm.go:318] [mark-control-plane] Marking the node kindnet-387383 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1027 19:43:33.919683  630779 kubeadm.go:318] [bootstrap-token] Using token: 6mpkpu.9lbdr952x1s4u6wz
	W1027 19:43:29.178967  622136 node_ready.go:57] node "auto-387383" has "Ready":"False" status (will retry)
	W1027 19:43:31.678578  622136 node_ready.go:57] node "auto-387383" has "Ready":"False" status (will retry)
	W1027 19:43:33.678996  622136 node_ready.go:57] node "auto-387383" has "Ready":"False" status (will retry)
	I1027 19:43:33.265015  631152 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1027 19:43:33.265044  631152 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (539470 bytes)
	I1027 19:43:33.283930  631152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
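	With the Calico manifest applied above, CNI readiness can be confirmed by watching the calico-node pods; a sketch, assuming the k8s-app=calico-node label used by the upstream Calico manifest:
	
	  sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	    get pods -n kube-system -l k8s-app=calico-node -w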
	I1027 19:43:33.922357  630779 out.go:252]   - Configuring RBAC rules ...
	I1027 19:43:33.922504  630779 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1027 19:43:33.928552  630779 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1027 19:43:33.936693  630779 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1027 19:43:33.940515  630779 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1027 19:43:33.944705  630779 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1027 19:43:33.949615  630779 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1027 19:43:34.257746  630779 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1027 19:43:34.679260  630779 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1027 19:43:35.260820  630779 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1027 19:43:35.261956  630779 kubeadm.go:318] 
	I1027 19:43:35.262053  630779 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1027 19:43:35.262069  630779 kubeadm.go:318] 
	I1027 19:43:35.262175  630779 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1027 19:43:35.262190  630779 kubeadm.go:318] 
	I1027 19:43:35.262218  630779 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1027 19:43:35.262295  630779 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1027 19:43:35.262356  630779 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1027 19:43:35.262365  630779 kubeadm.go:318] 
	I1027 19:43:35.262463  630779 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1027 19:43:35.262504  630779 kubeadm.go:318] 
	I1027 19:43:35.262561  630779 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1027 19:43:35.262569  630779 kubeadm.go:318] 
	I1027 19:43:35.262612  630779 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1027 19:43:35.262686  630779 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1027 19:43:35.262763  630779 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1027 19:43:35.262774  630779 kubeadm.go:318] 
	I1027 19:43:35.262891  630779 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1027 19:43:35.263002  630779 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1027 19:43:35.263009  630779 kubeadm.go:318] 
	I1027 19:43:35.263086  630779 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 6mpkpu.9lbdr952x1s4u6wz \
	I1027 19:43:35.263221  630779 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:ab29e81999671591f366788f5ae9ffb132789ebc71f7c0efdaecd38575a5ab6a \
	I1027 19:43:35.263259  630779 kubeadm.go:318] 	--control-plane 
	I1027 19:43:35.263269  630779 kubeadm.go:318] 
	I1027 19:43:35.263397  630779 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1027 19:43:35.263414  630779 kubeadm.go:318] 
	I1027 19:43:35.263524  630779 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 6mpkpu.9lbdr952x1s4u6wz \
	I1027 19:43:35.263653  630779 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:ab29e81999671591f366788f5ae9ffb132789ebc71f7c0efdaecd38575a5ab6a 
	I1027 19:43:35.266574  630779 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1042-gcp\n", err: exit status 1
	I1027 19:43:35.266689  630779 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1027 19:43:35.266713  630779 cni.go:84] Creating CNI manager for "kindnet"
	I1027 19:43:35.269408  630779 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1027 19:43:34.231872  631152 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1027 19:43:34.231993  631152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:43:34.232043  631152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes calico-387383 minikube.k8s.io/updated_at=2025_10_27T19_43_34_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6a09c31caecd3126cc8337ca95e52380a56f5a0f minikube.k8s.io/name=calico-387383 minikube.k8s.io/primary=true
	I1027 19:43:34.243601  631152 ops.go:34] apiserver oom_adj: -16
	I1027 19:43:34.337620  631152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:43:34.838295  631152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:43:35.338385  631152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:43:35.838015  631152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:43:36.338693  631152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:43:36.838559  631152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:43:37.337916  631152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:43:37.412803  631152 kubeadm.go:1113] duration metric: took 3.180884094s to wait for elevateKubeSystemPrivileges
	I1027 19:43:37.412843  631152 kubeadm.go:402] duration metric: took 13.558737181s to StartCluster
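	The repeated "kubectl get sa default" runs above are a poll: minikube retries at roughly 500ms intervals until the controller-manager has created the default ServiceAccount, which is what the "elevateKubeSystemPrivileges" duration metric covers. An equivalent shell sketch:
	
	  # poll until the default ServiceAccount exists (~500ms interval, as in the log)
	  until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	    sleep 0.5
	  done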
	I1027 19:43:37.412866  631152 settings.go:142] acquiring lock: {Name:mk8304c2106bf310642e0949fc0266ccb50f2f95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:43:37.412945  631152 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21801-352833/kubeconfig
	I1027 19:43:37.414632  631152 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-352833/kubeconfig: {Name:mk24cbe512a6907c874f3fb7a85390a8f9fd2b31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:43:37.414933  631152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1027 19:43:37.414944  631152 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 19:43:37.415013  631152 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1027 19:43:37.415175  631152 addons.go:69] Setting storage-provisioner=true in profile "calico-387383"
	I1027 19:43:37.415193  631152 addons.go:238] Setting addon storage-provisioner=true in "calico-387383"
	I1027 19:43:37.415195  631152 config.go:182] Loaded profile config "calico-387383": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:43:37.415231  631152 host.go:66] Checking if "calico-387383" exists ...
	I1027 19:43:37.415233  631152 addons.go:69] Setting default-storageclass=true in profile "calico-387383"
	I1027 19:43:37.415293  631152 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-387383"
	I1027 19:43:37.415695  631152 cli_runner.go:164] Run: docker container inspect calico-387383 --format={{.State.Status}}
	I1027 19:43:37.415786  631152 cli_runner.go:164] Run: docker container inspect calico-387383 --format={{.State.Status}}
	I1027 19:43:37.416670  631152 out.go:179] * Verifying Kubernetes components...
	I1027 19:43:37.418151  631152 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 19:43:37.440184  631152 addons.go:238] Setting addon default-storageclass=true in "calico-387383"
	I1027 19:43:37.440240  631152 host.go:66] Checking if "calico-387383" exists ...
	I1027 19:43:37.440664  631152 cli_runner.go:164] Run: docker container inspect calico-387383 --format={{.State.Status}}
	I1027 19:43:37.442649  631152 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 19:43:37.443825  631152 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 19:43:37.443850  631152 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1027 19:43:37.443918  631152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-387383
	I1027 19:43:37.478607  631152 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1027 19:43:37.478635  631152 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1027 19:43:37.478701  631152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-387383
	I1027 19:43:37.479090  631152 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33485 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/calico-387383/id_rsa Username:docker}
	I1027 19:43:37.508427  631152 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33485 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/calico-387383/id_rsa Username:docker}
	I1027 19:43:37.520397  631152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1027 19:43:37.571734  631152 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 19:43:37.608835  631152 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 19:43:37.627515  631152 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1027 19:43:37.718633  631152 start.go:976] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
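	The sed pipeline above rewrites the CoreDNS Corefile before replacing the ConfigMap: it adds a log directive ahead of errors and injects a hosts stanza ahead of the existing "forward . /etc/resolv.conf" line. Reconstructed from the sed expressions, the injected block is:
	
	  hosts {
	     192.168.103.1 host.minikube.internal
	     fallthrough
	  }
	
	so in-cluster lookups of host.minikube.internal resolve to the host-side gateway while all other names fall through to the upstream resolver.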
	I1027 19:43:37.719872  631152 node_ready.go:35] waiting up to 15m0s for node "calico-387383" to be "Ready" ...
	I1027 19:43:38.035661  631152 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
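	Addons can also be toggled per profile after start-up; a sketch using the same binary under test (hypothetical invocation, not part of this run):
	
	  out/minikube-linux-amd64 -p calico-387383 addons list
	  out/minikube-linux-amd64 -p calico-387383 addons enable metrics-server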
	I1027 19:43:35.271209  630779 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1027 19:43:35.276008  630779 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1027 19:43:35.276033  630779 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1027 19:43:35.293423  630779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1027 19:43:35.550565  630779 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1027 19:43:35.550676  630779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:43:35.550694  630779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kindnet-387383 minikube.k8s.io/updated_at=2025_10_27T19_43_35_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6a09c31caecd3126cc8337ca95e52380a56f5a0f minikube.k8s.io/name=kindnet-387383 minikube.k8s.io/primary=true
	I1027 19:43:35.563929  630779 ops.go:34] apiserver oom_adj: -16
	I1027 19:43:35.672614  630779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:43:36.173261  630779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:43:36.673691  630779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:43:37.172754  630779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:43:37.673367  630779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:43:38.172750  630779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	
	
	==> CRI-O <==
	Oct 27 19:43:07 default-k8s-diff-port-813397 crio[567]: time="2025-10-27T19:43:07.971127992Z" level=info msg="Started container" PID=1729 containerID=018a51229d9e57577826b454b250179e5170284fbbee8eaf8f73bb7ff0106c40 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fdv5r/dashboard-metrics-scraper id=bd0921e0-7182-4c78-b263-8eca15ad155a name=/runtime.v1.RuntimeService/StartContainer sandboxID=514a13049f5ff5ffa0892d6612cd174e20cc3678e3f1016c0cc5d59ac1dc3286
	Oct 27 19:43:08 default-k8s-diff-port-813397 crio[567]: time="2025-10-27T19:43:08.949600664Z" level=info msg="Removing container: 52d48213a1788841f147b8597cc6595fef278936c1b92a83552ce357ab8ee3f4" id=fc3e1aed-1b7e-4175-b4e2-c556ccfc43bb name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 27 19:43:08 default-k8s-diff-port-813397 crio[567]: time="2025-10-27T19:43:08.966218534Z" level=info msg="Removed container 52d48213a1788841f147b8597cc6595fef278936c1b92a83552ce357ab8ee3f4: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fdv5r/dashboard-metrics-scraper" id=fc3e1aed-1b7e-4175-b4e2-c556ccfc43bb name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 27 19:43:13 default-k8s-diff-port-813397 crio[567]: time="2025-10-27T19:43:13.965775267Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=672e31d6-9d09-4493-8e6f-f904eac4e109 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 19:43:14 default-k8s-diff-port-813397 crio[567]: time="2025-10-27T19:43:14.065207482Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=9fbb53f1-8021-4830-b369-8ba4ffaa64f5 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 19:43:14 default-k8s-diff-port-813397 crio[567]: time="2025-10-27T19:43:14.088089324Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=e7a96667-6a93-4975-b053-823e81725da0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 19:43:14 default-k8s-diff-port-813397 crio[567]: time="2025-10-27T19:43:14.088380622Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:43:14 default-k8s-diff-port-813397 crio[567]: time="2025-10-27T19:43:14.157554358Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:43:14 default-k8s-diff-port-813397 crio[567]: time="2025-10-27T19:43:14.157765088Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/2a50dce02cb9017b495d8fb58b39702bffedcddee2948ee821370d657b7f7f40/merged/etc/passwd: no such file or directory"
	Oct 27 19:43:14 default-k8s-diff-port-813397 crio[567]: time="2025-10-27T19:43:14.15778944Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/2a50dce02cb9017b495d8fb58b39702bffedcddee2948ee821370d657b7f7f40/merged/etc/group: no such file or directory"
	Oct 27 19:43:14 default-k8s-diff-port-813397 crio[567]: time="2025-10-27T19:43:14.1580072Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:43:14 default-k8s-diff-port-813397 crio[567]: time="2025-10-27T19:43:14.430361046Z" level=info msg="Created container aac23a7766ba54465e8372369b0736fdbf5d9242a8ef9f2ac26eedc0aad943f4: kube-system/storage-provisioner/storage-provisioner" id=e7a96667-6a93-4975-b053-823e81725da0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 19:43:14 default-k8s-diff-port-813397 crio[567]: time="2025-10-27T19:43:14.431113694Z" level=info msg="Starting container: aac23a7766ba54465e8372369b0736fdbf5d9242a8ef9f2ac26eedc0aad943f4" id=6024198a-6a69-4735-97fe-c12fa2fa176b name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 19:43:14 default-k8s-diff-port-813397 crio[567]: time="2025-10-27T19:43:14.433213273Z" level=info msg="Started container" PID=1743 containerID=aac23a7766ba54465e8372369b0736fdbf5d9242a8ef9f2ac26eedc0aad943f4 description=kube-system/storage-provisioner/storage-provisioner id=6024198a-6a69-4735-97fe-c12fa2fa176b name=/runtime.v1.RuntimeService/StartContainer sandboxID=a476b9e052022bfa9964afb950b20b1947301431f5ac7c469a956e9b9ed56237
	Oct 27 19:43:28 default-k8s-diff-port-813397 crio[567]: time="2025-10-27T19:43:28.819968208Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=e5245ccf-255b-4f6d-a1e5-58b535da5ff3 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 19:43:28 default-k8s-diff-port-813397 crio[567]: time="2025-10-27T19:43:28.821293747Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=4bc8181b-6fad-462d-a5af-3dcfba7b3c2a name=/runtime.v1.ImageService/ImageStatus
	Oct 27 19:43:28 default-k8s-diff-port-813397 crio[567]: time="2025-10-27T19:43:28.822948039Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fdv5r/dashboard-metrics-scraper" id=1d0be550-1a41-4615-9f01-4b2747919133 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 19:43:28 default-k8s-diff-port-813397 crio[567]: time="2025-10-27T19:43:28.823157388Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:43:28 default-k8s-diff-port-813397 crio[567]: time="2025-10-27T19:43:28.830848877Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:43:28 default-k8s-diff-port-813397 crio[567]: time="2025-10-27T19:43:28.83161098Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:43:28 default-k8s-diff-port-813397 crio[567]: time="2025-10-27T19:43:28.862448131Z" level=info msg="Created container 73ec8a85e99a5706793ba06e7c17f5889883af7a6fba00f94e2367ec548fda2f: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fdv5r/dashboard-metrics-scraper" id=1d0be550-1a41-4615-9f01-4b2747919133 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 19:43:28 default-k8s-diff-port-813397 crio[567]: time="2025-10-27T19:43:28.866852252Z" level=info msg="Starting container: 73ec8a85e99a5706793ba06e7c17f5889883af7a6fba00f94e2367ec548fda2f" id=6322f436-089a-4cea-9239-6e42e9d8247c name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 19:43:28 default-k8s-diff-port-813397 crio[567]: time="2025-10-27T19:43:28.869444761Z" level=info msg="Started container" PID=1779 containerID=73ec8a85e99a5706793ba06e7c17f5889883af7a6fba00f94e2367ec548fda2f description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fdv5r/dashboard-metrics-scraper id=6322f436-089a-4cea-9239-6e42e9d8247c name=/runtime.v1.RuntimeService/StartContainer sandboxID=514a13049f5ff5ffa0892d6612cd174e20cc3678e3f1016c0cc5d59ac1dc3286
	Oct 27 19:43:29 default-k8s-diff-port-813397 crio[567]: time="2025-10-27T19:43:29.012695253Z" level=info msg="Removing container: 018a51229d9e57577826b454b250179e5170284fbbee8eaf8f73bb7ff0106c40" id=d598aa05-9bf5-4df1-8096-58eb15ad82ca name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 27 19:43:29 default-k8s-diff-port-813397 crio[567]: time="2025-10-27T19:43:29.027709889Z" level=info msg="Removed container 018a51229d9e57577826b454b250179e5170284fbbee8eaf8f73bb7ff0106c40: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fdv5r/dashboard-metrics-scraper" id=d598aa05-9bf5-4df1-8096-58eb15ad82ca name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	73ec8a85e99a5       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           10 seconds ago      Exited              dashboard-metrics-scraper   3                   514a13049f5ff       dashboard-metrics-scraper-6ffb444bf9-fdv5r             kubernetes-dashboard
	aac23a7766ba5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           25 seconds ago      Running             storage-provisioner         1                   a476b9e052022       storage-provisioner                                    kube-system
	e3cb093a1aa0f       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   46 seconds ago      Running             kubernetes-dashboard        0                   cdf3011352a38       kubernetes-dashboard-855c9754f9-gllsf                  kubernetes-dashboard
	6352f76b57f5e       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           56 seconds ago      Running             coredns                     0                   32aa4814e3ccc       coredns-66bc5c9577-d2trp                               kube-system
	3e05d7811de2a       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           56 seconds ago      Running             busybox                     1                   46ac295a5c29c       busybox                                                default
	7c615af71a132       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           56 seconds ago      Running             kindnet-cni                 0                   3f74dbff6e12b       kindnet-hhddd                                          kube-system
	a99b69df12664       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           56 seconds ago      Exited              storage-provisioner         0                   a476b9e052022       storage-provisioner                                    kube-system
	2ad23fa6ba066       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           56 seconds ago      Running             kube-proxy                  0                   a536126784f99       kube-proxy-bldc8                                       kube-system
	d6d42a7474478       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           59 seconds ago      Running             kube-controller-manager     0                   4a489daa30ff0       kube-controller-manager-default-k8s-diff-port-813397   kube-system
	0ef2559af1f10       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           59 seconds ago      Running             etcd                        0                   44420c1add8b1       etcd-default-k8s-diff-port-813397                      kube-system
	9780797653aab       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           59 seconds ago      Running             kube-scheduler              0                   1e79fca034135       kube-scheduler-default-k8s-diff-port-813397            kube-system
	71bc91522e0a3       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           59 seconds ago      Running             kube-apiserver              0                   eca65d870d4f4       kube-apiserver-default-k8s-diff-port-813397            kube-system
	
	
	==> coredns [6352f76b57f5e0e0deff0e7dcd3aff94c185f37edfe63b6b2f233017bcc7468d] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:38418 - 38297 "HINFO IN 922907106206104028.5101411383343467804. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.12706401s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
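	The dial timeouts above indicate CoreDNS could not reach the kube-apiserver via the 10.96.0.1 cluster IP during startup, typically a race with kube-proxy programming the Service rules on a fresh node. One way to check that the kubernetes Service is wired up; a sketch:
	
	  kubectl get svc,endpointslices -n default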
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-813397
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-813397
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a09c31caecd3126cc8337ca95e52380a56f5a0f
	                    minikube.k8s.io/name=default-k8s-diff-port-813397
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_27T19_41_45_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Oct 2025 19:41:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-813397
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Oct 2025 19:43:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Oct 2025 19:43:33 +0000   Mon, 27 Oct 2025 19:41:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Oct 2025 19:43:33 +0000   Mon, 27 Oct 2025 19:41:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Oct 2025 19:43:33 +0000   Mon, 27 Oct 2025 19:41:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Oct 2025 19:43:33 +0000   Mon, 27 Oct 2025 19:42:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-813397
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                7fbc9f19-9330-4688-94ac-b272ce8c2683
	  Boot ID:                    811bd29c-e64e-4acc-9427-bab1f7caed93
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 coredns-66bc5c9577-d2trp                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     110s
	  kube-system                 etcd-default-k8s-diff-port-813397                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         115s
	  kube-system                 kindnet-hhddd                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      110s
	  kube-system                 kube-apiserver-default-k8s-diff-port-813397             250m (3%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-813397    200m (2%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-proxy-bldc8                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-scheduler-default-k8s-diff-port-813397             100m (1%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-fdv5r              0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-gllsf                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 109s               kube-proxy       
	  Normal  Starting                 56s                kube-proxy       
	  Normal  NodeHasSufficientMemory  115s               kubelet          Node default-k8s-diff-port-813397 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    115s               kubelet          Node default-k8s-diff-port-813397 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     115s               kubelet          Node default-k8s-diff-port-813397 status is now: NodeHasSufficientPID
	  Normal  Starting                 115s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           111s               node-controller  Node default-k8s-diff-port-813397 event: Registered Node default-k8s-diff-port-813397 in Controller
	  Normal  NodeReady                99s                kubelet          Node default-k8s-diff-port-813397 status is now: NodeReady
	  Normal  Starting                 60s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  60s (x8 over 60s)  kubelet          Node default-k8s-diff-port-813397 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    60s (x8 over 60s)  kubelet          Node default-k8s-diff-port-813397 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     60s (x8 over 60s)  kubelet          Node default-k8s-diff-port-813397 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           54s                node-controller  Node default-k8s-diff-port-813397 event: Registered Node default-k8s-diff-port-813397 in Controller
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5a 23 52 43 9a ba 08 06
	[  +0.000398] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 12 50 95 0e df 53 08 06
	[Oct27 18:59] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +1.017295] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +1.023893] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +1.023882] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +1.023891] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +1.023851] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +2.047849] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +4.031592] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +8.319143] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[ +16.382183] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[Oct27 19:00] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	
	
	==> etcd [0ef2559af1f1081ff5b055e5ba9d447a5c678b0a1ce12c6cb5f29cf71d5078e4] <==
	{"level":"warn","ts":"2025-10-27T19:42:41.628353Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:42:41.636064Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:42:41.645027Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:42:41.660535Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:42:41.667589Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:42:41.675024Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:42:41.681989Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:42:41.688733Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:42:41.696831Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:42:41.706111Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54218","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:42:41.714024Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:42:41.721793Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:42:41.742180Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:42:41.750566Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:42:41.757870Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54324","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:42:41.804655Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54360","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-27T19:42:46.979779Z","caller":"traceutil/trace.go:172","msg":"trace[2055433803] transaction","detail":"{read_only:false; response_revision:561; number_of_response:1; }","duration":"116.16129ms","start":"2025-10-27T19:42:46.863598Z","end":"2025-10-27T19:42:46.979759Z","steps":["trace[2055433803] 'process raft request'  (duration: 116.054987ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T19:43:12.944243Z","caller":"traceutil/trace.go:172","msg":"trace[743172718] linearizableReadLoop","detail":"{readStateIndex:641; appliedIndex:641; }","duration":"204.434432ms","start":"2025-10-27T19:43:12.739784Z","end":"2025-10-27T19:43:12.944219Z","steps":["trace[743172718] 'read index received'  (duration: 204.425267ms)","trace[743172718] 'applied index is now lower than readState.Index'  (duration: 7.825µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-27T19:43:12.944548Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"204.736879ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-27T19:43:12.944632Z","caller":"traceutil/trace.go:172","msg":"trace[2117002860] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:609; }","duration":"204.844651ms","start":"2025-10-27T19:43:12.739777Z","end":"2025-10-27T19:43:12.944621Z","steps":["trace[2117002860] 'agreement among raft nodes before linearized reading'  (duration: 204.699596ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T19:43:12.945300Z","caller":"traceutil/trace.go:172","msg":"trace[959555847] transaction","detail":"{read_only:false; response_revision:610; number_of_response:1; }","duration":"208.438594ms","start":"2025-10-27T19:43:12.736844Z","end":"2025-10-27T19:43:12.945283Z","steps":["trace[959555847] 'process raft request'  (duration: 208.10561ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-27T19:43:13.457977Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"190.948687ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722596663691127125 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-813397\" mod_revision:602 > success:<request_put:<key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-813397\" value_size:531 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-813397\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-10-27T19:43:13.458183Z","caller":"traceutil/trace.go:172","msg":"trace[248761841] transaction","detail":"{read_only:false; response_revision:612; number_of_response:1; }","duration":"260.792104ms","start":"2025-10-27T19:43:13.197373Z","end":"2025-10-27T19:43:13.458165Z","steps":["trace[248761841] 'process raft request'  (duration: 68.919787ms)","trace[248761841] 'compare'  (duration: 190.811584ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-27T19:43:14.086917Z","caller":"traceutil/trace.go:172","msg":"trace[311934223] transaction","detail":"{read_only:false; response_revision:614; number_of_response:1; }","duration":"117.662748ms","start":"2025-10-27T19:43:13.969232Z","end":"2025-10-27T19:43:14.086895Z","steps":["trace[311934223] 'process raft request'  (duration: 117.523806ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T19:43:14.130074Z","caller":"traceutil/trace.go:172","msg":"trace[408081185] transaction","detail":"{read_only:false; response_revision:615; number_of_response:1; }","duration":"157.191937ms","start":"2025-10-27T19:43:13.972865Z","end":"2025-10-27T19:43:14.130057Z","steps":["trace[408081185] 'process raft request'  (duration: 157.024284ms)"],"step_count":1}
	
	
	==> kernel <==
	 19:43:39 up  2:26,  0 user,  load average: 5.95, 4.26, 2.61
	Linux default-k8s-diff-port-813397 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7c615af71a1328ed761f08f1b576963f0b4af669a2d38d4c04dcbc67befffac1] <==
	I1027 19:42:43.454415       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1027 19:42:43.454692       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1027 19:42:43.454882       1 main.go:148] setting mtu 1500 for CNI 
	I1027 19:42:43.454902       1 main.go:178] kindnetd IP family: "ipv4"
	I1027 19:42:43.454928       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-27T19:42:43Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1027 19:42:43.658211       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1027 19:42:43.658305       1 controller.go:381] "Waiting for informer caches to sync"
	I1027 19:42:43.658318       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1027 19:42:43.659560       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1027 19:42:44.052672       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1027 19:42:44.052708       1 metrics.go:72] Registering metrics
	I1027 19:42:44.052803       1 controller.go:711] "Syncing nftables rules"
	I1027 19:42:53.658242       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1027 19:42:53.658307       1 main.go:301] handling current node
	I1027 19:43:03.658640       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1027 19:43:03.658694       1 main.go:301] handling current node
	I1027 19:43:13.658383       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1027 19:43:13.658444       1 main.go:301] handling current node
	I1027 19:43:23.658609       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1027 19:43:23.658648       1 main.go:301] handling current node
	I1027 19:43:33.658936       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1027 19:43:33.658977       1 main.go:301] handling current node
	
	
	==> kube-apiserver [71bc91522e0a38092dcf74ebe27051d01aa77c65b02d1f845740c5a57c74c29b] <==
	I1027 19:42:42.312399       1 aggregator.go:171] initial CRD sync complete...
	I1027 19:42:42.312410       1 autoregister_controller.go:144] Starting autoregister controller
	I1027 19:42:42.312417       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1027 19:42:42.312424       1 cache.go:39] Caches are synced for autoregister controller
	I1027 19:42:42.311374       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1027 19:42:42.316185       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1027 19:42:42.319374       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1027 19:42:42.326779       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1027 19:42:42.326864       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1027 19:42:42.327958       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1027 19:42:42.327987       1 policy_source.go:240] refreshing policies
	I1027 19:42:42.357761       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1027 19:42:42.607753       1 controller.go:667] quota admission added evaluator for: namespaces
	I1027 19:42:42.650120       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1027 19:42:42.677574       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1027 19:42:42.685127       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1027 19:42:42.693093       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1027 19:42:42.738420       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.13.176"}
	I1027 19:42:42.753877       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.67.124"}
	I1027 19:42:43.215644       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1027 19:42:46.115658       1 controller.go:667] quota admission added evaluator for: endpoints
	I1027 19:42:46.115711       1 controller.go:667] quota admission added evaluator for: endpoints
	I1027 19:42:46.167387       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1027 19:42:46.215349       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1027 19:42:46.215349       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [d6d42a747447887cf7cfddbb910c2d92aff06ed6741847fd2f5efa19ba0e6533] <==
	I1027 19:42:45.623985       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1027 19:42:45.626893       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1027 19:42:45.631230       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1027 19:42:45.632462       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1027 19:42:45.635705       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1027 19:42:45.661294       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1027 19:42:45.661323       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1027 19:42:45.661331       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1027 19:42:45.661382       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1027 19:42:45.661385       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1027 19:42:45.661448       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1027 19:42:45.661727       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1027 19:42:45.662463       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1027 19:42:45.668268       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1027 19:42:45.668346       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 19:42:45.669424       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1027 19:42:45.670641       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1027 19:42:45.671894       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1027 19:42:45.676290       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1027 19:42:45.679006       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1027 19:42:45.681465       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1027 19:42:45.683844       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1027 19:42:45.688212       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1027 19:42:45.688308       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 19:42:45.691726       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	
	
	==> kube-proxy [2ad23fa6ba06688254490ad382551b5850d3c01b455056ac3570cd76e67f3b13] <==
	I1027 19:42:43.242591       1 server_linux.go:53] "Using iptables proxy"
	I1027 19:42:43.355459       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1027 19:42:43.456422       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1027 19:42:43.456469       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1027 19:42:43.456569       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1027 19:42:43.474954       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1027 19:42:43.475025       1 server_linux.go:132] "Using iptables Proxier"
	I1027 19:42:43.480222       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1027 19:42:43.480642       1 server.go:527] "Version info" version="v1.34.1"
	I1027 19:42:43.480671       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 19:42:43.482162       1 config.go:200] "Starting service config controller"
	I1027 19:42:43.482189       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1027 19:42:43.482225       1 config.go:106] "Starting endpoint slice config controller"
	I1027 19:42:43.482233       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1027 19:42:43.482265       1 config.go:403] "Starting serviceCIDR config controller"
	I1027 19:42:43.482290       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1027 19:42:43.482371       1 config.go:309] "Starting node config controller"
	I1027 19:42:43.482391       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1027 19:42:43.582385       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1027 19:42:43.582387       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1027 19:42:43.582408       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1027 19:42:43.582498       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [9780797653aab1b99e5b8a7975532cff7b3a72af97330b8012e4e50b4dadbfde] <==
	I1027 19:42:42.251472       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1027 19:42:42.251637       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 19:42:42.254629       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 19:42:42.254678       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 19:42:42.255059       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1027 19:42:42.255225       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1027 19:42:42.258935       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1027 19:42:42.260891       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1027 19:42:42.261019       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1027 19:42:42.261088       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1027 19:42:42.265851       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1027 19:42:42.266233       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1027 19:42:42.266347       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1027 19:42:42.266417       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1027 19:42:42.266478       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1027 19:42:42.266556       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1027 19:42:42.267328       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1027 19:42:42.268322       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1027 19:42:42.271243       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1027 19:42:42.271517       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1027 19:42:42.271639       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1027 19:42:42.272029       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1027 19:42:42.272186       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1027 19:42:42.272356       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	I1027 19:42:43.455496       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 27 19:42:46 default-k8s-diff-port-813397 kubelet[726]: I1027 19:42:46.379150     726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/460f77f5-a4eb-4992-a7b0-1413ca2d33c1-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-gllsf\" (UID: \"460f77f5-a4eb-4992-a7b0-1413ca2d33c1\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-gllsf"
	Oct 27 19:42:46 default-k8s-diff-port-813397 kubelet[726]: I1027 19:42:46.379252     726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccnjg\" (UniqueName: \"kubernetes.io/projected/460f77f5-a4eb-4992-a7b0-1413ca2d33c1-kube-api-access-ccnjg\") pod \"kubernetes-dashboard-855c9754f9-gllsf\" (UID: \"460f77f5-a4eb-4992-a7b0-1413ca2d33c1\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-gllsf"
	Oct 27 19:42:52 default-k8s-diff-port-813397 kubelet[726]: I1027 19:42:52.360500     726 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 27 19:42:55 default-k8s-diff-port-813397 kubelet[726]: I1027 19:42:55.760841     726 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-gllsf" podStartSLOduration=3.678249895 podStartE2EDuration="9.760813538s" podCreationTimestamp="2025-10-27 19:42:46 +0000 UTC" firstStartedPulling="2025-10-27 19:42:46.861109268 +0000 UTC m=+7.141001258" lastFinishedPulling="2025-10-27 19:42:52.943672906 +0000 UTC m=+13.223564901" observedRunningTime="2025-10-27 19:42:53.901459229 +0000 UTC m=+14.181351232" watchObservedRunningTime="2025-10-27 19:42:55.760813538 +0000 UTC m=+16.040705541"
	Oct 27 19:42:56 default-k8s-diff-port-813397 kubelet[726]: I1027 19:42:56.902303     726 scope.go:117] "RemoveContainer" containerID="8c2b6060feb1135b54f6456af74c20816936e9cf5ea1ffe21c88e1f46d1af198"
	Oct 27 19:42:57 default-k8s-diff-port-813397 kubelet[726]: I1027 19:42:57.907071     726 scope.go:117] "RemoveContainer" containerID="8c2b6060feb1135b54f6456af74c20816936e9cf5ea1ffe21c88e1f46d1af198"
	Oct 27 19:42:57 default-k8s-diff-port-813397 kubelet[726]: I1027 19:42:57.907235     726 scope.go:117] "RemoveContainer" containerID="52d48213a1788841f147b8597cc6595fef278936c1b92a83552ce357ab8ee3f4"
	Oct 27 19:42:57 default-k8s-diff-port-813397 kubelet[726]: E1027 19:42:57.907416     726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fdv5r_kubernetes-dashboard(48945846-3a22-4b08-ac60-4568409f1c83)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fdv5r" podUID="48945846-3a22-4b08-ac60-4568409f1c83"
	Oct 27 19:42:58 default-k8s-diff-port-813397 kubelet[726]: I1027 19:42:58.912885     726 scope.go:117] "RemoveContainer" containerID="52d48213a1788841f147b8597cc6595fef278936c1b92a83552ce357ab8ee3f4"
	Oct 27 19:42:58 default-k8s-diff-port-813397 kubelet[726]: E1027 19:42:58.913084     726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fdv5r_kubernetes-dashboard(48945846-3a22-4b08-ac60-4568409f1c83)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fdv5r" podUID="48945846-3a22-4b08-ac60-4568409f1c83"
	Oct 27 19:43:07 default-k8s-diff-port-813397 kubelet[726]: I1027 19:43:07.902554     726 scope.go:117] "RemoveContainer" containerID="52d48213a1788841f147b8597cc6595fef278936c1b92a83552ce357ab8ee3f4"
	Oct 27 19:43:08 default-k8s-diff-port-813397 kubelet[726]: I1027 19:43:08.945698     726 scope.go:117] "RemoveContainer" containerID="52d48213a1788841f147b8597cc6595fef278936c1b92a83552ce357ab8ee3f4"
	Oct 27 19:43:08 default-k8s-diff-port-813397 kubelet[726]: I1027 19:43:08.945985     726 scope.go:117] "RemoveContainer" containerID="018a51229d9e57577826b454b250179e5170284fbbee8eaf8f73bb7ff0106c40"
	Oct 27 19:43:08 default-k8s-diff-port-813397 kubelet[726]: E1027 19:43:08.946205     726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fdv5r_kubernetes-dashboard(48945846-3a22-4b08-ac60-4568409f1c83)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fdv5r" podUID="48945846-3a22-4b08-ac60-4568409f1c83"
	Oct 27 19:43:13 default-k8s-diff-port-813397 kubelet[726]: I1027 19:43:13.965323     726 scope.go:117] "RemoveContainer" containerID="a99b69df126644d4ba34b740a14a250d74ff8e1c6a80b438411dfe1669fada08"
	Oct 27 19:43:17 default-k8s-diff-port-813397 kubelet[726]: I1027 19:43:17.902047     726 scope.go:117] "RemoveContainer" containerID="018a51229d9e57577826b454b250179e5170284fbbee8eaf8f73bb7ff0106c40"
	Oct 27 19:43:17 default-k8s-diff-port-813397 kubelet[726]: E1027 19:43:17.902354     726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fdv5r_kubernetes-dashboard(48945846-3a22-4b08-ac60-4568409f1c83)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fdv5r" podUID="48945846-3a22-4b08-ac60-4568409f1c83"
	Oct 27 19:43:28 default-k8s-diff-port-813397 kubelet[726]: I1027 19:43:28.819329     726 scope.go:117] "RemoveContainer" containerID="018a51229d9e57577826b454b250179e5170284fbbee8eaf8f73bb7ff0106c40"
	Oct 27 19:43:29 default-k8s-diff-port-813397 kubelet[726]: I1027 19:43:29.011231     726 scope.go:117] "RemoveContainer" containerID="018a51229d9e57577826b454b250179e5170284fbbee8eaf8f73bb7ff0106c40"
	Oct 27 19:43:29 default-k8s-diff-port-813397 kubelet[726]: I1027 19:43:29.011492     726 scope.go:117] "RemoveContainer" containerID="73ec8a85e99a5706793ba06e7c17f5889883af7a6fba00f94e2367ec548fda2f"
	Oct 27 19:43:29 default-k8s-diff-port-813397 kubelet[726]: E1027 19:43:29.011850     726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fdv5r_kubernetes-dashboard(48945846-3a22-4b08-ac60-4568409f1c83)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fdv5r" podUID="48945846-3a22-4b08-ac60-4568409f1c83"
	Oct 27 19:43:36 default-k8s-diff-port-813397 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 27 19:43:36 default-k8s-diff-port-813397 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 27 19:43:36 default-k8s-diff-port-813397 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 27 19:43:36 default-k8s-diff-port-813397 systemd[1]: kubelet.service: Consumed 2.017s CPU time.
	
	
	==> kubernetes-dashboard [e3cb093a1aa0f1c554cd5ee66a4a34809e2ef72e9a8a48c1a6c6e48763472af4] <==
	2025/10/27 19:42:53 Starting overwatch
	2025/10/27 19:42:53 Using namespace: kubernetes-dashboard
	2025/10/27 19:42:53 Using in-cluster config to connect to apiserver
	2025/10/27 19:42:53 Using secret token for csrf signing
	2025/10/27 19:42:53 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/27 19:42:53 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/27 19:42:53 Successful initial request to the apiserver, version: v1.34.1
	2025/10/27 19:42:53 Generating JWE encryption key
	2025/10/27 19:42:53 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/27 19:42:53 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/27 19:42:53 Initializing JWE encryption key from synchronized object
	2025/10/27 19:42:53 Creating in-cluster Sidecar client
	2025/10/27 19:42:53 Serving insecurely on HTTP port: 9090
	2025/10/27 19:42:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/27 19:43:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [a99b69df126644d4ba34b740a14a250d74ff8e1c6a80b438411dfe1669fada08] <==
	I1027 19:42:43.211375       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1027 19:43:13.213601       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [aac23a7766ba54465e8372369b0736fdbf5d9242a8ef9f2ac26eedc0aad943f4] <==
	I1027 19:43:14.444471       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1027 19:43:14.452682       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1027 19:43:14.452721       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1027 19:43:14.454940       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:43:17.909926       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:43:22.171061       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:43:25.769386       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:43:28.824639       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:43:31.848315       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:43:31.854450       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1027 19:43:31.854685       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1027 19:43:31.854906       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-813397_7253d2a6-9e8d-4078-9636-f5a8ce6ed6af!
	I1027 19:43:31.856222       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cbed91f6-01d4-484d-a71d-80aad634d779", APIVersion:"v1", ResourceVersion:"635", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-813397_7253d2a6-9e8d-4078-9636-f5a8ce6ed6af became leader
	W1027 19:43:31.859904       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:43:31.874147       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1027 19:43:31.955726       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-813397_7253d2a6-9e8d-4078-9636-f5a8ce6ed6af!
	W1027 19:43:33.881929       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:43:33.887779       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:43:35.892348       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:43:35.899556       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:43:37.903288       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:43:37.908615       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:43:39.917605       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:43:39.929335       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
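The kubelet section above ends with systemd stopping kubelet.service at 19:43:36, which lines up with the pause under test taking effect. Before that, dashboard-metrics-scraper was already in CrashLoopBackOff with the usual doubling back-off (10s, then 20s, then 40s), and the first storage-provisioner container died on a 30s i/o timeout to the apiserver service VIP (10.96.0.1:443) before its replacement acquired the leader lease. One way to dig further, as a sketch (assumes the profile's kubeconfig context is still reachable; the pod name is copied from the kubelet log above):

	# previous-container logs for the crash-looping scraper
	kubectl --context default-k8s-diff-port-813397 -n kubernetes-dashboard \
	  logs dashboard-metrics-scraper-6ffb444bf9-fdv5r --previous
	# restart counts and current state for the namespace
	kubectl --context default-k8s-diff-port-813397 -n kubernetes-dashboard get pods -o wide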
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-813397 -n default-k8s-diff-port-813397
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-813397 -n default-k8s-diff-port-813397: exit status 2 (439.774407ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
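The tolerated non-zero exit is by design: `minikube status` encodes component state in its exit code rather than only in stdout, so the harness records exit status 2 but accepts it as long as the templated field still prints Running. A sketch for inspecting every status field at once instead of one Go-template field per call (output shape varies by minikube version):

	out/minikube-linux-amd64 status -p default-k8s-diff-port-813397 --output json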
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-813397 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-813397
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-813397:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5e2892d7a5b76c311108e309f0c5e79b46c633c41881cd99a81040580e9d6de8",
	        "Created": "2025-10-27T19:41:28.530867062Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 616539,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-27T19:42:33.304221255Z",
	            "FinishedAt": "2025-10-27T19:42:32.338526273Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/5e2892d7a5b76c311108e309f0c5e79b46c633c41881cd99a81040580e9d6de8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5e2892d7a5b76c311108e309f0c5e79b46c633c41881cd99a81040580e9d6de8/hostname",
	        "HostsPath": "/var/lib/docker/containers/5e2892d7a5b76c311108e309f0c5e79b46c633c41881cd99a81040580e9d6de8/hosts",
	        "LogPath": "/var/lib/docker/containers/5e2892d7a5b76c311108e309f0c5e79b46c633c41881cd99a81040580e9d6de8/5e2892d7a5b76c311108e309f0c5e79b46c633c41881cd99a81040580e9d6de8-json.log",
	        "Name": "/default-k8s-diff-port-813397",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-813397:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-813397",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5e2892d7a5b76c311108e309f0c5e79b46c633c41881cd99a81040580e9d6de8",
	                "LowerDir": "/var/lib/docker/overlay2/9c29b2ca181e37783386969900349b6f8ee825583f284e5f7ca2046e8e79ccce-init/diff:/var/lib/docker/overlay2/71b61ec94610a35f2d924dec358052d4c154c36b3fe219802f60246ca2dc7f45/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9c29b2ca181e37783386969900349b6f8ee825583f284e5f7ca2046e8e79ccce/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9c29b2ca181e37783386969900349b6f8ee825583f284e5f7ca2046e8e79ccce/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9c29b2ca181e37783386969900349b6f8ee825583f284e5f7ca2046e8e79ccce/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-813397",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-813397/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-813397",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-813397",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-813397",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "567b0cbd081b2e0d2b2d47ab8f135996ad55d4b1699c1507ee06fc68e4766c6d",
	            "SandboxKey": "/var/run/docker/netns/567b0cbd081b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33465"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33466"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33469"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33467"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33468"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-813397": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "0a:9a:ad:0c:6e:6e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "e5c60f1f40aedba9b9761254cb4dc4ea11830e317d7c1ef05baf77a39a5733c7",
	                    "EndpointID": "9830f48004fd4be26a7e2a151d943b78fa6929c3fc664fdeb23e9dca31037e85",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-813397",
	                        "5e2892d7a5b7"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
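Note that the inspect dump reports `"Status": "running"` and `"Paused": false` at the Docker level, which is expected even for a paused profile: `minikube pause` freezes the Kubernetes components inside the node container (and stops kubelet), not the outer kic container itself. To pull just those two fields with a standard Go-template query rather than the full JSON:

	docker inspect -f 'status={{.State.Status}} paused={{.State.Paused}}' default-k8s-diff-port-813397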
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-813397 -n default-k8s-diff-port-813397
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-813397 -n default-k8s-diff-port-813397: exit status 2 (387.18791ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-813397 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-813397 logs -n 25: (1.42904424s)
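The post-mortem below keeps only the last 25 lines per component (`logs -n 25`), which can clip the interesting part of a failure. When reproducing locally, the full log can be captured to a file instead, a sketch using the stock `--file` flag of `minikube logs`:

	out/minikube-linux-amd64 -p default-k8s-diff-port-813397 logs --file /tmp/default-k8s-diff-port-813397.log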
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p newest-cni-677710 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-677710            │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │ 27 Oct 25 19:42 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-813397 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-813397 │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-813397 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-813397 │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │ 27 Oct 25 19:42 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-813397 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-813397 │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │ 27 Oct 25 19:42 UTC │
	│ start   │ -p default-k8s-diff-port-813397 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-813397 │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │ 27 Oct 25 19:43 UTC │
	│ image   │ no-preload-095885 image list --format=json                                                                                                                                                                                                    │ no-preload-095885            │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │ 27 Oct 25 19:42 UTC │
	│ pause   │ -p no-preload-095885 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-095885            │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-677710 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-677710            │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │                     │
	│ stop    │ -p newest-cni-677710 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-677710            │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │ 27 Oct 25 19:42 UTC │
	│ delete  │ -p no-preload-095885                                                                                                                                                                                                                          │ no-preload-095885            │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │ 27 Oct 25 19:42 UTC │
	│ delete  │ -p no-preload-095885                                                                                                                                                                                                                          │ no-preload-095885            │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │ 27 Oct 25 19:42 UTC │
	│ start   │ -p auto-387383 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-387383                  │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │                     │
	│ addons  │ enable dashboard -p newest-cni-677710 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-677710            │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │ 27 Oct 25 19:42 UTC │
	│ start   │ -p newest-cni-677710 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-677710            │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │ 27 Oct 25 19:42 UTC │
	│ start   │ -p kubernetes-upgrade-360986 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-360986    │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │                     │
	│ image   │ newest-cni-677710 image list --format=json                                                                                                                                                                                                    │ newest-cni-677710            │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │ 27 Oct 25 19:42 UTC │
	│ start   │ -p kubernetes-upgrade-360986 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-360986    │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │ 27 Oct 25 19:43 UTC │
	│ pause   │ -p newest-cni-677710 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-677710            │ jenkins │ v1.37.0 │ 27 Oct 25 19:42 UTC │                     │
	│ delete  │ -p kubernetes-upgrade-360986                                                                                                                                                                                                                  │ kubernetes-upgrade-360986    │ jenkins │ v1.37.0 │ 27 Oct 25 19:43 UTC │ 27 Oct 25 19:43 UTC │
	│ delete  │ -p newest-cni-677710                                                                                                                                                                                                                          │ newest-cni-677710            │ jenkins │ v1.37.0 │ 27 Oct 25 19:43 UTC │ 27 Oct 25 19:43 UTC │
	│ start   │ -p kindnet-387383 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio                                                                                                      │ kindnet-387383               │ jenkins │ v1.37.0 │ 27 Oct 25 19:43 UTC │                     │
	│ delete  │ -p newest-cni-677710                                                                                                                                                                                                                          │ newest-cni-677710            │ jenkins │ v1.37.0 │ 27 Oct 25 19:43 UTC │ 27 Oct 25 19:43 UTC │
	│ start   │ -p calico-387383 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio                                                                                                        │ calico-387383                │ jenkins │ v1.37.0 │ 27 Oct 25 19:43 UTC │                     │
	│ image   │ default-k8s-diff-port-813397 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-813397 │ jenkins │ v1.37.0 │ 27 Oct 25 19:43 UTC │ 27 Oct 25 19:43 UTC │
	│ pause   │ -p default-k8s-diff-port-813397 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-813397 │ jenkins │ v1.37.0 │ 27 Oct 25 19:43 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 19:43:09
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 19:43:09.098655  631152 out.go:360] Setting OutFile to fd 1 ...
	I1027 19:43:09.099062  631152 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:43:09.099083  631152 out.go:374] Setting ErrFile to fd 2...
	I1027 19:43:09.099092  631152 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:43:09.099941  631152 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-352833/.minikube/bin
	I1027 19:43:09.100698  631152 out.go:368] Setting JSON to false
	I1027 19:43:09.102496  631152 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":8738,"bootTime":1761585451,"procs":412,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1027 19:43:09.102649  631152 start.go:141] virtualization: kvm guest
	I1027 19:43:09.105368  631152 out.go:179] * [calico-387383] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1027 19:43:09.106947  631152 out.go:179]   - MINIKUBE_LOCATION=21801
	I1027 19:43:09.106951  631152 notify.go:220] Checking for updates...
	I1027 19:43:09.108399  631152 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 19:43:09.109941  631152 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21801-352833/kubeconfig
	I1027 19:43:09.111708  631152 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21801-352833/.minikube
	I1027 19:43:09.113045  631152 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1027 19:43:09.114409  631152 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 19:43:09.116649  631152 config.go:182] Loaded profile config "auto-387383": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:43:09.116917  631152 config.go:182] Loaded profile config "default-k8s-diff-port-813397": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:43:09.117178  631152 config.go:182] Loaded profile config "kindnet-387383": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:43:09.117406  631152 driver.go:421] Setting default libvirt URI to qemu:///system
	I1027 19:43:09.144130  631152 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1027 19:43:09.144274  631152 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 19:43:09.219804  631152 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:57 OomKillDisable:false NGoroutines:69 SystemTime:2025-10-27 19:43:09.208452606 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 19:43:09.219968  631152 docker.go:318] overlay module found
	I1027 19:43:09.224829  631152 out.go:179] * Using the docker driver based on user configuration
	I1027 19:43:09.229063  631152 start.go:305] selected driver: docker
	I1027 19:43:09.229087  631152 start.go:925] validating driver "docker" against <nil>
	I1027 19:43:09.229099  631152 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 19:43:09.229761  631152 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 19:43:09.297708  631152 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:61 OomKillDisable:false NGoroutines:87 SystemTime:2025-10-27 19:43:09.284768991 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 19:43:09.297923  631152 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1027 19:43:09.298177  631152 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1027 19:43:09.300298  631152 out.go:179] * Using Docker driver with root privileges
	I1027 19:43:09.301552  631152 cni.go:84] Creating CNI manager for "calico"
	I1027 19:43:09.301572  631152 start_flags.go:336] Found "Calico" CNI - setting NetworkPlugin=cni
	I1027 19:43:09.301666  631152 start.go:349] cluster config:
	{Name:calico-387383 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-387383 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 19:43:09.303049  631152 out.go:179] * Starting "calico-387383" primary control-plane node in "calico-387383" cluster
	I1027 19:43:09.304322  631152 cache.go:123] Beginning downloading kic base image for docker with crio
	I1027 19:43:09.305655  631152 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1027 19:43:09.307008  631152 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 19:43:09.307040  631152 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1027 19:43:09.307072  631152 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21801-352833/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1027 19:43:09.307087  631152 cache.go:58] Caching tarball of preloaded images
	I1027 19:43:09.307227  631152 preload.go:233] Found /home/jenkins/minikube-integration/21801-352833/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1027 19:43:09.307243  631152 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1027 19:43:09.307348  631152 profile.go:143] Saving config to /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/calico-387383/config.json ...
	I1027 19:43:09.307379  631152 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/calico-387383/config.json: {Name:mk43f6d9384d0a21bf6f72b0ca8f08435e9c8cc2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:43:09.330570  631152 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1027 19:43:09.330592  631152 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1027 19:43:09.330613  631152 cache.go:232] Successfully downloaded all kic artifacts
	I1027 19:43:09.330651  631152 start.go:360] acquireMachinesLock for calico-387383: {Name:mka12b625ec8304f9dc2737a01f90cd5d174feff Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1027 19:43:09.330765  631152 start.go:364] duration metric: took 95.18µs to acquireMachinesLock for "calico-387383"
	I1027 19:43:09.330796  631152 start.go:93] Provisioning new machine with config: &{Name:calico-387383 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-387383 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 19:43:09.330874  631152 start.go:125] createHost starting for "" (driver="docker")
	W1027 19:43:08.920465  616341 pod_ready.go:104] pod "coredns-66bc5c9577-d2trp" is not "Ready", error: <nil>
	W1027 19:43:11.417729  616341 pod_ready.go:104] pod "coredns-66bc5c9577-d2trp" is not "Ready", error: <nil>
	I1027 19:43:08.974203  622136 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1027 19:43:08.974311  622136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:43:08.974368  622136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-387383 minikube.k8s.io/updated_at=2025_10_27T19_43_08_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6a09c31caecd3126cc8337ca95e52380a56f5a0f minikube.k8s.io/name=auto-387383 minikube.k8s.io/primary=true
	I1027 19:43:09.080630  622136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:43:09.080651  622136 ops.go:34] apiserver oom_adj: -16
	I1027 19:43:09.580959  622136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:43:10.081633  622136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:43:10.581241  622136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:43:11.081076  622136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:43:11.580724  622136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:43:12.081063  622136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:43:12.580913  622136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:43:13.081600  622136 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:43:13.188668  622136 kubeadm.go:1113] duration metric: took 4.214426883s to wait for elevateKubeSystemPrivileges
	I1027 19:43:13.188707  622136 kubeadm.go:402] duration metric: took 16.879812759s to StartCluster
	I1027 19:43:13.188734  622136 settings.go:142] acquiring lock: {Name:mk8304c2106bf310642e0949fc0266ccb50f2f95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:43:13.188808  622136 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21801-352833/kubeconfig
	I1027 19:43:13.190211  622136 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-352833/kubeconfig: {Name:mk24cbe512a6907c874f3fb7a85390a8f9fd2b31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:43:13.265755  622136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1027 19:43:13.265780  622136 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1027 19:43:13.265878  622136 addons.go:69] Setting storage-provisioner=true in profile "auto-387383"
	I1027 19:43:13.265904  622136 addons.go:238] Setting addon storage-provisioner=true in "auto-387383"
	I1027 19:43:13.265743  622136 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 19:43:13.265914  622136 addons.go:69] Setting default-storageclass=true in profile "auto-387383"
	I1027 19:43:13.265937  622136 host.go:66] Checking if "auto-387383" exists ...
	I1027 19:43:13.265950  622136 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-387383"
	I1027 19:43:13.266526  622136 cli_runner.go:164] Run: docker container inspect auto-387383 --format={{.State.Status}}
	I1027 19:43:13.266557  622136 cli_runner.go:164] Run: docker container inspect auto-387383 --format={{.State.Status}}
	I1027 19:43:13.266820  622136 config.go:182] Loaded profile config "auto-387383": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:43:13.291397  622136 addons.go:238] Setting addon default-storageclass=true in "auto-387383"
	I1027 19:43:13.291451  622136 host.go:66] Checking if "auto-387383" exists ...
	I1027 19:43:13.291952  622136 cli_runner.go:164] Run: docker container inspect auto-387383 --format={{.State.Status}}
	I1027 19:43:13.315427  622136 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1027 19:43:13.315461  622136 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1027 19:43:13.315573  622136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-387383
	I1027 19:43:13.338266  622136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33475 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/auto-387383/id_rsa Username:docker}
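
The inspect template in the two lines above is how minikube recovers the random loopback port Docker published for the container's SSH port (22/tcp). Run by hand it would look roughly like this, using the container name from this log:

    docker container inspect auto-387383 \
      --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'
    # prints the host port (33475 here) that the ssh client on the previous line dials
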
	I1027 19:43:13.392825  622136 out.go:179] * Verifying Kubernetes components...
	I1027 19:43:13.392855  622136 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 19:43:08.817185  630779 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1027 19:43:08.817491  630779 start.go:159] libmachine.API.Create for "kindnet-387383" (driver="docker")
	I1027 19:43:08.817535  630779 client.go:168] LocalClient.Create starting
	I1027 19:43:08.817647  630779 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca.pem
	I1027 19:43:08.817695  630779 main.go:141] libmachine: Decoding PEM data...
	I1027 19:43:08.817721  630779 main.go:141] libmachine: Parsing certificate...
	I1027 19:43:08.817804  630779 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21801-352833/.minikube/certs/cert.pem
	I1027 19:43:08.817841  630779 main.go:141] libmachine: Decoding PEM data...
	I1027 19:43:08.817856  630779 main.go:141] libmachine: Parsing certificate...
	I1027 19:43:08.818316  630779 cli_runner.go:164] Run: docker network inspect kindnet-387383 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1027 19:43:08.839177  630779 cli_runner.go:211] docker network inspect kindnet-387383 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1027 19:43:08.839282  630779 network_create.go:284] running [docker network inspect kindnet-387383] to gather additional debugging logs...
	I1027 19:43:08.839312  630779 cli_runner.go:164] Run: docker network inspect kindnet-387383
	W1027 19:43:08.862708  630779 cli_runner.go:211] docker network inspect kindnet-387383 returned with exit code 1
	I1027 19:43:08.862748  630779 network_create.go:287] error running [docker network inspect kindnet-387383]: docker network inspect kindnet-387383: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kindnet-387383 not found
	I1027 19:43:08.862766  630779 network_create.go:289] output of [docker network inspect kindnet-387383]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kindnet-387383 not found
	
	** /stderr **
	I1027 19:43:08.862951  630779 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 19:43:08.887850  630779 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-04e197bde7e8 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6a:8c:cb:7c:68:31} reservation:<nil>}
	I1027 19:43:08.888795  630779 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-e37fd2b092bc IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:26:98:e3:c0:d9:8a} reservation:<nil>}
	I1027 19:43:08.889481  630779 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-bbd9ae70d20d IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ea:7f:4f:eb:e4:a1} reservation:<nil>}
	I1027 19:43:08.890205  630779 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-20cd7dbe58eb IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:96:9c:3e:02:15:d8} reservation:<nil>}
	I1027 19:43:08.890784  630779 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-e5c60f1f40ae IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:6a:1e:24:48:2b:2f} reservation:<nil>}
	I1027 19:43:08.891510  630779 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f1e860}
	I1027 19:43:08.891543  630779 network_create.go:124] attempt to create docker network kindnet-387383 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1027 19:43:08.891607  630779 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-387383 kindnet-387383
	I1027 19:43:08.982459  630779 network_create.go:108] docker network kindnet-387383 192.168.94.0/24 created
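
The network.go lines above show the free-subnet probe: each existing bridge subnet (192.168.49.0/24 through 192.168.85.0/24) is skipped, and the first unused candidate, 192.168.94.0/24, is taken. A minimal shell sketch of the same idea, assuming the step of 9 in the third octet that this particular log happens to show:

    taken=$(docker network ls -q | xargs docker network inspect \
              --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}')
    for octet in $(seq 49 9 255); do
      cidr="192.168.${octet}.0/24"
      grep -qxF "$cidr" <<<"$taken" || { echo "free subnet: $cidr"; break; }
    done
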
	I1027 19:43:08.982496  630779 kic.go:121] calculated static IP "192.168.94.2" for the "kindnet-387383" container
	I1027 19:43:08.982584  630779 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1027 19:43:09.009857  630779 cli_runner.go:164] Run: docker volume create kindnet-387383 --label name.minikube.sigs.k8s.io=kindnet-387383 --label created_by.minikube.sigs.k8s.io=true
	I1027 19:43:09.037352  630779 oci.go:103] Successfully created a docker volume kindnet-387383
	I1027 19:43:09.037457  630779 cli_runner.go:164] Run: docker run --rm --name kindnet-387383-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-387383 --entrypoint /usr/bin/test -v kindnet-387383:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1027 19:43:09.547006  630779 oci.go:107] Successfully prepared a docker volume kindnet-387383
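
The preload-sidecar run above leans on a standard Docker behavior: mounting a freshly created named volume at a path that already holds content in the image (/var here) copies that content into the volume, and the /usr/bin/test -d /var/lib entrypoint exits 0 once the copy exists. This appears to be how the volume gets pre-populated before the node container mounts it. The same trick in isolation (image reference abbreviated; any image with a populated /var would do):

    docker volume create demo-vol
    docker run --rm --entrypoint /usr/bin/test \
      -v demo-vol:/var <kicbase-image> -d /var/lib   # exit 0 => /var was copied in
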
	I1027 19:43:09.547067  630779 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 19:43:09.547097  630779 kic.go:194] Starting extracting preloaded images to volume ...
	I1027 19:43:09.547228  630779 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21801-352833/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-387383:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1027 19:43:13.465237  622136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1027 19:43:13.527655  622136 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 19:43:13.527685  622136 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1027 19:43:13.527747  622136 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-387383
	I1027 19:43:13.527661  622136 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 19:43:13.548661  622136 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33475 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/auto-387383/id_rsa Username:docker}
	I1027 19:43:13.613018  622136 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
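
Unwrapped, the sed pipeline above edits the coredns ConfigMap in place: it inserts a log directive before the errors plugin and the following hosts stanza ahead of forward, so in-cluster lookups of host.minikube.internal resolve to the host gateway:

    hosts {
       192.168.76.1 host.minikube.internal
       fallthrough
    }
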
	I1027 19:43:13.674620  622136 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 19:43:09.332868  631152 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1027 19:43:09.333169  631152 start.go:159] libmachine.API.Create for "calico-387383" (driver="docker")
	I1027 19:43:09.333207  631152 client.go:168] LocalClient.Create starting
	I1027 19:43:09.333292  631152 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca.pem
	I1027 19:43:09.333346  631152 main.go:141] libmachine: Decoding PEM data...
	I1027 19:43:09.333372  631152 main.go:141] libmachine: Parsing certificate...
	I1027 19:43:09.333459  631152 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21801-352833/.minikube/certs/cert.pem
	I1027 19:43:09.333491  631152 main.go:141] libmachine: Decoding PEM data...
	I1027 19:43:09.333503  631152 main.go:141] libmachine: Parsing certificate...
	I1027 19:43:09.333943  631152 cli_runner.go:164] Run: docker network inspect calico-387383 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1027 19:43:09.356477  631152 cli_runner.go:211] docker network inspect calico-387383 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1027 19:43:09.356594  631152 network_create.go:284] running [docker network inspect calico-387383] to gather additional debugging logs...
	I1027 19:43:09.356624  631152 cli_runner.go:164] Run: docker network inspect calico-387383
	W1027 19:43:09.377833  631152 cli_runner.go:211] docker network inspect calico-387383 returned with exit code 1
	I1027 19:43:09.377885  631152 network_create.go:287] error running [docker network inspect calico-387383]: docker network inspect calico-387383: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network calico-387383 not found
	I1027 19:43:09.377909  631152 network_create.go:289] output of [docker network inspect calico-387383]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network calico-387383 not found
	
	** /stderr **
	I1027 19:43:09.378072  631152 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 19:43:09.399669  631152 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-04e197bde7e8 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6a:8c:cb:7c:68:31} reservation:<nil>}
	I1027 19:43:09.400481  631152 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-e37fd2b092bc IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:26:98:e3:c0:d9:8a} reservation:<nil>}
	I1027 19:43:09.400979  631152 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-bbd9ae70d20d IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ea:7f:4f:eb:e4:a1} reservation:<nil>}
	I1027 19:43:09.401653  631152 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-20cd7dbe58eb IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:96:9c:3e:02:15:d8} reservation:<nil>}
	I1027 19:43:09.402241  631152 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-e5c60f1f40ae IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:6a:1e:24:48:2b:2f} reservation:<nil>}
	I1027 19:43:09.402949  631152 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-9609e5410315 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:8a:7e:34:6e:27:1e} reservation:<nil>}
	I1027 19:43:09.403864  631152 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f08a50}
	I1027 19:43:09.403887  631152 network_create.go:124] attempt to create docker network calico-387383 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1027 19:43:09.403942  631152 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-387383 calico-387383
	I1027 19:43:09.475070  631152 network_create.go:108] docker network calico-387383 192.168.103.0/24 created
	I1027 19:43:09.475112  631152 kic.go:121] calculated static IP "192.168.103.2" for the "calico-387383" container
	I1027 19:43:09.475213  631152 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1027 19:43:09.495750  631152 cli_runner.go:164] Run: docker volume create calico-387383 --label name.minikube.sigs.k8s.io=calico-387383 --label created_by.minikube.sigs.k8s.io=true
	I1027 19:43:09.517894  631152 oci.go:103] Successfully created a docker volume calico-387383
	I1027 19:43:09.518011  631152 cli_runner.go:164] Run: docker run --rm --name calico-387383-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-387383 --entrypoint /usr/bin/test -v calico-387383:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1027 19:43:10.423477  631152 oci.go:107] Successfully prepared a docker volume calico-387383
	I1027 19:43:10.423540  631152 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 19:43:10.423567  631152 kic.go:194] Starting extracting preloaded images to volume ...
	I1027 19:43:10.423658  631152 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21801-352833/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v calico-387383:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1027 19:43:13.945644  622136 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 19:43:14.246692  622136 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1027 19:43:15.037751  622136 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-387383" context rescaled to 1 replicas
	I1027 19:43:15.674041  622136 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.999376115s)
	I1027 19:43:15.674102  622136 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.7284134s)
	I1027 19:43:15.675048  622136 node_ready.go:35] waiting up to 15m0s for node "auto-387383" to be "Ready" ...
	I1027 19:43:15.804289  622136 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	W1027 19:43:13.465928  616341 pod_ready.go:104] pod "coredns-66bc5c9577-d2trp" is not "Ready", error: <nil>
	W1027 19:43:15.917385  616341 pod_ready.go:104] pod "coredns-66bc5c9577-d2trp" is not "Ready", error: <nil>
	I1027 19:43:15.836937  630779 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21801-352833/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-387383:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (6.289650323s)
	I1027 19:43:15.836977  630779 kic.go:203] duration metric: took 6.289877797s to extract preloaded images to volume ...
	W1027 19:43:15.837071  630779 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1027 19:43:15.837105  630779 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1027 19:43:15.837173  630779 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1027 19:43:15.914749  630779 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kindnet-387383 --name kindnet-387383 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-387383 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kindnet-387383 --network kindnet-387383 --ip 192.168.94.2 --volume kindnet-387383:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1027 19:43:16.230686  630779 cli_runner.go:164] Run: docker container inspect kindnet-387383 --format={{.State.Running}}
	I1027 19:43:16.251881  630779 cli_runner.go:164] Run: docker container inspect kindnet-387383 --format={{.State.Status}}
	I1027 19:43:16.272639  630779 cli_runner.go:164] Run: docker exec kindnet-387383 stat /var/lib/dpkg/alternatives/iptables
	I1027 19:43:16.325967  630779 oci.go:144] the created container "kindnet-387383" has a running status.
	I1027 19:43:16.326030  630779 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21801-352833/.minikube/machines/kindnet-387383/id_rsa...
	I1027 19:43:16.397472  630779 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21801-352833/.minikube/machines/kindnet-387383/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1027 19:43:16.434338  630779 cli_runner.go:164] Run: docker container inspect kindnet-387383 --format={{.State.Status}}
	I1027 19:43:16.455490  630779 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1027 19:43:16.455511  630779 kic_runner.go:114] Args: [docker exec --privileged kindnet-387383 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1027 19:43:16.523822  630779 cli_runner.go:164] Run: docker container inspect kindnet-387383 --format={{.State.Status}}
	I1027 19:43:16.550257  630779 machine.go:93] provisionDockerMachine start ...
	I1027 19:43:16.550373  630779 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-387383
	I1027 19:43:16.573820  630779 main.go:141] libmachine: Using SSH client type: native
	I1027 19:43:16.574147  630779 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33480 <nil> <nil>}
	I1027 19:43:16.574170  630779 main.go:141] libmachine: About to run SSH command:
	hostname
	I1027 19:43:16.575088  630779 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34346->127.0.0.1:33480: read: connection reset by peer
	I1027 19:43:15.806705  622136 addons.go:514] duration metric: took 2.540909161s for enable addons: enabled=[default-storageclass storage-provisioner]
	W1027 19:43:17.678675  622136 node_ready.go:57] node "auto-387383" has "Ready":"False" status (will retry)
	I1027 19:43:15.934880  631152 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21801-352833/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v calico-387383:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (5.511161254s)
	I1027 19:43:15.934918  631152 kic.go:203] duration metric: took 5.511345731s to extract preloaded images to volume ...
	W1027 19:43:15.935080  631152 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1027 19:43:15.935124  631152 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1027 19:43:15.935199  631152 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1027 19:43:16.003629  631152 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-387383 --name calico-387383 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-387383 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-387383 --network calico-387383 --ip 192.168.103.2 --volume calico-387383:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1027 19:43:16.352100  631152 cli_runner.go:164] Run: docker container inspect calico-387383 --format={{.State.Running}}
	I1027 19:43:16.375495  631152 cli_runner.go:164] Run: docker container inspect calico-387383 --format={{.State.Status}}
	I1027 19:43:16.403289  631152 cli_runner.go:164] Run: docker exec calico-387383 stat /var/lib/dpkg/alternatives/iptables
	I1027 19:43:16.456486  631152 oci.go:144] the created container "calico-387383" has a running status.
	I1027 19:43:16.456566  631152 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21801-352833/.minikube/machines/calico-387383/id_rsa...
	I1027 19:43:16.537928  631152 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21801-352833/.minikube/machines/calico-387383/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1027 19:43:16.568927  631152 cli_runner.go:164] Run: docker container inspect calico-387383 --format={{.State.Status}}
	I1027 19:43:16.594442  631152 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1027 19:43:16.594469  631152 kic_runner.go:114] Args: [docker exec --privileged calico-387383 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1027 19:43:16.654339  631152 cli_runner.go:164] Run: docker container inspect calico-387383 --format={{.State.Status}}
	I1027 19:43:16.679232  631152 machine.go:93] provisionDockerMachine start ...
	I1027 19:43:16.679342  631152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-387383
	I1027 19:43:16.706550  631152 main.go:141] libmachine: Using SSH client type: native
	I1027 19:43:16.706929  631152 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33485 <nil> <nil>}
	I1027 19:43:16.706951  631152 main.go:141] libmachine: About to run SSH command:
	hostname
	I1027 19:43:16.707765  631152 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44622->127.0.0.1:33485: read: connection reset by peer
	I1027 19:43:19.856595  631152 main.go:141] libmachine: SSH cmd err, output: <nil>: calico-387383
	
	I1027 19:43:19.856635  631152 ubuntu.go:182] provisioning hostname "calico-387383"
	I1027 19:43:19.856738  631152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-387383
	I1027 19:43:19.877187  631152 main.go:141] libmachine: Using SSH client type: native
	I1027 19:43:19.877424  631152 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33485 <nil> <nil>}
	I1027 19:43:19.877441  631152 main.go:141] libmachine: About to run SSH command:
	sudo hostname calico-387383 && echo "calico-387383" | sudo tee /etc/hostname
	I1027 19:43:20.032968  631152 main.go:141] libmachine: SSH cmd err, output: <nil>: calico-387383
	
	I1027 19:43:20.033057  631152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-387383
	I1027 19:43:20.053188  631152 main.go:141] libmachine: Using SSH client type: native
	I1027 19:43:20.053429  631152 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33485 <nil> <nil>}
	I1027 19:43:20.053446  631152 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-387383' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-387383/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-387383' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1027 19:43:20.197291  631152 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1027 19:43:20.197325  631152 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21801-352833/.minikube CaCertPath:/home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21801-352833/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21801-352833/.minikube}
	I1027 19:43:20.197353  631152 ubuntu.go:190] setting up certificates
	I1027 19:43:20.197364  631152 provision.go:84] configureAuth start
	I1027 19:43:20.197433  631152 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-387383
	I1027 19:43:20.217002  631152 provision.go:143] copyHostCerts
	I1027 19:43:20.217072  631152 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-352833/.minikube/ca.pem, removing ...
	I1027 19:43:20.217087  631152 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-352833/.minikube/ca.pem
	I1027 19:43:20.217193  631152 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21801-352833/.minikube/ca.pem (1078 bytes)
	I1027 19:43:20.217313  631152 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-352833/.minikube/cert.pem, removing ...
	I1027 19:43:20.217330  631152 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-352833/.minikube/cert.pem
	I1027 19:43:20.217354  631152 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21801-352833/.minikube/cert.pem (1123 bytes)
	I1027 19:43:20.217425  631152 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-352833/.minikube/key.pem, removing ...
	I1027 19:43:20.217432  631152 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-352833/.minikube/key.pem
	I1027 19:43:20.217450  631152 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21801-352833/.minikube/key.pem (1679 bytes)
	I1027 19:43:20.217563  631152 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21801-352833/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca-key.pem org=jenkins.calico-387383 san=[127.0.0.1 192.168.103.2 calico-387383 localhost minikube]
	I1027 19:43:20.418403  631152 provision.go:177] copyRemoteCerts
	I1027 19:43:20.418458  631152 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1027 19:43:20.418511  631152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-387383
	I1027 19:43:20.439233  631152 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33485 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/calico-387383/id_rsa Username:docker}
	I1027 19:43:20.542376  631152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1027 19:43:20.564091  631152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1027 19:43:20.584859  631152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1027 19:43:20.606040  631152 provision.go:87] duration metric: took 408.66026ms to configureAuth
	I1027 19:43:20.606083  631152 ubuntu.go:206] setting minikube options for container-runtime
	I1027 19:43:20.606356  631152 config.go:182] Loaded profile config "calico-387383": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:43:20.606475  631152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-387383
	I1027 19:43:20.626268  631152 main.go:141] libmachine: Using SSH client type: native
	I1027 19:43:20.626568  631152 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33485 <nil> <nil>}
	I1027 19:43:20.626604  631152 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1027 19:43:20.890051  631152 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1027 19:43:20.890095  631152 machine.go:96] duration metric: took 4.210822487s to provisionDockerMachine
	I1027 19:43:20.890106  631152 client.go:171] duration metric: took 11.556890851s to LocalClient.Create
	I1027 19:43:20.890127  631152 start.go:167] duration metric: took 11.556960745s to libmachine.API.Create "calico-387383"
	I1027 19:43:20.890154  631152 start.go:293] postStartSetup for "calico-387383" (driver="docker")
	I1027 19:43:20.890168  631152 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1027 19:43:20.890231  631152 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1027 19:43:20.890284  631152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-387383
	I1027 19:43:20.910483  631152 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33485 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/calico-387383/id_rsa Username:docker}
	I1027 19:43:21.018526  631152 ssh_runner.go:195] Run: cat /etc/os-release
	I1027 19:43:21.022867  631152 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1027 19:43:21.022904  631152 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1027 19:43:21.022917  631152 filesync.go:126] Scanning /home/jenkins/minikube-integration/21801-352833/.minikube/addons for local assets ...
	I1027 19:43:21.022985  631152 filesync.go:126] Scanning /home/jenkins/minikube-integration/21801-352833/.minikube/files for local assets ...
	I1027 19:43:21.023107  631152 filesync.go:149] local asset: /home/jenkins/minikube-integration/21801-352833/.minikube/files/etc/ssl/certs/3564152.pem -> 3564152.pem in /etc/ssl/certs
	I1027 19:43:21.023265  631152 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1027 19:43:21.032414  631152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/files/etc/ssl/certs/3564152.pem --> /etc/ssl/certs/3564152.pem (1708 bytes)
	I1027 19:43:21.055283  631152 start.go:296] duration metric: took 165.110581ms for postStartSetup
	I1027 19:43:21.055681  631152 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-387383
	I1027 19:43:21.076627  631152 profile.go:143] Saving config to /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/calico-387383/config.json ...
	I1027 19:43:21.076926  631152 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 19:43:21.076972  631152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-387383
	I1027 19:43:21.097385  631152 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33485 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/calico-387383/id_rsa Username:docker}
	I1027 19:43:21.197560  631152 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1027 19:43:21.202675  631152 start.go:128] duration metric: took 11.871778512s to createHost
	I1027 19:43:21.202706  631152 start.go:83] releasing machines lock for "calico-387383", held for 11.871926694s
	I1027 19:43:21.202790  631152 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-387383
	I1027 19:43:21.223949  631152 ssh_runner.go:195] Run: cat /version.json
	I1027 19:43:21.224039  631152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-387383
	I1027 19:43:21.224042  631152 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1027 19:43:21.224129  631152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-387383
	I1027 19:43:21.244834  631152 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33485 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/calico-387383/id_rsa Username:docker}
	I1027 19:43:21.246461  631152 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33485 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/calico-387383/id_rsa Username:docker}
	I1027 19:43:21.412070  631152 ssh_runner.go:195] Run: systemctl --version
	I1027 19:43:21.420361  631152 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1027 19:43:21.459800  631152 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1027 19:43:21.464983  631152 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1027 19:43:21.465041  631152 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1027 19:43:21.494149  631152 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1027 19:43:21.494179  631152 start.go:495] detecting cgroup driver to use...
	I1027 19:43:21.494213  631152 detect.go:190] detected "systemd" cgroup driver on host os
	I1027 19:43:21.494255  631152 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1027 19:43:21.511679  631152 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1027 19:43:21.524918  631152 docker.go:218] disabling cri-docker service (if available) ...
	I1027 19:43:21.524971  631152 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1027 19:43:21.542994  631152 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1027 19:43:21.563418  631152 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1027 19:43:21.659259  631152 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1027 19:43:21.758619  631152 docker.go:234] disabling docker service ...
	I1027 19:43:21.758694  631152 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1027 19:43:21.783969  631152 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1027 19:43:21.798452  631152 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1027 19:43:21.898798  631152 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1027 19:43:21.998106  631152 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1027 19:43:22.015909  631152 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1027 19:43:22.031653  631152 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1027 19:43:22.031720  631152 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:43:22.043307  631152 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1027 19:43:22.043374  631152 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:43:22.054367  631152 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:43:22.064498  631152 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:43:22.076286  631152 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1027 19:43:22.085897  631152 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:43:22.099405  631152 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:43:22.116718  631152 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
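
Taken together, the sed edits in the preceding lines leave /etc/crio/crio.conf.d/02-crio.conf carrying roughly the following keys (section headers are assumed from a stock CRI-O config; only the edited keys are shown):

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
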
	I1027 19:43:22.127488  631152 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1027 19:43:22.136493  631152 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
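
The echo into /proc/sys/net/ipv4/ip_forward on the line above enables IP forwarding, which pod traffic leaving the node requires; it is equivalent to:

    sudo sysctl -w net.ipv4.ip_forward=1
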
	I1027 19:43:22.145795  631152 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 19:43:22.236425  631152 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1027 19:43:22.354757  631152 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1027 19:43:22.354829  631152 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1027 19:43:22.359412  631152 start.go:563] Will wait 60s for crictl version
	I1027 19:43:22.359472  631152 ssh_runner.go:195] Run: which crictl
	I1027 19:43:22.363706  631152 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1027 19:43:22.395607  631152 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1027 19:43:22.395703  631152 ssh_runner.go:195] Run: crio --version
	I1027 19:43:22.431702  631152 ssh_runner.go:195] Run: crio --version
	I1027 19:43:22.469338  631152 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1027 19:43:19.719593  630779 main.go:141] libmachine: SSH cmd err, output: <nil>: kindnet-387383
	
	I1027 19:43:19.719650  630779 ubuntu.go:182] provisioning hostname "kindnet-387383"
	I1027 19:43:19.719742  630779 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-387383
	I1027 19:43:19.741526  630779 main.go:141] libmachine: Using SSH client type: native
	I1027 19:43:19.741757  630779 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33480 <nil> <nil>}
	I1027 19:43:19.741771  630779 main.go:141] libmachine: About to run SSH command:
	sudo hostname kindnet-387383 && echo "kindnet-387383" | sudo tee /etc/hostname
	I1027 19:43:19.898294  630779 main.go:141] libmachine: SSH cmd err, output: <nil>: kindnet-387383
	
	I1027 19:43:19.898383  630779 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-387383
	I1027 19:43:19.919072  630779 main.go:141] libmachine: Using SSH client type: native
	I1027 19:43:19.919376  630779 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33480 <nil> <nil>}
	I1027 19:43:19.919408  630779 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-387383' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-387383/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-387383' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1027 19:43:20.066062  630779 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1027 19:43:20.066095  630779 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21801-352833/.minikube CaCertPath:/home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21801-352833/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21801-352833/.minikube}
	I1027 19:43:20.066120  630779 ubuntu.go:190] setting up certificates
	I1027 19:43:20.066146  630779 provision.go:84] configureAuth start
	I1027 19:43:20.066215  630779 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-387383
	I1027 19:43:20.086947  630779 provision.go:143] copyHostCerts
	I1027 19:43:20.087022  630779 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-352833/.minikube/ca.pem, removing ...
	I1027 19:43:20.087035  630779 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-352833/.minikube/ca.pem
	I1027 19:43:20.087103  630779 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21801-352833/.minikube/ca.pem (1078 bytes)
	I1027 19:43:20.087314  630779 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-352833/.minikube/cert.pem, removing ...
	I1027 19:43:20.087331  630779 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-352833/.minikube/cert.pem
	I1027 19:43:20.087367  630779 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21801-352833/.minikube/cert.pem (1123 bytes)
	I1027 19:43:20.087431  630779 exec_runner.go:144] found /home/jenkins/minikube-integration/21801-352833/.minikube/key.pem, removing ...
	I1027 19:43:20.087438  630779 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21801-352833/.minikube/key.pem
	I1027 19:43:20.087462  630779 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21801-352833/.minikube/key.pem (1679 bytes)
	I1027 19:43:20.087521  630779 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21801-352833/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca-key.pem org=jenkins.kindnet-387383 san=[127.0.0.1 192.168.94.2 kindnet-387383 localhost minikube]
	I1027 19:43:20.557101  630779 provision.go:177] copyRemoteCerts
	I1027 19:43:20.557203  630779 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1027 19:43:20.557252  630779 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-387383
	I1027 19:43:20.577798  630779 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33480 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/kindnet-387383/id_rsa Username:docker}
	I1027 19:43:20.682080  630779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1027 19:43:20.703570  630779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1027 19:43:20.723214  630779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1027 19:43:20.743034  630779 provision.go:87] duration metric: took 676.870448ms to configureAuth
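configureAuth above refreshes the host-side CA material and then issues a server certificate whose SANs cover the loopback address, the container IP, and the machine's hostnames. A self-contained sketch of issuing a CA-signed server cert with those SANs using crypto/x509 (the CA here is generated in-process for illustration; minikube instead loads its persisted ca.pem/ca-key.pem, and error handling is elided for brevity):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Illustrative in-process CA standing in for ca.pem/ca-key.pem.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{Organization: []string{"minikubeCA"}},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert whose SANs mirror the log line:
	// san=[127.0.0.1 192.168.94.2 kindnet-387383 localhost minikube]
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.kindnet-387383"}},
		DNSNames:     []string{"kindnet-387383", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.94.2")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}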
	I1027 19:43:20.743071  630779 ubuntu.go:206] setting minikube options for container-runtime
	I1027 19:43:20.743290  630779 config.go:182] Loaded profile config "kindnet-387383": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:43:20.743410  630779 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-387383
	I1027 19:43:20.762593  630779 main.go:141] libmachine: Using SSH client type: native
	I1027 19:43:20.762878  630779 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x841760] 0x844460 <nil>  [] 0s} 127.0.0.1 33480 <nil> <nil>}
	I1027 19:43:20.762899  630779 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1027 19:43:21.030290  630779 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
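The command above drops a one-line sysconfig file telling CRI-O to treat the service CIDR as an insecure registry range, then restarts the daemon so it takes effect. The file write itself, expressed in Go (paths and contents taken from the log; a sketch only):

package main

import (
	"fmt"
	"os"
)

func main() {
	// Mirrors: printf %s "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" | sudo tee /etc/sysconfig/crio.minikube
	content := fmt.Sprintf("CRIO_MINIKUBE_OPTIONS='--insecure-registry %s '\n", "10.96.0.0/12")
	if err := os.WriteFile("/etc/sysconfig/crio.minikube", []byte(content), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
	// A "systemctl restart crio" must follow, as in the log, for the option to apply.
}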
	
	I1027 19:43:21.030324  630779 machine.go:96] duration metric: took 4.480039911s to provisionDockerMachine
	I1027 19:43:21.030338  630779 client.go:171] duration metric: took 12.212791881s to LocalClient.Create
	I1027 19:43:21.030362  630779 start.go:167] duration metric: took 12.212872727s to libmachine.API.Create "kindnet-387383"
	I1027 19:43:21.030372  630779 start.go:293] postStartSetup for "kindnet-387383" (driver="docker")
	I1027 19:43:21.030384  630779 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1027 19:43:21.030460  630779 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1027 19:43:21.030523  630779 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-387383
	I1027 19:43:21.050743  630779 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33480 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/kindnet-387383/id_rsa Username:docker}
	I1027 19:43:21.155355  630779 ssh_runner.go:195] Run: cat /etc/os-release
	I1027 19:43:21.159584  630779 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1027 19:43:21.159624  630779 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1027 19:43:21.159637  630779 filesync.go:126] Scanning /home/jenkins/minikube-integration/21801-352833/.minikube/addons for local assets ...
	I1027 19:43:21.159704  630779 filesync.go:126] Scanning /home/jenkins/minikube-integration/21801-352833/.minikube/files for local assets ...
	I1027 19:43:21.159819  630779 filesync.go:149] local asset: /home/jenkins/minikube-integration/21801-352833/.minikube/files/etc/ssl/certs/3564152.pem -> 3564152.pem in /etc/ssl/certs
	I1027 19:43:21.159979  630779 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1027 19:43:21.168867  630779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/files/etc/ssl/certs/3564152.pem --> /etc/ssl/certs/3564152.pem (1708 bytes)
	I1027 19:43:21.191858  630779 start.go:296] duration metric: took 161.468893ms for postStartSetup
	I1027 19:43:21.192229  630779 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-387383
	I1027 19:43:21.211761  630779 profile.go:143] Saving config to /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/kindnet-387383/config.json ...
	I1027 19:43:21.212170  630779 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 19:43:21.212235  630779 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-387383
	I1027 19:43:21.236005  630779 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33480 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/kindnet-387383/id_rsa Username:docker}
	I1027 19:43:21.339702  630779 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1027 19:43:21.345006  630779 start.go:128] duration metric: took 12.530050109s to createHost
	I1027 19:43:21.345038  630779 start.go:83] releasing machines lock for "kindnet-387383", held for 12.530215173s
	I1027 19:43:21.345121  630779 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-387383
	I1027 19:43:21.365270  630779 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1027 19:43:21.365326  630779 ssh_runner.go:195] Run: cat /version.json
	I1027 19:43:21.365378  630779 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-387383
	I1027 19:43:21.365426  630779 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-387383
	I1027 19:43:21.386361  630779 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33480 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/kindnet-387383/id_rsa Username:docker}
	I1027 19:43:21.386733  630779 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33480 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/kindnet-387383/id_rsa Username:docker}
	I1027 19:43:21.563887  630779 ssh_runner.go:195] Run: systemctl --version
	I1027 19:43:21.570989  630779 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1027 19:43:21.615251  630779 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1027 19:43:21.620437  630779 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1027 19:43:21.620514  630779 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1027 19:43:21.647793  630779 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
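Here minikube neutralizes any pre-existing bridge or podman CNI definitions by renaming them with a .mk_disabled suffix, so only the CNI it installs later (kindnet in this run) is active. A Go rendering of that find-and-rename step (illustrative; the real step runs find/mv over SSH as shown above):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	patterns := []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"}
	for _, p := range patterns {
		matches, _ := filepath.Glob(p)
		for _, m := range matches {
			if strings.HasSuffix(m, ".mk_disabled") {
				continue // already disabled on a previous pass
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				fmt.Fprintln(os.Stderr, err)
				continue
			}
			fmt.Println("disabled", m)
		}
	}
}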
	I1027 19:43:21.647833  630779 start.go:495] detecting cgroup driver to use...
	I1027 19:43:21.647874  630779 detect.go:190] detected "systemd" cgroup driver on host os
	I1027 19:43:21.647939  630779 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1027 19:43:21.668017  630779 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1027 19:43:21.682051  630779 docker.go:218] disabling cri-docker service (if available) ...
	I1027 19:43:21.682119  630779 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1027 19:43:21.705209  630779 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1027 19:43:21.724729  630779 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1027 19:43:21.814826  630779 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1027 19:43:21.923398  630779 docker.go:234] disabling docker service ...
	I1027 19:43:21.923478  630779 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1027 19:43:21.948096  630779 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1027 19:43:21.963361  630779 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1027 19:43:22.059636  630779 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1027 19:43:22.155384  630779 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1027 19:43:22.170522  630779 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1027 19:43:22.191386  630779 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1027 19:43:22.191444  630779 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:43:22.203419  630779 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1027 19:43:22.203497  630779 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:43:22.214478  630779 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:43:22.224940  630779 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:43:22.235818  630779 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1027 19:43:22.245339  630779 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:43:22.256385  630779 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:43:22.272844  630779 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1027 19:43:22.283854  630779 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1027 19:43:22.293236  630779 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1027 19:43:22.302285  630779 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 19:43:22.400841  630779 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1027 19:43:22.517558  630779 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1027 19:43:22.517637  630779 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1027 19:43:22.522369  630779 start.go:563] Will wait 60s for crictl version
	I1027 19:43:22.522437  630779 ssh_runner.go:195] Run: which crictl
	I1027 19:43:22.526820  630779 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1027 19:43:22.554707  630779 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1027 19:43:22.554779  630779 ssh_runner.go:195] Run: crio --version
	I1027 19:43:22.586801  630779 ssh_runner.go:195] Run: crio --version
	I1027 19:43:22.623795  630779 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
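Both machines get the same CRI-O adjustments before the restart: pin the pause image, switch cgroup_manager to systemd, move conmon into the pod cgroup, and open unprivileged ports via default_sysctls, all via in-place sed edits of /etc/crio/crio.conf.d/02-crio.conf. The first two rewrites expressed as Go regexp replacements (a sketch of the transformation, not minikube code):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `pause_image = "registry.k8s.io/pause:3.9"
cgroup_manager = "cgroupfs"`

	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|'
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "systemd"`)

	fmt.Println(conf)
}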
	W1027 19:43:18.417602  616341 pod_ready.go:104] pod "coredns-66bc5c9577-d2trp" is not "Ready", error: <nil>
	W1027 19:43:20.418375  616341 pod_ready.go:104] pod "coredns-66bc5c9577-d2trp" is not "Ready", error: <nil>
	I1027 19:43:22.418998  616341 pod_ready.go:94] pod "coredns-66bc5c9577-d2trp" is "Ready"
	I1027 19:43:22.419035  616341 pod_ready.go:86] duration metric: took 38.507791483s for pod "coredns-66bc5c9577-d2trp" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:43:22.422396  616341 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-813397" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:43:22.427833  616341 pod_ready.go:94] pod "etcd-default-k8s-diff-port-813397" is "Ready"
	I1027 19:43:22.427863  616341 pod_ready.go:86] duration metric: took 5.434462ms for pod "etcd-default-k8s-diff-port-813397" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:43:22.430801  616341 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-813397" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:43:22.435963  616341 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-813397" is "Ready"
	I1027 19:43:22.435999  616341 pod_ready.go:86] duration metric: took 5.170955ms for pod "kube-apiserver-default-k8s-diff-port-813397" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:43:22.438570  616341 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-813397" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:43:22.615605  616341 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-813397" is "Ready"
	I1027 19:43:22.615650  616341 pod_ready.go:86] duration metric: took 177.051825ms for pod "kube-controller-manager-default-k8s-diff-port-813397" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:43:22.816514  616341 pod_ready.go:83] waiting for pod "kube-proxy-bldc8" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:43:22.625216  630779 cli_runner.go:164] Run: docker network inspect kindnet-387383 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 19:43:22.644772  630779 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1027 19:43:22.648923  630779 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 19:43:22.660098  630779 kubeadm.go:883] updating cluster {Name:kindnet-387383 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-387383 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1027 19:43:22.660242  630779 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 19:43:22.660286  630779 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 19:43:22.698154  630779 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 19:43:22.698177  630779 crio.go:433] Images already preloaded, skipping extraction
	I1027 19:43:22.698224  630779 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 19:43:22.726913  630779 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 19:43:22.726943  630779 cache_images.go:85] Images are preloaded, skipping loading
	I1027 19:43:22.726954  630779 kubeadm.go:934] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1027 19:43:22.727065  630779 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kindnet-387383 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:kindnet-387383 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
	I1027 19:43:22.727175  630779 ssh_runner.go:195] Run: crio config
	I1027 19:43:22.792710  630779 cni.go:84] Creating CNI manager for "kindnet"
	I1027 19:43:22.792744  630779 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1027 19:43:22.792778  630779 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-387383 NodeName:kindnet-387383 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1027 19:43:22.792916  630779 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kindnet-387383"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
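The generated kubeadm.yaml above is four YAML documents in one file, separated by ---: InitConfiguration (node registration and API bind address), ClusterConfiguration (control-plane endpoint and component extraArgs), KubeletConfiguration (cgroup driver and eviction settings), and KubeProxyConfiguration (cluster CIDR and conntrack). Splitting such a file into its documents needs nothing beyond Go's standard library (a sketch with a trimmed-down sample):

package main

import (
	"fmt"
	"strings"
)

func main() {
	kubeadmYAML := "apiVersion: kubeadm.k8s.io/v1beta4\nkind: InitConfiguration\n---\napiVersion: kubeadm.k8s.io/v1beta4\nkind: ClusterConfiguration\n---\napiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\n---\napiVersion: kubeproxy.config.k8s.io/v1alpha1\nkind: KubeProxyConfiguration\n"

	for i, doc := range strings.Split(kubeadmYAML, "\n---\n") {
		// Each document declares its kind on a "kind:" line.
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(line, "kind: ") {
				fmt.Printf("document %d: %s\n", i, strings.TrimPrefix(line, "kind: "))
			}
		}
	}
}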
	
	I1027 19:43:22.793026  630779 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1027 19:43:22.802393  630779 binaries.go:44] Found k8s binaries, skipping transfer
	I1027 19:43:22.802458  630779 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1027 19:43:22.812098  630779 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (364 bytes)
	I1027 19:43:22.826897  630779 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1027 19:43:22.846034  630779 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1027 19:43:22.861978  630779 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1027 19:43:22.867028  630779 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 19:43:22.880030  630779 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 19:43:22.980911  630779 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 19:43:23.008304  630779 certs.go:69] Setting up /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/kindnet-387383 for IP: 192.168.94.2
	I1027 19:43:23.008329  630779 certs.go:195] generating shared ca certs ...
	I1027 19:43:23.008352  630779 certs.go:227] acquiring lock for ca certs: {Name:mk4bdbca32068f6f817fc35fdc496e961dc3e0d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:43:23.008530  630779 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21801-352833/.minikube/ca.key
	I1027 19:43:23.008591  630779 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21801-352833/.minikube/proxy-client-ca.key
	I1027 19:43:23.008612  630779 certs.go:257] generating profile certs ...
	I1027 19:43:23.008682  630779 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/kindnet-387383/client.key
	I1027 19:43:23.008700  630779 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/kindnet-387383/client.crt with IP's: []
	I1027 19:43:23.280372  630779 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/kindnet-387383/client.crt ...
	I1027 19:43:23.280468  630779 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/kindnet-387383/client.crt: {Name:mkc5cdc763554b6306b0c8faa7cf27304253c7b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:43:23.280651  630779 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/kindnet-387383/client.key ...
	I1027 19:43:23.280668  630779 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/kindnet-387383/client.key: {Name:mk883d68ae1f564089d4a6589f22eb59db09b659 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:43:23.280775  630779 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/kindnet-387383/apiserver.key.035362ec
	I1027 19:43:23.280800  630779 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/kindnet-387383/apiserver.crt.035362ec with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1027 19:43:23.215971  616341 pod_ready.go:94] pod "kube-proxy-bldc8" is "Ready"
	I1027 19:43:23.216004  616341 pod_ready.go:86] duration metric: took 399.460648ms for pod "kube-proxy-bldc8" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:43:23.417054  616341 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-813397" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:43:23.815597  616341 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-813397" is "Ready"
	I1027 19:43:23.815631  616341 pod_ready.go:86] duration metric: took 398.552014ms for pod "kube-scheduler-default-k8s-diff-port-813397" in "kube-system" namespace to be "Ready" or be gone ...
	I1027 19:43:23.815644  616341 pod_ready.go:40] duration metric: took 39.910056182s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1027 19:43:23.867820  616341 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1027 19:43:23.870183  616341 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-813397" cluster and "default" namespace by default
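The pod_ready.go lines for default-k8s-diff-port-813397 show the shape of the readiness gate: poll each control-plane pod until it reports Ready or a deadline lapses, recording a per-pod duration metric. The underlying retry pattern, reduced to a sketch (the check function here is a stand-in, not minikube's client-go lookup):

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitReady polls check at the given interval until it succeeds or timeout passes.
func waitReady(check func() (bool, error), interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		ready, err := check()
		if err == nil && ready {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for readiness")
		}
		time.Sleep(interval)
	}
}

func main() {
	start := time.Now()
	// Placeholder check that becomes ready after ~1.5s.
	err := waitReady(func() (bool, error) {
		return time.Since(start) > 1500*time.Millisecond, nil
	}, 500*time.Millisecond, 10*time.Second)
	fmt.Println("ready:", err == nil)
}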
	W1027 19:43:20.179062  622136 node_ready.go:57] node "auto-387383" has "Ready":"False" status (will retry)
	W1027 19:43:22.183338  622136 node_ready.go:57] node "auto-387383" has "Ready":"False" status (will retry)
	I1027 19:43:22.470830  631152 cli_runner.go:164] Run: docker network inspect calico-387383 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1027 19:43:22.490027  631152 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1027 19:43:22.495175  631152 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 19:43:22.507697  631152 kubeadm.go:883] updating cluster {Name:calico-387383 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-387383 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1027 19:43:22.507836  631152 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1027 19:43:22.507879  631152 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 19:43:22.543147  631152 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 19:43:22.543171  631152 crio.go:433] Images already preloaded, skipping extraction
	I1027 19:43:22.543232  631152 ssh_runner.go:195] Run: sudo crictl images --output json
	I1027 19:43:22.573557  631152 crio.go:514] all images are preloaded for cri-o runtime.
	I1027 19:43:22.573587  631152 cache_images.go:85] Images are preloaded, skipping loading
	I1027 19:43:22.573597  631152 kubeadm.go:934] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1027 19:43:22.573717  631152 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=calico-387383 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:calico-387383 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico}
	I1027 19:43:22.573817  631152 ssh_runner.go:195] Run: crio config
	I1027 19:43:22.625217  631152 cni.go:84] Creating CNI manager for "calico"
	I1027 19:43:22.625247  631152 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1027 19:43:22.625276  631152 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-387383 NodeName:calico-387383 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1027 19:43:22.625434  631152 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "calico-387383"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1027 19:43:22.625495  631152 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1027 19:43:22.634278  631152 binaries.go:44] Found k8s binaries, skipping transfer
	I1027 19:43:22.634350  631152 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1027 19:43:22.643250  631152 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (364 bytes)
	I1027 19:43:22.657812  631152 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1027 19:43:22.676259  631152 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2212 bytes)
	I1027 19:43:22.692916  631152 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1027 19:43:22.697371  631152 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1027 19:43:22.709020  631152 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 19:43:22.809735  631152 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 19:43:22.839913  631152 certs.go:69] Setting up /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/calico-387383 for IP: 192.168.103.2
	I1027 19:43:22.839941  631152 certs.go:195] generating shared ca certs ...
	I1027 19:43:22.839964  631152 certs.go:227] acquiring lock for ca certs: {Name:mk4bdbca32068f6f817fc35fdc496e961dc3e0d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:43:22.840124  631152 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21801-352833/.minikube/ca.key
	I1027 19:43:22.840199  631152 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21801-352833/.minikube/proxy-client-ca.key
	I1027 19:43:22.840212  631152 certs.go:257] generating profile certs ...
	I1027 19:43:22.840278  631152 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/calico-387383/client.key
	I1027 19:43:22.840315  631152 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/calico-387383/client.crt with IP's: []
	I1027 19:43:23.067367  631152 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/calico-387383/client.crt ...
	I1027 19:43:23.067406  631152 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/calico-387383/client.crt: {Name:mk11afdb6b68f3344d9356c14824a16d6455b940 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:43:23.067634  631152 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/calico-387383/client.key ...
	I1027 19:43:23.067649  631152 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/calico-387383/client.key: {Name:mk49ed5c3ce77620018f632b1ea9e8ac53ba2830 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:43:23.067758  631152 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/calico-387383/apiserver.key.ba71f923
	I1027 19:43:23.067779  631152 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/calico-387383/apiserver.crt.ba71f923 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1027 19:43:23.279834  631152 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/calico-387383/apiserver.crt.ba71f923 ...
	I1027 19:43:23.279866  631152 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/calico-387383/apiserver.crt.ba71f923: {Name:mke7b18960a6ef12bc322a1683e081c39a475326 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:43:23.280083  631152 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/calico-387383/apiserver.key.ba71f923 ...
	I1027 19:43:23.280148  631152 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/calico-387383/apiserver.key.ba71f923: {Name:mk74cdf59cd5a60002af90895cc36d350b8e8acb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:43:23.280279  631152 certs.go:382] copying /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/calico-387383/apiserver.crt.ba71f923 -> /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/calico-387383/apiserver.crt
	I1027 19:43:23.280399  631152 certs.go:386] copying /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/calico-387383/apiserver.key.ba71f923 -> /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/calico-387383/apiserver.key
	I1027 19:43:23.280653  631152 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/calico-387383/proxy-client.key
	I1027 19:43:23.280678  631152 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/calico-387383/proxy-client.crt with IP's: []
	I1027 19:43:23.394828  631152 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/calico-387383/proxy-client.crt ...
	I1027 19:43:23.394864  631152 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/calico-387383/proxy-client.crt: {Name:mke44f35ad317a0aae3a2a25c289c25d96b92520 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:43:23.395097  631152 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/calico-387383/proxy-client.key ...
	I1027 19:43:23.395119  631152 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/calico-387383/proxy-client.key: {Name:mka46a31bf5c94f35a4cbf64d912bf69a96af663 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:43:23.395399  631152 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/356415.pem (1338 bytes)
	W1027 19:43:23.395447  631152 certs.go:480] ignoring /home/jenkins/minikube-integration/21801-352833/.minikube/certs/356415_empty.pem, impossibly tiny 0 bytes
	I1027 19:43:23.395464  631152 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca-key.pem (1675 bytes)
	I1027 19:43:23.395496  631152 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca.pem (1078 bytes)
	I1027 19:43:23.395525  631152 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/cert.pem (1123 bytes)
	I1027 19:43:23.395562  631152 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/key.pem (1679 bytes)
	I1027 19:43:23.395619  631152 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-352833/.minikube/files/etc/ssl/certs/3564152.pem (1708 bytes)
	I1027 19:43:23.396252  631152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1027 19:43:23.420838  631152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1027 19:43:23.440802  631152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1027 19:43:23.459990  631152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1027 19:43:23.479913  631152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/calico-387383/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1027 19:43:23.500072  631152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/calico-387383/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1027 19:43:23.521515  631152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/calico-387383/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1027 19:43:23.542836  631152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/calico-387383/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1027 19:43:23.563387  631152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/files/etc/ssl/certs/3564152.pem --> /usr/share/ca-certificates/3564152.pem (1708 bytes)
	I1027 19:43:23.586867  631152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1027 19:43:23.606875  631152 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/certs/356415.pem --> /usr/share/ca-certificates/356415.pem (1338 bytes)
	I1027 19:43:23.627985  631152 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1027 19:43:23.642297  631152 ssh_runner.go:195] Run: openssl version
	I1027 19:43:23.649658  631152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3564152.pem && ln -fs /usr/share/ca-certificates/3564152.pem /etc/ssl/certs/3564152.pem"
	I1027 19:43:23.659825  631152 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3564152.pem
	I1027 19:43:23.664435  631152 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 27 19:02 /usr/share/ca-certificates/3564152.pem
	I1027 19:43:23.664511  631152 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3564152.pem
	I1027 19:43:23.702799  631152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3564152.pem /etc/ssl/certs/3ec20f2e.0"
	I1027 19:43:23.714511  631152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1027 19:43:23.725054  631152 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1027 19:43:23.730727  631152 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 27 18:56 /usr/share/ca-certificates/minikubeCA.pem
	I1027 19:43:23.730797  631152 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1027 19:43:23.771650  631152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1027 19:43:23.781460  631152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/356415.pem && ln -fs /usr/share/ca-certificates/356415.pem /etc/ssl/certs/356415.pem"
	I1027 19:43:23.791062  631152 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/356415.pem
	I1027 19:43:23.795616  631152 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 27 19:02 /usr/share/ca-certificates/356415.pem
	I1027 19:43:23.795680  631152 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/356415.pem
	I1027 19:43:23.839576  631152 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/356415.pem /etc/ssl/certs/51391683.0"
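The openssl x509 -hash calls above compute each certificate's subject hash, and the symlinks named <hash>.0 in /etc/ssl/certs are what lets OpenSSL-based clients find a trusted CA by lookup rather than by scanning the directory. A sketch that reproduces the hash-then-symlink step by shelling out to openssl, mirroring the commands in the log (the cert path is one of the log's own):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	certPath := "/usr/share/ca-certificates/minikubeCA.pem"

	// Mirrors: openssl x509 -hash -noout -in <cert>
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "openssl:", err)
		return
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")

	// Mirrors: test -L <link> || ln -fs <cert> <link>
	if _, err := os.Lstat(link); err != nil {
		if err := os.Symlink(certPath, link); err != nil {
			fmt.Fprintln(os.Stderr, "symlink:", err)
			return
		}
	}
	fmt.Println("installed", link)
}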
	I1027 19:43:23.849374  631152 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1027 19:43:23.854051  631152 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1027 19:43:23.854113  631152 kubeadm.go:400] StartCluster: {Name:calico-387383 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-387383 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 19:43:23.854219  631152 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 19:43:23.854290  631152 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 19:43:23.887737  631152 cri.go:89] found id: ""
	I1027 19:43:23.887817  631152 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1027 19:43:23.903148  631152 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1027 19:43:23.911879  631152 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1027 19:43:23.911956  631152 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1027 19:43:23.921562  631152 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1027 19:43:23.921580  631152 kubeadm.go:157] found existing configuration files:
	
	I1027 19:43:23.921629  631152 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1027 19:43:23.931181  631152 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1027 19:43:23.931235  631152 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1027 19:43:23.940349  631152 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1027 19:43:23.949821  631152 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1027 19:43:23.949872  631152 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1027 19:43:23.958805  631152 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1027 19:43:23.969243  631152 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1027 19:43:23.969315  631152 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1027 19:43:23.978576  631152 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1027 19:43:23.988621  631152 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1027 19:43:23.988685  631152 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
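Before running kubeadm init, the stale-config check above inspects each kubeconfig under /etc/kubernetes and deletes any that does not already reference https://control-plane.minikube.internal:8443, so kubeadm regenerates them cleanly. The same keep-or-remove loop in Go (a sketch; the log performs it with grep and rm -f over SSH):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	confs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, c := range confs {
		data, err := os.ReadFile(c)
		if err != nil || !strings.Contains(string(data), endpoint) {
			os.Remove(c) // mirrors: sudo rm -f <conf>
			fmt.Println("removed stale", c)
		}
	}
}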
	I1027 19:43:23.998961  631152 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1027 19:43:24.070980  631152 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1042-gcp\n", err: exit status 1
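
The kubeadm.go:155/163 lines above are minikube's stale-config check: when ls -la on the four kubeconfig files exits non-zero the bulk cleanup is skipped, and each file is then grepped for the expected control-plane endpoint and removed if the grep fails, so kubeadm init starts from a clean /etc/kubernetes. A minimal standalone sketch of that logic in Go, shelling out with os/exec instead of minikube's ssh_runner (the endpoint and paths are copied from the log; the program itself is illustrative, not minikube's code):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Endpoint and paths mirror the log above; adjust for a real cluster.
	endpoint := "https://control-plane.minikube.internal:8443"
	confs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}

	// Config check: if any file is missing, ls -la exits non-zero and
	// minikube logs "config check failed, skipping stale config cleanup".
	if err := exec.Command("sudo", append([]string{"ls", "-la"}, confs...)...).Run(); err != nil {
		fmt.Println("config check failed, skipping stale config cleanup:", err)
	}

	// Per-file check: a conf that does not mention the expected endpoint
	// is treated as stale and removed before kubeadm init runs.
	for _, conf := range confs {
		if err := exec.Command("sudo", "grep", endpoint, conf).Run(); err != nil {
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, conf)
			_ = exec.Command("sudo", "rm", "-f", conf).Run() // best effort, as in the log
		}
	}
}
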
	I1027 19:43:23.684121  630779 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/kindnet-387383/apiserver.crt.035362ec ...
	I1027 19:43:23.684160  630779 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/kindnet-387383/apiserver.crt.035362ec: {Name:mkac5cc64507c3ad048c7d49e398887e77ecec0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:43:23.684404  630779 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/kindnet-387383/apiserver.key.035362ec ...
	I1027 19:43:23.684425  630779 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/kindnet-387383/apiserver.key.035362ec: {Name:mk60c42bc6264d59b9a6ac8cbee89223248b7d7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:43:23.684537  630779 certs.go:382] copying /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/kindnet-387383/apiserver.crt.035362ec -> /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/kindnet-387383/apiserver.crt
	I1027 19:43:23.684646  630779 certs.go:386] copying /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/kindnet-387383/apiserver.key.035362ec -> /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/kindnet-387383/apiserver.key
	I1027 19:43:23.684731  630779 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/kindnet-387383/proxy-client.key
	I1027 19:43:23.684751  630779 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/kindnet-387383/proxy-client.crt with IP's: []
	I1027 19:43:23.830266  630779 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/kindnet-387383/proxy-client.crt ...
	I1027 19:43:23.830310  630779 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/kindnet-387383/proxy-client.crt: {Name:mkef0d2b3404a5e128d7881dec6d699ea82a73c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:43:23.830539  630779 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/kindnet-387383/proxy-client.key ...
	I1027 19:43:23.830565  630779 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/kindnet-387383/proxy-client.key: {Name:mk4633005cf6602ccf6ea736710ef9b598373d05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:43:23.830836  630779 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/356415.pem (1338 bytes)
	W1027 19:43:23.830923  630779 certs.go:480] ignoring /home/jenkins/minikube-integration/21801-352833/.minikube/certs/356415_empty.pem, impossibly tiny 0 bytes
	I1027 19:43:23.830938  630779 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca-key.pem (1675 bytes)
	I1027 19:43:23.830974  630779 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/ca.pem (1078 bytes)
	I1027 19:43:23.831019  630779 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/cert.pem (1123 bytes)
	I1027 19:43:23.831052  630779 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-352833/.minikube/certs/key.pem (1679 bytes)
	I1027 19:43:23.831111  630779 certs.go:484] found cert: /home/jenkins/minikube-integration/21801-352833/.minikube/files/etc/ssl/certs/3564152.pem (1708 bytes)
	I1027 19:43:23.831781  630779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1027 19:43:23.853509  630779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1027 19:43:23.874759  630779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1027 19:43:23.899954  630779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1027 19:43:23.921532  630779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/kindnet-387383/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1027 19:43:23.942346  630779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/kindnet-387383/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1027 19:43:23.965787  630779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/kindnet-387383/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1027 19:43:23.988343  630779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/kindnet-387383/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1027 19:43:24.011636  630779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/files/etc/ssl/certs/3564152.pem --> /usr/share/ca-certificates/3564152.pem (1708 bytes)
	I1027 19:43:24.035513  630779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1027 19:43:24.058580  630779 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21801-352833/.minikube/certs/356415.pem --> /usr/share/ca-certificates/356415.pem (1338 bytes)
	I1027 19:43:24.080867  630779 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1027 19:43:24.096288  630779 ssh_runner.go:195] Run: openssl version
	I1027 19:43:24.103716  630779 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3564152.pem && ln -fs /usr/share/ca-certificates/3564152.pem /etc/ssl/certs/3564152.pem"
	I1027 19:43:24.114015  630779 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3564152.pem
	I1027 19:43:24.119280  630779 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 27 19:02 /usr/share/ca-certificates/3564152.pem
	I1027 19:43:24.119352  630779 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3564152.pem
	I1027 19:43:24.161780  630779 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3564152.pem /etc/ssl/certs/3ec20f2e.0"
	I1027 19:43:24.172593  630779 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1027 19:43:24.186031  630779 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1027 19:43:24.192119  630779 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 27 18:56 /usr/share/ca-certificates/minikubeCA.pem
	I1027 19:43:24.192268  630779 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1027 19:43:24.237686  630779 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1027 19:43:24.247867  630779 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/356415.pem && ln -fs /usr/share/ca-certificates/356415.pem /etc/ssl/certs/356415.pem"
	I1027 19:43:24.257623  630779 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/356415.pem
	I1027 19:43:24.262356  630779 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 27 19:02 /usr/share/ca-certificates/356415.pem
	I1027 19:43:24.262454  630779 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/356415.pem
	I1027 19:43:24.302894  630779 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/356415.pem /etc/ssl/certs/51391683.0"
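
The openssl runs above wire each copied CA into the OpenSSL trust store, which looks certificates up by subject hash: certs.go hashes the PEM with openssl x509 -hash -noout and then symlinks /etc/ssl/certs/<hash>.0 at it (b5213941.0 for minikubeCA.pem, for example). A sketch of the hashing step in Go; creating the symlink needs root, so this version only prints the ln -fs it would run:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Path mirrors the log above; any PEM certificate works here.
	pem := "/usr/share/ca-certificates/minikubeCA.pem"

	// OpenSSL resolves trust-store lookups via the subject hash,
	// computed the same way certs.go does in the runs above.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "hashing failed:", err)
		os.Exit(1)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"

	// The trust store entry is /etc/ssl/certs/<hash>.0 pointing at the PEM.
	fmt.Printf("ln -fs %s /etc/ssl/certs/%s.0\n", pem, hash)
}
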
	I1027 19:43:24.312688  630779 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1027 19:43:24.316629  630779 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1027 19:43:24.316699  630779 kubeadm.go:400] StartCluster: {Name:kindnet-387383 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-387383 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 19:43:24.316792  630779 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1027 19:43:24.316845  630779 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1027 19:43:24.348336  630779 cri.go:89] found id: ""
	I1027 19:43:24.348416  630779 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1027 19:43:24.357107  630779 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1027 19:43:24.366006  630779 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1027 19:43:24.366071  630779 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1027 19:43:24.374833  630779 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1027 19:43:24.374851  630779 kubeadm.go:157] found existing configuration files:
	
	I1027 19:43:24.374900  630779 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1027 19:43:24.384062  630779 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1027 19:43:24.384115  630779 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1027 19:43:24.391972  630779 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1027 19:43:24.399932  630779 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1027 19:43:24.400003  630779 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1027 19:43:24.407817  630779 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1027 19:43:24.416219  630779 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1027 19:43:24.416289  630779 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1027 19:43:24.424921  630779 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1027 19:43:24.433294  630779 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1027 19:43:24.433363  630779 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1027 19:43:24.440923  630779 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1027 19:43:24.488625  630779 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1027 19:43:24.489602  630779 kubeadm.go:318] [preflight] Running pre-flight checks
	I1027 19:43:24.513519  630779 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1027 19:43:24.513660  630779 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1042-gcp
	I1027 19:43:24.513734  630779 kubeadm.go:318] OS: Linux
	I1027 19:43:24.513811  630779 kubeadm.go:318] CGROUPS_CPU: enabled
	I1027 19:43:24.513877  630779 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1027 19:43:24.513944  630779 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1027 19:43:24.514035  630779 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1027 19:43:24.514119  630779 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1027 19:43:24.514219  630779 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1027 19:43:24.514296  630779 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1027 19:43:24.514358  630779 kubeadm.go:318] CGROUPS_IO: enabled
	I1027 19:43:24.578725  630779 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1027 19:43:24.578876  630779 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1027 19:43:24.579032  630779 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1027 19:43:24.586697  630779 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1027 19:43:24.589023  630779 out.go:252]   - Generating certificates and keys ...
	I1027 19:43:24.589103  630779 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1027 19:43:24.589211  630779 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1027 19:43:25.473644  630779 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1027 19:43:25.757796  630779 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1027 19:43:26.369204  630779 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1027 19:43:26.581906  630779 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1027 19:43:26.870740  630779 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1027 19:43:26.871075  630779 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [kindnet-387383 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1027 19:43:26.937178  630779 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1027 19:43:26.937370  630779 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [kindnet-387383 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1027 19:43:26.999332  630779 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1027 19:43:27.140747  630779 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1027 19:43:27.194455  630779 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1027 19:43:27.194562  630779 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1027 19:43:27.340769  630779 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1027 19:43:27.579588  630779 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1027 19:43:27.655006  630779 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1027 19:43:27.979679  630779 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1027 19:43:28.187021  630779 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1027 19:43:28.187775  630779 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1027 19:43:28.191794  630779 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1027 19:43:28.194236  630779 out.go:252]   - Booting up control plane ...
	I1027 19:43:28.194401  630779 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1027 19:43:28.194509  630779 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1027 19:43:28.194613  630779 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1027 19:43:28.213178  630779 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1027 19:43:28.213342  630779 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1027 19:43:28.220911  630779 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1027 19:43:28.221093  630779 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1027 19:43:28.221163  630779 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1027 19:43:28.337627  630779 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1027 19:43:28.337810  630779 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
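
The [kubelet-check] phase logged above is a plain HTTP poll of the kubelet healthz endpoint on 127.0.0.1:10248, abandoned after 4m0s. A standalone poller with the same shape (the 1s retry interval is an assumption, not kubeadm's exact backoff):

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	const url = "http://127.0.0.1:10248/healthz"
	deadline := time.Now().Add(4 * time.Minute) // "This can take up to 4m0s"

	for time.Now().Before(deadline) {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("kubelet is healthy")
				return
			}
		}
		time.Sleep(time.Second) // retry interval is an assumption
	}
	fmt.Println("kubelet never became healthy within 4m0s")
}
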
	W1027 19:43:24.680549  622136 node_ready.go:57] node "auto-387383" has "Ready":"False" status (will retry)
	W1027 19:43:27.178626  622136 node_ready.go:57] node "auto-387383" has "Ready":"False" status (will retry)
	I1027 19:43:24.137512  631152 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1027 19:43:33.245284  631152 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1027 19:43:33.245384  631152 kubeadm.go:318] [preflight] Running pre-flight checks
	I1027 19:43:33.245500  631152 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1027 19:43:33.245576  631152 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1042-gcp
	I1027 19:43:33.245636  631152 kubeadm.go:318] OS: Linux
	I1027 19:43:33.245688  631152 kubeadm.go:318] CGROUPS_CPU: enabled
	I1027 19:43:33.245759  631152 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1027 19:43:33.245834  631152 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1027 19:43:33.245928  631152 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1027 19:43:33.246014  631152 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1027 19:43:33.246095  631152 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1027 19:43:33.246203  631152 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1027 19:43:33.246276  631152 kubeadm.go:318] CGROUPS_IO: enabled
	I1027 19:43:33.246378  631152 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1027 19:43:33.246497  631152 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1027 19:43:33.246607  631152 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1027 19:43:33.246697  631152 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1027 19:43:33.248145  631152 out.go:252]   - Generating certificates and keys ...
	I1027 19:43:33.248251  631152 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1027 19:43:33.248374  631152 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1027 19:43:33.248477  631152 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1027 19:43:33.248570  631152 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1027 19:43:33.248653  631152 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1027 19:43:33.248747  631152 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1027 19:43:33.248847  631152 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1027 19:43:33.249022  631152 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [calico-387383 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1027 19:43:33.249113  631152 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1027 19:43:33.249309  631152 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [calico-387383 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1027 19:43:33.249404  631152 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1027 19:43:33.249487  631152 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1027 19:43:33.249550  631152 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1027 19:43:33.249617  631152 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1027 19:43:33.249676  631152 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1027 19:43:33.249746  631152 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1027 19:43:33.249814  631152 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1027 19:43:33.249912  631152 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1027 19:43:33.249971  631152 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1027 19:43:33.250057  631152 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1027 19:43:33.250188  631152 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1027 19:43:33.252877  631152 out.go:252]   - Booting up control plane ...
	I1027 19:43:33.253014  631152 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1027 19:43:33.253146  631152 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1027 19:43:33.253243  631152 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1027 19:43:33.253381  631152 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1027 19:43:33.253507  631152 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1027 19:43:33.253680  631152 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1027 19:43:33.253789  631152 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1027 19:43:33.253824  631152 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1027 19:43:33.253927  631152 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1027 19:43:33.254039  631152 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1027 19:43:33.254117  631152 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001003431s
	I1027 19:43:33.254296  631152 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1027 19:43:33.254391  631152 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1027 19:43:33.254479  631152 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1027 19:43:33.254590  631152 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1027 19:43:33.254679  631152 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 1.906067512s
	I1027 19:43:33.254772  631152 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.802875251s
	I1027 19:43:33.254860  631152 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.00302778s
	I1027 19:43:33.255044  631152 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1027 19:43:33.255242  631152 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1027 19:43:33.255316  631152 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1027 19:43:33.255542  631152 kubeadm.go:318] [mark-control-plane] Marking the node calico-387383 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1027 19:43:33.255616  631152 kubeadm.go:318] [bootstrap-token] Using token: uf0sfr.1hmb9njll2ht9b28
	I1027 19:43:33.258025  631152 out.go:252]   - Configuring RBAC rules ...
	I1027 19:43:33.258199  631152 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1027 19:43:33.258344  631152 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1027 19:43:33.258479  631152 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1027 19:43:33.258603  631152 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1027 19:43:33.258724  631152 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1027 19:43:33.258833  631152 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1027 19:43:33.258966  631152 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1027 19:43:33.259037  631152 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1027 19:43:33.259106  631152 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1027 19:43:33.259116  631152 kubeadm.go:318] 
	I1027 19:43:33.259249  631152 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1027 19:43:33.259263  631152 kubeadm.go:318] 
	I1027 19:43:33.259333  631152 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1027 19:43:33.259341  631152 kubeadm.go:318] 
	I1027 19:43:33.259363  631152 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1027 19:43:33.259413  631152 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1027 19:43:33.259457  631152 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1027 19:43:33.259463  631152 kubeadm.go:318] 
	I1027 19:43:33.259529  631152 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1027 19:43:33.259537  631152 kubeadm.go:318] 
	I1027 19:43:33.259589  631152 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1027 19:43:33.259599  631152 kubeadm.go:318] 
	I1027 19:43:33.259654  631152 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1027 19:43:33.259722  631152 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1027 19:43:33.259781  631152 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1027 19:43:33.259793  631152 kubeadm.go:318] 
	I1027 19:43:33.259880  631152 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1027 19:43:33.259964  631152 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1027 19:43:33.259972  631152 kubeadm.go:318] 
	I1027 19:43:33.260065  631152 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token uf0sfr.1hmb9njll2ht9b28 \
	I1027 19:43:33.260233  631152 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:ab29e81999671591f366788f5ae9ffb132789ebc71f7c0efdaecd38575a5ab6a \
	I1027 19:43:33.260269  631152 kubeadm.go:318] 	--control-plane 
	I1027 19:43:33.260278  631152 kubeadm.go:318] 
	I1027 19:43:33.260386  631152 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1027 19:43:33.260395  631152 kubeadm.go:318] 
	I1027 19:43:33.260496  631152 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token uf0sfr.1hmb9njll2ht9b28 \
	I1027 19:43:33.260655  631152 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:ab29e81999671591f366788f5ae9ffb132789ebc71f7c0efdaecd38575a5ab6a 
	I1027 19:43:33.260673  631152 cni.go:84] Creating CNI manager for "calico"
	I1027 19:43:33.262573  631152 out.go:179] * Configuring Calico (Container Networking Interface) ...
	I1027 19:43:29.338563  630779 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001046759s
	I1027 19:43:29.344620  630779 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1027 19:43:29.344779  630779 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1027 19:43:29.344916  630779 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1027 19:43:29.345049  630779 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1027 19:43:30.782547  630779 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.437855772s
	I1027 19:43:32.612043  630779 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 3.267336098s
	I1027 19:43:33.849120  630779 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.5043915s
	I1027 19:43:33.877834  630779 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1027 19:43:33.892099  630779 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1027 19:43:33.907259  630779 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1027 19:43:33.907522  630779 kubeadm.go:318] [mark-control-plane] Marking the node kindnet-387383 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1027 19:43:33.919683  630779 kubeadm.go:318] [bootstrap-token] Using token: 6mpkpu.9lbdr952x1s4u6wz
	W1027 19:43:29.178967  622136 node_ready.go:57] node "auto-387383" has "Ready":"False" status (will retry)
	W1027 19:43:31.678578  622136 node_ready.go:57] node "auto-387383" has "Ready":"False" status (will retry)
	W1027 19:43:33.678996  622136 node_ready.go:57] node "auto-387383" has "Ready":"False" status (will retry)
	I1027 19:43:33.265015  631152 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1027 19:43:33.265044  631152 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (539470 bytes)
	I1027 19:43:33.283930  631152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
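
Applying the Calico manifest reduces to a single kubectl apply against the node-local kubeconfig once the rendered YAML has been scp'd to /var/tmp/minikube/cni.yaml. A sketch of that final step, with the binary and paths taken from the Run: line above (meant to run on the node, hence sudo):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Binary and paths mirror the Run: line above.
	kubectl := "/var/lib/minikube/binaries/v1.34.1/kubectl"
	cmd := exec.Command("sudo", kubectl,
		"apply",
		"--kubeconfig=/var/lib/minikube/kubeconfig",
		"-f", "/var/tmp/minikube/cni.yaml")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "applying CNI manifest failed:", err)
		os.Exit(1)
	}
}
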
	I1027 19:43:33.922357  630779 out.go:252]   - Configuring RBAC rules ...
	I1027 19:43:33.922504  630779 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1027 19:43:33.928552  630779 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1027 19:43:33.936693  630779 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1027 19:43:33.940515  630779 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1027 19:43:33.944705  630779 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1027 19:43:33.949615  630779 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1027 19:43:34.257746  630779 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1027 19:43:34.679260  630779 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1027 19:43:35.260820  630779 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1027 19:43:35.261956  630779 kubeadm.go:318] 
	I1027 19:43:35.262053  630779 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1027 19:43:35.262069  630779 kubeadm.go:318] 
	I1027 19:43:35.262175  630779 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1027 19:43:35.262190  630779 kubeadm.go:318] 
	I1027 19:43:35.262218  630779 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1027 19:43:35.262295  630779 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1027 19:43:35.262356  630779 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1027 19:43:35.262365  630779 kubeadm.go:318] 
	I1027 19:43:35.262463  630779 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1027 19:43:35.262504  630779 kubeadm.go:318] 
	I1027 19:43:35.262561  630779 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1027 19:43:35.262569  630779 kubeadm.go:318] 
	I1027 19:43:35.262612  630779 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1027 19:43:35.262686  630779 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1027 19:43:35.262763  630779 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1027 19:43:35.262774  630779 kubeadm.go:318] 
	I1027 19:43:35.262891  630779 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1027 19:43:35.263002  630779 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1027 19:43:35.263009  630779 kubeadm.go:318] 
	I1027 19:43:35.263086  630779 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 6mpkpu.9lbdr952x1s4u6wz \
	I1027 19:43:35.263221  630779 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:ab29e81999671591f366788f5ae9ffb132789ebc71f7c0efdaecd38575a5ab6a \
	I1027 19:43:35.263259  630779 kubeadm.go:318] 	--control-plane 
	I1027 19:43:35.263269  630779 kubeadm.go:318] 
	I1027 19:43:35.263397  630779 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1027 19:43:35.263414  630779 kubeadm.go:318] 
	I1027 19:43:35.263524  630779 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 6mpkpu.9lbdr952x1s4u6wz \
	I1027 19:43:35.263653  630779 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:ab29e81999671591f366788f5ae9ffb132789ebc71f7c0efdaecd38575a5ab6a 
	I1027 19:43:35.266574  630779 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1042-gcp\n", err: exit status 1
	I1027 19:43:35.266689  630779 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1027 19:43:35.266713  630779 cni.go:84] Creating CNI manager for "kindnet"
	I1027 19:43:35.269408  630779 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1027 19:43:34.231872  631152 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1027 19:43:34.231993  631152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:43:34.232043  631152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes calico-387383 minikube.k8s.io/updated_at=2025_10_27T19_43_34_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6a09c31caecd3126cc8337ca95e52380a56f5a0f minikube.k8s.io/name=calico-387383 minikube.k8s.io/primary=true
	I1027 19:43:34.243601  631152 ops.go:34] apiserver oom_adj: -16
	I1027 19:43:34.337620  631152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:43:34.838295  631152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:43:35.338385  631152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:43:35.838015  631152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:43:36.338693  631152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:43:36.838559  631152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:43:37.337916  631152 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:43:37.412803  631152 kubeadm.go:1113] duration metric: took 3.180884094s to wait for elevateKubeSystemPrivileges
	I1027 19:43:37.412843  631152 kubeadm.go:402] duration metric: took 13.558737181s to StartCluster
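
The burst of kubectl get sa default runs above is minikube waiting for the ServiceAccount controller to create the default ServiceAccount, so that the minikube-rbac cluster-admin binding it just created actually takes effect; the kubeadm.go:1113 duration metric reports how long the loop spun. A sketch of that wait (the 500ms interval matches the log cadence; the 2-minute timeout is an assumption):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.34.1/kubectl"
	start := time.Now()
	deadline := start.Add(2 * time.Minute) // timeout is an assumption

	// Retry until `get sa default` succeeds, i.e. the ServiceAccount
	// controller has created the default SA for the namespace.
	for time.Now().Before(deadline) {
		err := exec.Command("sudo", kubectl, "get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig").Run()
		if err == nil {
			fmt.Printf("took %s to wait for elevateKubeSystemPrivileges\n", time.Since(start))
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("default ServiceAccount never appeared")
}
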
	I1027 19:43:37.412866  631152 settings.go:142] acquiring lock: {Name:mk8304c2106bf310642e0949fc0266ccb50f2f95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:43:37.412945  631152 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21801-352833/kubeconfig
	I1027 19:43:37.414632  631152 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-352833/kubeconfig: {Name:mk24cbe512a6907c874f3fb7a85390a8f9fd2b31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:43:37.414933  631152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1027 19:43:37.414944  631152 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 19:43:37.415013  631152 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1027 19:43:37.415175  631152 addons.go:69] Setting storage-provisioner=true in profile "calico-387383"
	I1027 19:43:37.415193  631152 addons.go:238] Setting addon storage-provisioner=true in "calico-387383"
	I1027 19:43:37.415195  631152 config.go:182] Loaded profile config "calico-387383": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:43:37.415231  631152 host.go:66] Checking if "calico-387383" exists ...
	I1027 19:43:37.415233  631152 addons.go:69] Setting default-storageclass=true in profile "calico-387383"
	I1027 19:43:37.415293  631152 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-387383"
	I1027 19:43:37.415695  631152 cli_runner.go:164] Run: docker container inspect calico-387383 --format={{.State.Status}}
	I1027 19:43:37.415786  631152 cli_runner.go:164] Run: docker container inspect calico-387383 --format={{.State.Status}}
	I1027 19:43:37.416670  631152 out.go:179] * Verifying Kubernetes components...
	I1027 19:43:37.418151  631152 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 19:43:37.440184  631152 addons.go:238] Setting addon default-storageclass=true in "calico-387383"
	I1027 19:43:37.440240  631152 host.go:66] Checking if "calico-387383" exists ...
	I1027 19:43:37.440664  631152 cli_runner.go:164] Run: docker container inspect calico-387383 --format={{.State.Status}}
	I1027 19:43:37.442649  631152 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 19:43:37.443825  631152 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 19:43:37.443850  631152 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1027 19:43:37.443918  631152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-387383
	I1027 19:43:37.478607  631152 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1027 19:43:37.478635  631152 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1027 19:43:37.478701  631152 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-387383
	I1027 19:43:37.479090  631152 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33485 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/calico-387383/id_rsa Username:docker}
	I1027 19:43:37.508427  631152 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33485 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/calico-387383/id_rsa Username:docker}
	I1027 19:43:37.520397  631152 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1027 19:43:37.571734  631152 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 19:43:37.608835  631152 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 19:43:37.627515  631152 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1027 19:43:37.718633  631152 start.go:976] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
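
The host record injection above rewrites the CoreDNS Corefile with sed, inserting a hosts block that resolves host.minikube.internal to the host-side address (192.168.103.1 here) before the forward plugin would send the query upstream. A toy Go version of the same text transformation, operating on a minimal Corefile fragment rather than the real ConfigMap:

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Minimal Corefile fragment standing in for the real ConfigMap data.
	corefile := `.:53 {
        errors
        forward . /etc/resolv.conf
    }`

	// Same effect as the sed in the log: insert a hosts block so
	// host.minikube.internal resolves locally before forwarding.
	hosts := `        hosts {
           192.168.103.1 host.minikube.internal
           fallthrough
        }
`
	var out strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		if strings.Contains(line, "forward . /etc/resolv.conf") {
			out.WriteString(hosts)
		}
		out.WriteString(line)
	}
	fmt.Println(out.String())
}
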
	I1027 19:43:37.719872  631152 node_ready.go:35] waiting up to 15m0s for node "calico-387383" to be "Ready" ...
	I1027 19:43:38.035661  631152 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1027 19:43:35.271209  630779 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1027 19:43:35.276008  630779 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1027 19:43:35.276033  630779 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1027 19:43:35.293423  630779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1027 19:43:35.550565  630779 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1027 19:43:35.550676  630779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:43:35.550694  630779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kindnet-387383 minikube.k8s.io/updated_at=2025_10_27T19_43_35_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6a09c31caecd3126cc8337ca95e52380a56f5a0f minikube.k8s.io/name=kindnet-387383 minikube.k8s.io/primary=true
	I1027 19:43:35.563929  630779 ops.go:34] apiserver oom_adj: -16
	I1027 19:43:35.672614  630779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:43:36.173261  630779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:43:36.673691  630779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:43:37.172754  630779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:43:37.673367  630779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:43:38.172750  630779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1027 19:43:35.679053  622136 node_ready.go:57] node "auto-387383" has "Ready":"False" status (will retry)
	W1027 19:43:37.679519  622136 node_ready.go:57] node "auto-387383" has "Ready":"False" status (will retry)
	I1027 19:43:38.037410  631152 addons.go:514] duration metric: took 622.396438ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1027 19:43:38.223497  631152 kapi.go:214] "coredns" deployment in "kube-system" namespace and "calico-387383" context rescaled to 1 replicas
	I1027 19:43:38.673631  630779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:43:39.173124  630779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:43:39.673530  630779 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1027 19:43:39.784830  630779 kubeadm.go:1113] duration metric: took 4.234262961s to wait for elevateKubeSystemPrivileges
	I1027 19:43:39.784874  630779 kubeadm.go:402] duration metric: took 15.468179838s to StartCluster
	I1027 19:43:39.784899  630779 settings.go:142] acquiring lock: {Name:mk8304c2106bf310642e0949fc0266ccb50f2f95 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:43:39.784991  630779 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21801-352833/kubeconfig
	I1027 19:43:39.787298  630779 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-352833/kubeconfig: {Name:mk24cbe512a6907c874f3fb7a85390a8f9fd2b31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 19:43:39.787592  630779 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1027 19:43:39.787688  630779 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1027 19:43:39.788082  630779 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1027 19:43:39.788218  630779 addons.go:69] Setting storage-provisioner=true in profile "kindnet-387383"
	I1027 19:43:39.788232  630779 addons.go:69] Setting default-storageclass=true in profile "kindnet-387383"
	I1027 19:43:39.788247  630779 addons.go:238] Setting addon storage-provisioner=true in "kindnet-387383"
	I1027 19:43:39.788257  630779 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kindnet-387383"
	I1027 19:43:39.788281  630779 host.go:66] Checking if "kindnet-387383" exists ...
	I1027 19:43:39.788671  630779 cli_runner.go:164] Run: docker container inspect kindnet-387383 --format={{.State.Status}}
	I1027 19:43:39.788838  630779 cli_runner.go:164] Run: docker container inspect kindnet-387383 --format={{.State.Status}}
	I1027 19:43:39.788837  630779 config.go:182] Loaded profile config "kindnet-387383": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:43:39.790950  630779 out.go:179] * Verifying Kubernetes components...
	I1027 19:43:39.792802  630779 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1027 19:43:39.826354  630779 addons.go:238] Setting addon default-storageclass=true in "kindnet-387383"
	I1027 19:43:39.826409  630779 host.go:66] Checking if "kindnet-387383" exists ...
	I1027 19:43:39.827908  630779 cli_runner.go:164] Run: docker container inspect kindnet-387383 --format={{.State.Status}}
	I1027 19:43:39.830542  630779 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1027 19:43:39.832059  630779 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 19:43:39.832083  630779 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1027 19:43:39.832168  630779 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-387383
	I1027 19:43:39.863885  630779 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1027 19:43:39.863915  630779 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1027 19:43:39.863998  630779 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-387383
	I1027 19:43:39.869463  630779 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33480 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/kindnet-387383/id_rsa Username:docker}
	I1027 19:43:39.900414  630779 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33480 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/kindnet-387383/id_rsa Username:docker}
	I1027 19:43:39.961804  630779 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1027 19:43:40.006450  630779 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1027 19:43:40.028342  630779 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1027 19:43:40.067233  630779 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1027 19:43:40.258394  630779 start.go:976] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1027 19:43:40.259277  630779 node_ready.go:35] waiting up to 15m0s for node "kindnet-387383" to be "Ready" ...
	I1027 19:43:40.498657  630779 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
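
node_ready.go:35 above begins polling the node's Ready condition with a 15m0s budget; the W-level "will retry" lines interleaved through this log are the same kind of loop observing Ready=False. A sketch of an equivalent check via kubectl and jsonpath (the 2s interval is an assumption, and kubectl is assumed to be on PATH with a working kubeconfig):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	// Node name taken from the log; the jsonpath pulls the Ready condition.
	node := "kindnet-387383"
	deadline := time.Now().Add(15 * time.Minute) // "waiting up to 15m0s"

	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "get", "node", node,
			"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
		if err == nil && strings.TrimSpace(string(out)) == "True" {
			fmt.Printf("node %q is Ready\n", node)
			return
		}
		time.Sleep(2 * time.Second) // interval is an assumption
	}
	fmt.Printf("node %q never became Ready\n", node)
}
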
	
	
	==> CRI-O <==
	Oct 27 19:43:07 default-k8s-diff-port-813397 crio[567]: time="2025-10-27T19:43:07.971127992Z" level=info msg="Started container" PID=1729 containerID=018a51229d9e57577826b454b250179e5170284fbbee8eaf8f73bb7ff0106c40 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fdv5r/dashboard-metrics-scraper id=bd0921e0-7182-4c78-b263-8eca15ad155a name=/runtime.v1.RuntimeService/StartContainer sandboxID=514a13049f5ff5ffa0892d6612cd174e20cc3678e3f1016c0cc5d59ac1dc3286
	Oct 27 19:43:08 default-k8s-diff-port-813397 crio[567]: time="2025-10-27T19:43:08.949600664Z" level=info msg="Removing container: 52d48213a1788841f147b8597cc6595fef278936c1b92a83552ce357ab8ee3f4" id=fc3e1aed-1b7e-4175-b4e2-c556ccfc43bb name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 27 19:43:08 default-k8s-diff-port-813397 crio[567]: time="2025-10-27T19:43:08.966218534Z" level=info msg="Removed container 52d48213a1788841f147b8597cc6595fef278936c1b92a83552ce357ab8ee3f4: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fdv5r/dashboard-metrics-scraper" id=fc3e1aed-1b7e-4175-b4e2-c556ccfc43bb name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 27 19:43:13 default-k8s-diff-port-813397 crio[567]: time="2025-10-27T19:43:13.965775267Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=672e31d6-9d09-4493-8e6f-f904eac4e109 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 19:43:14 default-k8s-diff-port-813397 crio[567]: time="2025-10-27T19:43:14.065207482Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=9fbb53f1-8021-4830-b369-8ba4ffaa64f5 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 19:43:14 default-k8s-diff-port-813397 crio[567]: time="2025-10-27T19:43:14.088089324Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=e7a96667-6a93-4975-b053-823e81725da0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 19:43:14 default-k8s-diff-port-813397 crio[567]: time="2025-10-27T19:43:14.088380622Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:43:14 default-k8s-diff-port-813397 crio[567]: time="2025-10-27T19:43:14.157554358Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:43:14 default-k8s-diff-port-813397 crio[567]: time="2025-10-27T19:43:14.157765088Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/2a50dce02cb9017b495d8fb58b39702bffedcddee2948ee821370d657b7f7f40/merged/etc/passwd: no such file or directory"
	Oct 27 19:43:14 default-k8s-diff-port-813397 crio[567]: time="2025-10-27T19:43:14.15778944Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/2a50dce02cb9017b495d8fb58b39702bffedcddee2948ee821370d657b7f7f40/merged/etc/group: no such file or directory"
	Oct 27 19:43:14 default-k8s-diff-port-813397 crio[567]: time="2025-10-27T19:43:14.1580072Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:43:14 default-k8s-diff-port-813397 crio[567]: time="2025-10-27T19:43:14.430361046Z" level=info msg="Created container aac23a7766ba54465e8372369b0736fdbf5d9242a8ef9f2ac26eedc0aad943f4: kube-system/storage-provisioner/storage-provisioner" id=e7a96667-6a93-4975-b053-823e81725da0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 19:43:14 default-k8s-diff-port-813397 crio[567]: time="2025-10-27T19:43:14.431113694Z" level=info msg="Starting container: aac23a7766ba54465e8372369b0736fdbf5d9242a8ef9f2ac26eedc0aad943f4" id=6024198a-6a69-4735-97fe-c12fa2fa176b name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 19:43:14 default-k8s-diff-port-813397 crio[567]: time="2025-10-27T19:43:14.433213273Z" level=info msg="Started container" PID=1743 containerID=aac23a7766ba54465e8372369b0736fdbf5d9242a8ef9f2ac26eedc0aad943f4 description=kube-system/storage-provisioner/storage-provisioner id=6024198a-6a69-4735-97fe-c12fa2fa176b name=/runtime.v1.RuntimeService/StartContainer sandboxID=a476b9e052022bfa9964afb950b20b1947301431f5ac7c469a956e9b9ed56237
	Oct 27 19:43:28 default-k8s-diff-port-813397 crio[567]: time="2025-10-27T19:43:28.819968208Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=e5245ccf-255b-4f6d-a1e5-58b535da5ff3 name=/runtime.v1.ImageService/ImageStatus
	Oct 27 19:43:28 default-k8s-diff-port-813397 crio[567]: time="2025-10-27T19:43:28.821293747Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=4bc8181b-6fad-462d-a5af-3dcfba7b3c2a name=/runtime.v1.ImageService/ImageStatus
	Oct 27 19:43:28 default-k8s-diff-port-813397 crio[567]: time="2025-10-27T19:43:28.822948039Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fdv5r/dashboard-metrics-scraper" id=1d0be550-1a41-4615-9f01-4b2747919133 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 19:43:28 default-k8s-diff-port-813397 crio[567]: time="2025-10-27T19:43:28.823157388Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:43:28 default-k8s-diff-port-813397 crio[567]: time="2025-10-27T19:43:28.830848877Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:43:28 default-k8s-diff-port-813397 crio[567]: time="2025-10-27T19:43:28.83161098Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 27 19:43:28 default-k8s-diff-port-813397 crio[567]: time="2025-10-27T19:43:28.862448131Z" level=info msg="Created container 73ec8a85e99a5706793ba06e7c17f5889883af7a6fba00f94e2367ec548fda2f: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fdv5r/dashboard-metrics-scraper" id=1d0be550-1a41-4615-9f01-4b2747919133 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 27 19:43:28 default-k8s-diff-port-813397 crio[567]: time="2025-10-27T19:43:28.866852252Z" level=info msg="Starting container: 73ec8a85e99a5706793ba06e7c17f5889883af7a6fba00f94e2367ec548fda2f" id=6322f436-089a-4cea-9239-6e42e9d8247c name=/runtime.v1.RuntimeService/StartContainer
	Oct 27 19:43:28 default-k8s-diff-port-813397 crio[567]: time="2025-10-27T19:43:28.869444761Z" level=info msg="Started container" PID=1779 containerID=73ec8a85e99a5706793ba06e7c17f5889883af7a6fba00f94e2367ec548fda2f description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fdv5r/dashboard-metrics-scraper id=6322f436-089a-4cea-9239-6e42e9d8247c name=/runtime.v1.RuntimeService/StartContainer sandboxID=514a13049f5ff5ffa0892d6612cd174e20cc3678e3f1016c0cc5d59ac1dc3286
	Oct 27 19:43:29 default-k8s-diff-port-813397 crio[567]: time="2025-10-27T19:43:29.012695253Z" level=info msg="Removing container: 018a51229d9e57577826b454b250179e5170284fbbee8eaf8f73bb7ff0106c40" id=d598aa05-9bf5-4df1-8096-58eb15ad82ca name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 27 19:43:29 default-k8s-diff-port-813397 crio[567]: time="2025-10-27T19:43:29.027709889Z" level=info msg="Removed container 018a51229d9e57577826b454b250179e5170284fbbee8eaf8f73bb7ff0106c40: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fdv5r/dashboard-metrics-scraper" id=d598aa05-9bf5-4df1-8096-58eb15ad82ca name=/runtime.v1.RuntimeService/RemoveContainer
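	
	The Created/Started/Removing cycle for dashboard-metrics-scraper above is CRI-O carrying out kubelet's CrashLoopBackOff restarts (the kubelet section below shows the back-off growing 10s -> 20s -> 40s). To see why the container keeps exiting, the previous instance's logs are the usual starting point; a sketch, assuming kubectl access and the pod name from this run:
	
	  # Logs of the last exited container in the crash-looping pod.
	  kubectl -n kubernetes-dashboard logs dashboard-metrics-scraper-6ffb444bf9-fdv5r --previous
	  # Or, on the node itself, list exited containers through the CRI:
	  sudo crictl ps -a --name dashboard-metrics-scraper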
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	73ec8a85e99a5       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           13 seconds ago       Exited              dashboard-metrics-scraper   3                   514a13049f5ff       dashboard-metrics-scraper-6ffb444bf9-fdv5r             kubernetes-dashboard
	aac23a7766ba5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           28 seconds ago       Running             storage-provisioner         1                   a476b9e052022       storage-provisioner                                    kube-system
	e3cb093a1aa0f       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   49 seconds ago       Running             kubernetes-dashboard        0                   cdf3011352a38       kubernetes-dashboard-855c9754f9-gllsf                  kubernetes-dashboard
	6352f76b57f5e       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           59 seconds ago       Running             coredns                     0                   32aa4814e3ccc       coredns-66bc5c9577-d2trp                               kube-system
	3e05d7811de2a       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           59 seconds ago       Running             busybox                     1                   46ac295a5c29c       busybox                                                default
	7c615af71a132       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           59 seconds ago       Running             kindnet-cni                 0                   3f74dbff6e12b       kindnet-hhddd                                          kube-system
	a99b69df12664       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           59 seconds ago       Exited              storage-provisioner         0                   a476b9e052022       storage-provisioner                                    kube-system
	2ad23fa6ba066       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           59 seconds ago       Running             kube-proxy                  0                   a536126784f99       kube-proxy-bldc8                                       kube-system
	d6d42a7474478       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           About a minute ago   Running             kube-controller-manager     0                   4a489daa30ff0       kube-controller-manager-default-k8s-diff-port-813397   kube-system
	0ef2559af1f10       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           About a minute ago   Running             etcd                        0                   44420c1add8b1       etcd-default-k8s-diff-port-813397                      kube-system
	9780797653aab       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           About a minute ago   Running             kube-scheduler              0                   1e79fca034135       kube-scheduler-default-k8s-diff-port-813397            kube-system
	71bc91522e0a3       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           About a minute ago   Running             kube-apiserver              0                   eca65d870d4f4       kube-apiserver-default-k8s-diff-port-813397            kube-system
	
	
	==> coredns [6352f76b57f5e0e0deff0e7dcd3aff94c185f37edfe63b6b2f233017bcc7468d] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:38418 - 38297 "HINFO IN 922907106206104028.5101411383343467804. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.12706401s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
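	
	The repeated "dial tcp 10.96.0.1:443: i/o timeout" errors above mean CoreDNS could not reach the kubernetes Service VIP in the first seconds after the restart, before kube-proxy finished syncing (its caches sync at 19:42:43 in the kube-proxy section below); the unsynced-API warning shows CoreDNS started serving anyway. One way to confirm the VIP is programmed once kube-proxy settles, assuming the iptables proxier this run uses:
	
	  # The ClusterIP should appear in kube-proxy's KUBE-SERVICES NAT chain on the node.
	  sudo iptables -t nat -L KUBE-SERVICES -n | grep 10.96.0.1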
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-813397
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-813397
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6a09c31caecd3126cc8337ca95e52380a56f5a0f
	                    minikube.k8s.io/name=default-k8s-diff-port-813397
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_27T19_41_45_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 27 Oct 2025 19:41:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-813397
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 27 Oct 2025 19:43:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 27 Oct 2025 19:43:33 +0000   Mon, 27 Oct 2025 19:41:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 27 Oct 2025 19:43:33 +0000   Mon, 27 Oct 2025 19:41:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 27 Oct 2025 19:43:33 +0000   Mon, 27 Oct 2025 19:41:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 27 Oct 2025 19:43:33 +0000   Mon, 27 Oct 2025 19:42:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-813397
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                7fbc9f19-9330-4688-94ac-b272ce8c2683
	  Boot ID:                    811bd29c-e64e-4acc-9427-bab1f7caed93
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 coredns-66bc5c9577-d2trp                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     113s
	  kube-system                 etcd-default-k8s-diff-port-813397                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         118s
	  kube-system                 kindnet-hhddd                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      113s
	  kube-system                 kube-apiserver-default-k8s-diff-port-813397             250m (3%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-813397    200m (2%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-proxy-bldc8                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-scheduler-default-k8s-diff-port-813397             100m (1%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-fdv5r              0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-gllsf                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 112s               kube-proxy       
	  Normal  Starting                 58s                kube-proxy       
	  Normal  NodeHasSufficientMemory  118s               kubelet          Node default-k8s-diff-port-813397 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    118s               kubelet          Node default-k8s-diff-port-813397 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     118s               kubelet          Node default-k8s-diff-port-813397 status is now: NodeHasSufficientPID
	  Normal  Starting                 118s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           114s               node-controller  Node default-k8s-diff-port-813397 event: Registered Node default-k8s-diff-port-813397 in Controller
	  Normal  NodeReady                102s               kubelet          Node default-k8s-diff-port-813397 status is now: NodeReady
	  Normal  Starting                 63s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  63s (x8 over 63s)  kubelet          Node default-k8s-diff-port-813397 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    63s (x8 over 63s)  kubelet          Node default-k8s-diff-port-813397 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     63s (x8 over 63s)  kubelet          Node default-k8s-diff-port-813397 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           57s                node-controller  Node default-k8s-diff-port-813397 event: Registered Node default-k8s-diff-port-813397 in Controller
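	
	As a consistency check, the Allocated resources block matches the per-pod figures above: CPU requests 100m (coredns) + 100m (etcd) + 100m (kindnet) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m (kube-scheduler) = 850m; memory requests 70Mi + 100Mi + 50Mi = 220Mi; and the 100m CPU / 220Mi memory limits come from kindnet (100m/50Mi) plus coredns (170Mi).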
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5a 23 52 43 9a ba 08 06
	[  +0.000398] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 12 50 95 0e df 53 08 06
	[Oct27 18:59] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +1.017295] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +1.023893] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +1.023882] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +1.023891] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +1.023851] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +2.047849] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +4.031592] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[  +8.319143] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[ +16.382183] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000010] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	[Oct27 19:00] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 36 b1 68 5e fb 11 ee 41 a8 24 98 ef 08 00
	
	
	==> etcd [0ef2559af1f1081ff5b055e5ba9d447a5c678b0a1ce12c6cb5f29cf71d5078e4] <==
	{"level":"warn","ts":"2025-10-27T19:42:41.628353Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:42:41.636064Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:42:41.645027Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:42:41.660535Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:42:41.667589Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:42:41.675024Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:42:41.681989Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:42:41.688733Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:42:41.696831Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:42:41.706111Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54218","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:42:41.714024Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:42:41.721793Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:42:41.742180Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:42:41.750566Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54304","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:42:41.757870Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54324","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-27T19:42:41.804655Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54360","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-27T19:42:46.979779Z","caller":"traceutil/trace.go:172","msg":"trace[2055433803] transaction","detail":"{read_only:false; response_revision:561; number_of_response:1; }","duration":"116.16129ms","start":"2025-10-27T19:42:46.863598Z","end":"2025-10-27T19:42:46.979759Z","steps":["trace[2055433803] 'process raft request'  (duration: 116.054987ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T19:43:12.944243Z","caller":"traceutil/trace.go:172","msg":"trace[743172718] linearizableReadLoop","detail":"{readStateIndex:641; appliedIndex:641; }","duration":"204.434432ms","start":"2025-10-27T19:43:12.739784Z","end":"2025-10-27T19:43:12.944219Z","steps":["trace[743172718] 'read index received'  (duration: 204.425267ms)","trace[743172718] 'applied index is now lower than readState.Index'  (duration: 7.825µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-27T19:43:12.944548Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"204.736879ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-27T19:43:12.944632Z","caller":"traceutil/trace.go:172","msg":"trace[2117002860] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:609; }","duration":"204.844651ms","start":"2025-10-27T19:43:12.739777Z","end":"2025-10-27T19:43:12.944621Z","steps":["trace[2117002860] 'agreement among raft nodes before linearized reading'  (duration: 204.699596ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T19:43:12.945300Z","caller":"traceutil/trace.go:172","msg":"trace[959555847] transaction","detail":"{read_only:false; response_revision:610; number_of_response:1; }","duration":"208.438594ms","start":"2025-10-27T19:43:12.736844Z","end":"2025-10-27T19:43:12.945283Z","steps":["trace[959555847] 'process raft request'  (duration: 208.10561ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-27T19:43:13.457977Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"190.948687ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722596663691127125 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-813397\" mod_revision:602 > success:<request_put:<key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-813397\" value_size:531 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/default-k8s-diff-port-813397\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-10-27T19:43:13.458183Z","caller":"traceutil/trace.go:172","msg":"trace[248761841] transaction","detail":"{read_only:false; response_revision:612; number_of_response:1; }","duration":"260.792104ms","start":"2025-10-27T19:43:13.197373Z","end":"2025-10-27T19:43:13.458165Z","steps":["trace[248761841] 'process raft request'  (duration: 68.919787ms)","trace[248761841] 'compare'  (duration: 190.811584ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-27T19:43:14.086917Z","caller":"traceutil/trace.go:172","msg":"trace[311934223] transaction","detail":"{read_only:false; response_revision:614; number_of_response:1; }","duration":"117.662748ms","start":"2025-10-27T19:43:13.969232Z","end":"2025-10-27T19:43:14.086895Z","steps":["trace[311934223] 'process raft request'  (duration: 117.523806ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-27T19:43:14.130074Z","caller":"traceutil/trace.go:172","msg":"trace[408081185] transaction","detail":"{read_only:false; response_revision:615; number_of_response:1; }","duration":"157.191937ms","start":"2025-10-27T19:43:13.972865Z","end":"2025-10-27T19:43:14.130057Z","steps":["trace[408081185] 'process raft request'  (duration: 157.024284ms)"],"step_count":1}
	
	
	==> kernel <==
	 19:43:42 up  2:26,  0 user,  load average: 5.95, 4.26, 2.61
	Linux default-k8s-diff-port-813397 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7c615af71a1328ed761f08f1b576963f0b4af669a2d38d4c04dcbc67befffac1] <==
	I1027 19:42:43.454415       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1027 19:42:43.454692       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1027 19:42:43.454882       1 main.go:148] setting mtu 1500 for CNI 
	I1027 19:42:43.454902       1 main.go:178] kindnetd IP family: "ipv4"
	I1027 19:42:43.454928       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-27T19:42:43Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1027 19:42:43.658211       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1027 19:42:43.658305       1 controller.go:381] "Waiting for informer caches to sync"
	I1027 19:42:43.658318       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1027 19:42:43.659560       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1027 19:42:44.052672       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1027 19:42:44.052708       1 metrics.go:72] Registering metrics
	I1027 19:42:44.052803       1 controller.go:711] "Syncing nftables rules"
	I1027 19:42:53.658242       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1027 19:42:53.658307       1 main.go:301] handling current node
	I1027 19:43:03.658640       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1027 19:43:03.658694       1 main.go:301] handling current node
	I1027 19:43:13.658383       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1027 19:43:13.658444       1 main.go:301] handling current node
	I1027 19:43:23.658609       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1027 19:43:23.658648       1 main.go:301] handling current node
	I1027 19:43:33.658936       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1027 19:43:33.658977       1 main.go:301] handling current node
	
	
	==> kube-apiserver [71bc91522e0a38092dcf74ebe27051d01aa77c65b02d1f845740c5a57c74c29b] <==
	I1027 19:42:42.312399       1 aggregator.go:171] initial CRD sync complete...
	I1027 19:42:42.312410       1 autoregister_controller.go:144] Starting autoregister controller
	I1027 19:42:42.312417       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1027 19:42:42.312424       1 cache.go:39] Caches are synced for autoregister controller
	I1027 19:42:42.311374       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1027 19:42:42.316185       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1027 19:42:42.319374       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1027 19:42:42.326779       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1027 19:42:42.326864       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1027 19:42:42.327958       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1027 19:42:42.327987       1 policy_source.go:240] refreshing policies
	I1027 19:42:42.357761       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1027 19:42:42.607753       1 controller.go:667] quota admission added evaluator for: namespaces
	I1027 19:42:42.650120       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1027 19:42:42.677574       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1027 19:42:42.685127       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1027 19:42:42.693093       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1027 19:42:42.738420       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.13.176"}
	I1027 19:42:42.753877       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.67.124"}
	I1027 19:42:43.215644       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1027 19:42:46.115658       1 controller.go:667] quota admission added evaluator for: endpoints
	I1027 19:42:46.115711       1 controller.go:667] quota admission added evaluator for: endpoints
	I1027 19:42:46.167387       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1027 19:42:46.215349       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1027 19:42:46.215349       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [d6d42a747447887cf7cfddbb910c2d92aff06ed6741847fd2f5efa19ba0e6533] <==
	I1027 19:42:45.623985       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1027 19:42:45.626893       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1027 19:42:45.631230       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1027 19:42:45.632462       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1027 19:42:45.635705       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1027 19:42:45.661294       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1027 19:42:45.661323       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1027 19:42:45.661331       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1027 19:42:45.661382       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1027 19:42:45.661385       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1027 19:42:45.661448       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1027 19:42:45.661727       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1027 19:42:45.662463       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1027 19:42:45.668268       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1027 19:42:45.668346       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1027 19:42:45.669424       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1027 19:42:45.670641       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1027 19:42:45.671894       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1027 19:42:45.676290       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1027 19:42:45.679006       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1027 19:42:45.681465       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1027 19:42:45.683844       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1027 19:42:45.688212       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1027 19:42:45.688308       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1027 19:42:45.691726       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	
	
	==> kube-proxy [2ad23fa6ba06688254490ad382551b5850d3c01b455056ac3570cd76e67f3b13] <==
	I1027 19:42:43.242591       1 server_linux.go:53] "Using iptables proxy"
	I1027 19:42:43.355459       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1027 19:42:43.456422       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1027 19:42:43.456469       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1027 19:42:43.456569       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1027 19:42:43.474954       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1027 19:42:43.475025       1 server_linux.go:132] "Using iptables Proxier"
	I1027 19:42:43.480222       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1027 19:42:43.480642       1 server.go:527] "Version info" version="v1.34.1"
	I1027 19:42:43.480671       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 19:42:43.482162       1 config.go:200] "Starting service config controller"
	I1027 19:42:43.482189       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1027 19:42:43.482225       1 config.go:106] "Starting endpoint slice config controller"
	I1027 19:42:43.482233       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1027 19:42:43.482265       1 config.go:403] "Starting serviceCIDR config controller"
	I1027 19:42:43.482290       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1027 19:42:43.482371       1 config.go:309] "Starting node config controller"
	I1027 19:42:43.482391       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1027 19:42:43.582385       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1027 19:42:43.582387       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1027 19:42:43.582408       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1027 19:42:43.582498       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [9780797653aab1b99e5b8a7975532cff7b3a72af97330b8012e4e50b4dadbfde] <==
	I1027 19:42:42.251472       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1027 19:42:42.251637       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1027 19:42:42.254629       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 19:42:42.254678       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1027 19:42:42.255059       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1027 19:42:42.255225       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1027 19:42:42.258935       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1027 19:42:42.260891       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1027 19:42:42.261019       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1027 19:42:42.261088       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1027 19:42:42.265851       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1027 19:42:42.266233       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1027 19:42:42.266347       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1027 19:42:42.266417       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1027 19:42:42.266478       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1027 19:42:42.266556       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1027 19:42:42.267328       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1027 19:42:42.268322       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1027 19:42:42.271243       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1027 19:42:42.271517       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1027 19:42:42.271639       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1027 19:42:42.272029       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1027 19:42:42.272186       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1027 19:42:42.272356       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	I1027 19:42:43.455496       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 27 19:42:46 default-k8s-diff-port-813397 kubelet[726]: I1027 19:42:46.379150     726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/460f77f5-a4eb-4992-a7b0-1413ca2d33c1-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-gllsf\" (UID: \"460f77f5-a4eb-4992-a7b0-1413ca2d33c1\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-gllsf"
	Oct 27 19:42:46 default-k8s-diff-port-813397 kubelet[726]: I1027 19:42:46.379252     726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccnjg\" (UniqueName: \"kubernetes.io/projected/460f77f5-a4eb-4992-a7b0-1413ca2d33c1-kube-api-access-ccnjg\") pod \"kubernetes-dashboard-855c9754f9-gllsf\" (UID: \"460f77f5-a4eb-4992-a7b0-1413ca2d33c1\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-gllsf"
	Oct 27 19:42:52 default-k8s-diff-port-813397 kubelet[726]: I1027 19:42:52.360500     726 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 27 19:42:55 default-k8s-diff-port-813397 kubelet[726]: I1027 19:42:55.760841     726 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-gllsf" podStartSLOduration=3.678249895 podStartE2EDuration="9.760813538s" podCreationTimestamp="2025-10-27 19:42:46 +0000 UTC" firstStartedPulling="2025-10-27 19:42:46.861109268 +0000 UTC m=+7.141001258" lastFinishedPulling="2025-10-27 19:42:52.943672906 +0000 UTC m=+13.223564901" observedRunningTime="2025-10-27 19:42:53.901459229 +0000 UTC m=+14.181351232" watchObservedRunningTime="2025-10-27 19:42:55.760813538 +0000 UTC m=+16.040705541"
	Oct 27 19:42:56 default-k8s-diff-port-813397 kubelet[726]: I1027 19:42:56.902303     726 scope.go:117] "RemoveContainer" containerID="8c2b6060feb1135b54f6456af74c20816936e9cf5ea1ffe21c88e1f46d1af198"
	Oct 27 19:42:57 default-k8s-diff-port-813397 kubelet[726]: I1027 19:42:57.907071     726 scope.go:117] "RemoveContainer" containerID="8c2b6060feb1135b54f6456af74c20816936e9cf5ea1ffe21c88e1f46d1af198"
	Oct 27 19:42:57 default-k8s-diff-port-813397 kubelet[726]: I1027 19:42:57.907235     726 scope.go:117] "RemoveContainer" containerID="52d48213a1788841f147b8597cc6595fef278936c1b92a83552ce357ab8ee3f4"
	Oct 27 19:42:57 default-k8s-diff-port-813397 kubelet[726]: E1027 19:42:57.907416     726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fdv5r_kubernetes-dashboard(48945846-3a22-4b08-ac60-4568409f1c83)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fdv5r" podUID="48945846-3a22-4b08-ac60-4568409f1c83"
	Oct 27 19:42:58 default-k8s-diff-port-813397 kubelet[726]: I1027 19:42:58.912885     726 scope.go:117] "RemoveContainer" containerID="52d48213a1788841f147b8597cc6595fef278936c1b92a83552ce357ab8ee3f4"
	Oct 27 19:42:58 default-k8s-diff-port-813397 kubelet[726]: E1027 19:42:58.913084     726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fdv5r_kubernetes-dashboard(48945846-3a22-4b08-ac60-4568409f1c83)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fdv5r" podUID="48945846-3a22-4b08-ac60-4568409f1c83"
	Oct 27 19:43:07 default-k8s-diff-port-813397 kubelet[726]: I1027 19:43:07.902554     726 scope.go:117] "RemoveContainer" containerID="52d48213a1788841f147b8597cc6595fef278936c1b92a83552ce357ab8ee3f4"
	Oct 27 19:43:08 default-k8s-diff-port-813397 kubelet[726]: I1027 19:43:08.945698     726 scope.go:117] "RemoveContainer" containerID="52d48213a1788841f147b8597cc6595fef278936c1b92a83552ce357ab8ee3f4"
	Oct 27 19:43:08 default-k8s-diff-port-813397 kubelet[726]: I1027 19:43:08.945985     726 scope.go:117] "RemoveContainer" containerID="018a51229d9e57577826b454b250179e5170284fbbee8eaf8f73bb7ff0106c40"
	Oct 27 19:43:08 default-k8s-diff-port-813397 kubelet[726]: E1027 19:43:08.946205     726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fdv5r_kubernetes-dashboard(48945846-3a22-4b08-ac60-4568409f1c83)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fdv5r" podUID="48945846-3a22-4b08-ac60-4568409f1c83"
	Oct 27 19:43:13 default-k8s-diff-port-813397 kubelet[726]: I1027 19:43:13.965323     726 scope.go:117] "RemoveContainer" containerID="a99b69df126644d4ba34b740a14a250d74ff8e1c6a80b438411dfe1669fada08"
	Oct 27 19:43:17 default-k8s-diff-port-813397 kubelet[726]: I1027 19:43:17.902047     726 scope.go:117] "RemoveContainer" containerID="018a51229d9e57577826b454b250179e5170284fbbee8eaf8f73bb7ff0106c40"
	Oct 27 19:43:17 default-k8s-diff-port-813397 kubelet[726]: E1027 19:43:17.902354     726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fdv5r_kubernetes-dashboard(48945846-3a22-4b08-ac60-4568409f1c83)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fdv5r" podUID="48945846-3a22-4b08-ac60-4568409f1c83"
	Oct 27 19:43:28 default-k8s-diff-port-813397 kubelet[726]: I1027 19:43:28.819329     726 scope.go:117] "RemoveContainer" containerID="018a51229d9e57577826b454b250179e5170284fbbee8eaf8f73bb7ff0106c40"
	Oct 27 19:43:29 default-k8s-diff-port-813397 kubelet[726]: I1027 19:43:29.011231     726 scope.go:117] "RemoveContainer" containerID="018a51229d9e57577826b454b250179e5170284fbbee8eaf8f73bb7ff0106c40"
	Oct 27 19:43:29 default-k8s-diff-port-813397 kubelet[726]: I1027 19:43:29.011492     726 scope.go:117] "RemoveContainer" containerID="73ec8a85e99a5706793ba06e7c17f5889883af7a6fba00f94e2367ec548fda2f"
	Oct 27 19:43:29 default-k8s-diff-port-813397 kubelet[726]: E1027 19:43:29.011850     726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fdv5r_kubernetes-dashboard(48945846-3a22-4b08-ac60-4568409f1c83)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fdv5r" podUID="48945846-3a22-4b08-ac60-4568409f1c83"
	Oct 27 19:43:36 default-k8s-diff-port-813397 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 27 19:43:36 default-k8s-diff-port-813397 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 27 19:43:36 default-k8s-diff-port-813397 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 27 19:43:36 default-k8s-diff-port-813397 systemd[1]: kubelet.service: Consumed 2.017s CPU time.
	
	
	==> kubernetes-dashboard [e3cb093a1aa0f1c554cd5ee66a4a34809e2ef72e9a8a48c1a6c6e48763472af4] <==
	2025/10/27 19:42:53 Starting overwatch
	2025/10/27 19:42:53 Using namespace: kubernetes-dashboard
	2025/10/27 19:42:53 Using in-cluster config to connect to apiserver
	2025/10/27 19:42:53 Using secret token for csrf signing
	2025/10/27 19:42:53 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/27 19:42:53 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/27 19:42:53 Successful initial request to the apiserver, version: v1.34.1
	2025/10/27 19:42:53 Generating JWE encryption key
	2025/10/27 19:42:53 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/27 19:42:53 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/27 19:42:53 Initializing JWE encryption key from synchronized object
	2025/10/27 19:42:53 Creating in-cluster Sidecar client
	2025/10/27 19:42:53 Serving insecurely on HTTP port: 9090
	2025/10/27 19:42:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/27 19:43:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [a99b69df126644d4ba34b740a14a250d74ff8e1c6a80b438411dfe1669fada08] <==
	I1027 19:42:43.211375       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1027 19:43:13.213601       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [aac23a7766ba54465e8372369b0736fdbf5d9242a8ef9f2ac26eedc0aad943f4] <==
	I1027 19:43:14.452682       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1027 19:43:14.452721       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1027 19:43:14.454940       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:43:17.909926       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:43:22.171061       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:43:25.769386       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:43:28.824639       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:43:31.848315       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:43:31.854450       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1027 19:43:31.854685       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1027 19:43:31.854906       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-813397_7253d2a6-9e8d-4078-9636-f5a8ce6ed6af!
	I1027 19:43:31.856222       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cbed91f6-01d4-484d-a71d-80aad634d779", APIVersion:"v1", ResourceVersion:"635", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-813397_7253d2a6-9e8d-4078-9636-f5a8ce6ed6af became leader
	W1027 19:43:31.859904       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:43:31.874147       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1027 19:43:31.955726       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-813397_7253d2a6-9e8d-4078-9636-f5a8ce6ed6af!
	W1027 19:43:33.881929       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:43:33.887779       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:43:35.892348       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:43:35.899556       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:43:37.903288       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:43:37.908615       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:43:39.917605       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:43:39.929335       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:43:41.932739       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1027 19:43:41.943407       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-813397 -n default-k8s-diff-port-813397
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-813397 -n default-k8s-diff-port-813397: exit status 2 (357.082498ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-813397 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (7.64s)
E1027 19:45:25.540489  356415 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/old-k8s-version-468959/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
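
Note: the storage-provisioner log above is dominated by "v1 Endpoints is deprecated in v1.33+" client-go warnings because the provisioner's leader election still locks on the kube-system/k8s.io-minikube-hostpath Endpoints object. A minimal way to inspect both the legacy object and its replacement, assuming the default-k8s-diff-port-813397 cluster from this run were still up (context and object names copied from the log; purely illustrative):

    # Endpoints object the provisioner's leader-election lease is stored on (legacy API)
    kubectl --context default-k8s-diff-port-813397 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml
    # EndpointSlice objects the deprecation warning points to as the replacement
    kubectl --context default-k8s-diff-port-813397 -n kube-system get endpointslices.discovery.k8s.io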

                                                
                                    

Test pass (263/327)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 5.12
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.08
9 TestDownloadOnly/v1.28.0/DeleteAll 0.24
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.16
12 TestDownloadOnly/v1.34.1/json-events 3.52
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.08
18 TestDownloadOnly/v1.34.1/DeleteAll 0.25
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.16
20 TestDownloadOnlyKic 0.46
21 TestBinaryMirror 0.87
22 TestOffline 57.5
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 152.56
31 TestAddons/serial/GCPAuth/Namespaces 0.12
32 TestAddons/serial/GCPAuth/FakeCredentials 8.44
48 TestAddons/StoppedEnableDisable 16.75
49 TestCertOptions 34.07
50 TestCertExpiration 217.3
52 TestForceSystemdFlag 40.89
53 TestForceSystemdEnv 34.21
58 TestErrorSpam/setup 21.39
59 TestErrorSpam/start 0.72
60 TestErrorSpam/status 1.03
61 TestErrorSpam/pause 6.95
62 TestErrorSpam/unpause 5.65
63 TestErrorSpam/stop 2.72
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 37.55
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 6.46
70 TestFunctional/serial/KubeContext 0.05
71 TestFunctional/serial/KubectlGetPods 0.06
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.11
75 TestFunctional/serial/CacheCmd/cache/add_local 1.22
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
77 TestFunctional/serial/CacheCmd/cache/list 0.07
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.31
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.72
80 TestFunctional/serial/CacheCmd/cache/delete 0.13
81 TestFunctional/serial/MinikubeKubectlCmd 0.13
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
83 TestFunctional/serial/ExtraConfig 42.13
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 1.34
86 TestFunctional/serial/LogsFileCmd 1.35
87 TestFunctional/serial/InvalidService 3.85
89 TestFunctional/parallel/ConfigCmd 0.47
90 TestFunctional/parallel/DashboardCmd 8.35
91 TestFunctional/parallel/DryRun 0.43
92 TestFunctional/parallel/InternationalLanguage 0.19
93 TestFunctional/parallel/StatusCmd 1.32
98 TestFunctional/parallel/AddonsCmd 0.17
99 TestFunctional/parallel/PersistentVolumeClaim 27.56
101 TestFunctional/parallel/SSHCmd 0.56
102 TestFunctional/parallel/CpCmd 1.88
103 TestFunctional/parallel/MySQL 17.78
104 TestFunctional/parallel/FileSync 0.32
105 TestFunctional/parallel/CertSync 1.94
109 TestFunctional/parallel/NodeLabels 0.06
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.7
113 TestFunctional/parallel/License 0.47
115 TestFunctional/parallel/UpdateContextCmd/no_changes 0.15
116 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.16
117 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.15
118 TestFunctional/parallel/ProfileCmd/profile_not_create 0.63
119 TestFunctional/parallel/MountCmd/any-port 9.38
120 TestFunctional/parallel/ProfileCmd/profile_list 0.51
121 TestFunctional/parallel/ProfileCmd/profile_json_output 0.51
122 TestFunctional/parallel/Version/short 0.07
123 TestFunctional/parallel/Version/components 0.52
124 TestFunctional/parallel/MountCmd/specific-port 1.89
125 TestFunctional/parallel/MountCmd/VerifyCleanup 1.8
127 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.53
128 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
130 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.21
131 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.07
132 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
136 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
137 TestFunctional/parallel/ImageCommands/ImageListShort 0.25
138 TestFunctional/parallel/ImageCommands/ImageListTable 0.24
139 TestFunctional/parallel/ImageCommands/ImageListJson 0.25
140 TestFunctional/parallel/ImageCommands/ImageListYaml 0.24
141 TestFunctional/parallel/ImageCommands/ImageBuild 2.94
142 TestFunctional/parallel/ImageCommands/Setup 1.29
147 TestFunctional/parallel/ImageCommands/ImageRemove 0.53
150 TestFunctional/parallel/ServiceCmd/List 1.72
151 TestFunctional/parallel/ServiceCmd/JSONOutput 1.71
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 123.26
163 TestMultiControlPlane/serial/DeployApp 5.56
164 TestMultiControlPlane/serial/PingHostFromPods 1.14
165 TestMultiControlPlane/serial/AddWorkerNode 27.68
166 TestMultiControlPlane/serial/NodeLabels 0.07
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.95
168 TestMultiControlPlane/serial/CopyFile 18.76
169 TestMultiControlPlane/serial/StopSecondaryNode 19.96
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.77
171 TestMultiControlPlane/serial/RestartSecondaryNode 9.25
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.97
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 107.73
174 TestMultiControlPlane/serial/DeleteSecondaryNode 10.73
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.75
176 TestMultiControlPlane/serial/StopCluster 43.27
177 TestMultiControlPlane/serial/RestartCluster 52.48
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.74
179 TestMultiControlPlane/serial/AddSecondaryNode 44.2
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.96
185 TestJSONOutput/start/Command 40.15
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 6.12
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.26
210 TestKicCustomNetwork/create_custom_network 27.9
211 TestKicCustomNetwork/use_default_bridge_network 25.49
212 TestKicExistingNetwork 24.11
213 TestKicCustomSubnet 25.12
214 TestKicStaticIP 26.53
215 TestMainNoArgs 0.07
216 TestMinikubeProfile 51.89
219 TestMountStart/serial/StartWithMountFirst 5.67
220 TestMountStart/serial/VerifyMountFirst 0.29
221 TestMountStart/serial/StartWithMountSecond 5.43
222 TestMountStart/serial/VerifyMountSecond 0.29
223 TestMountStart/serial/DeleteFirst 1.76
224 TestMountStart/serial/VerifyMountPostDelete 0.29
225 TestMountStart/serial/Stop 1.27
226 TestMountStart/serial/RestartStopped 7.51
227 TestMountStart/serial/VerifyMountPostStop 0.29
230 TestMultiNode/serial/FreshStart2Nodes 62.76
231 TestMultiNode/serial/DeployApp2Nodes 3.68
232 TestMultiNode/serial/PingHostFrom2Pods 0.77
233 TestMultiNode/serial/AddNode 26.95
234 TestMultiNode/serial/MultiNodeLabels 0.06
235 TestMultiNode/serial/ProfileList 0.7
236 TestMultiNode/serial/CopyFile 10.47
237 TestMultiNode/serial/StopNode 2.34
238 TestMultiNode/serial/StartAfterStop 7.5
239 TestMultiNode/serial/RestartKeepsNodes 79.47
240 TestMultiNode/serial/DeleteNode 5.39
241 TestMultiNode/serial/StopMultiNode 28.61
242 TestMultiNode/serial/RestartMultiNode 50.36
243 TestMultiNode/serial/ValidateNameConflict 24.61
248 TestPreload 109.76
250 TestScheduledStopUnix 97.52
253 TestInsufficientStorage 10.08
254 TestRunningBinaryUpgrade 50.87
256 TestKubernetesUpgrade 301.19
257 TestMissingContainerUpgrade 104.01
259 TestPause/serial/Start 54.53
260 TestPause/serial/SecondStartNoReconfiguration 7.62
262 TestStoppedBinaryUpgrade/Setup 0.58
263 TestStoppedBinaryUpgrade/Upgrade 46.02
264 TestStoppedBinaryUpgrade/MinikubeLogs 1.06
273 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
274 TestNoKubernetes/serial/StartWithK8s 30.56
275 TestNoKubernetes/serial/StartWithStopK8s 20.73
283 TestNetworkPlugins/group/false 3.71
288 TestStartStop/group/old-k8s-version/serial/FirstStart 49.28
289 TestNoKubernetes/serial/Start 4.75
290 TestNoKubernetes/serial/VerifyK8sNotRunning 0.32
291 TestNoKubernetes/serial/ProfileList 34.63
292 TestNoKubernetes/serial/Stop 1.29
293 TestNoKubernetes/serial/StartNoArgs 6.66
294 TestStartStop/group/old-k8s-version/serial/DeployApp 9.24
295 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.3
297 TestStartStop/group/embed-certs/serial/FirstStart 43.08
299 TestStartStop/group/old-k8s-version/serial/Stop 16.08
300 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.24
301 TestStartStop/group/old-k8s-version/serial/SecondStart 48.15
303 TestStartStop/group/no-preload/serial/FirstStart 51.23
304 TestStartStop/group/embed-certs/serial/DeployApp 8.28
306 TestStartStop/group/embed-certs/serial/Stop 16.54
307 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.23
308 TestStartStop/group/embed-certs/serial/SecondStart 46.14
309 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
310 TestStartStop/group/no-preload/serial/DeployApp 8.25
311 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
312 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
315 TestStartStop/group/no-preload/serial/Stop 17.07
317 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 39.94
318 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.25
319 TestStartStop/group/no-preload/serial/SecondStart 49.33
320 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
321 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
322 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
324 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.25
326 TestStartStop/group/newest-cni/serial/FirstStart 27.16
328 TestStartStop/group/default-k8s-diff-port/serial/Stop 18.12
329 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
330 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.11
331 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.22
332 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 51.3
333 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.32
335 TestStartStop/group/newest-cni/serial/DeployApp 0
337 TestStartStop/group/newest-cni/serial/Stop 8.12
338 TestNetworkPlugins/group/auto/Start 75.31
339 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.22
340 TestStartStop/group/newest-cni/serial/SecondStart 13.78
341 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
342 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
343 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.28
345 TestNetworkPlugins/group/kindnet/Start 75.12
346 TestNetworkPlugins/group/calico/Start 52.52
347 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
348 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
349 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.29
351 TestNetworkPlugins/group/custom-flannel/Start 53.15
352 TestNetworkPlugins/group/auto/KubeletFlags 0.37
353 TestNetworkPlugins/group/auto/NetCatPod 9.26
354 TestNetworkPlugins/group/calico/ControllerPod 6.01
355 TestNetworkPlugins/group/calico/KubeletFlags 0.32
356 TestNetworkPlugins/group/calico/NetCatPod 8.2
357 TestNetworkPlugins/group/auto/DNS 0.19
358 TestNetworkPlugins/group/auto/Localhost 0.14
359 TestNetworkPlugins/group/auto/HairPin 0.14
360 TestNetworkPlugins/group/calico/DNS 0.14
361 TestNetworkPlugins/group/calico/Localhost 0.11
362 TestNetworkPlugins/group/calico/HairPin 0.1
363 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
364 TestNetworkPlugins/group/kindnet/KubeletFlags 0.36
365 TestNetworkPlugins/group/kindnet/NetCatPod 9.22
366 TestNetworkPlugins/group/enable-default-cni/Start 76.81
367 TestNetworkPlugins/group/flannel/Start 47.97
368 TestNetworkPlugins/group/kindnet/DNS 0.13
369 TestNetworkPlugins/group/kindnet/Localhost 0.11
370 TestNetworkPlugins/group/kindnet/HairPin 0.11
371 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.36
372 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.24
373 TestNetworkPlugins/group/custom-flannel/DNS 0.13
374 TestNetworkPlugins/group/custom-flannel/Localhost 0.11
375 TestNetworkPlugins/group/custom-flannel/HairPin 0.12
376 TestNetworkPlugins/group/bridge/Start 37.11
377 TestNetworkPlugins/group/flannel/ControllerPod 6.01
378 TestNetworkPlugins/group/flannel/KubeletFlags 0.31
379 TestNetworkPlugins/group/flannel/NetCatPod 8.19
380 TestNetworkPlugins/group/bridge/KubeletFlags 0.31
381 TestNetworkPlugins/group/bridge/NetCatPod 9.19
382 TestNetworkPlugins/group/flannel/DNS 0.11
383 TestNetworkPlugins/group/flannel/Localhost 0.09
384 TestNetworkPlugins/group/flannel/HairPin 0.1
385 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.35
386 TestNetworkPlugins/group/bridge/DNS 0.13
387 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.24
388 TestNetworkPlugins/group/bridge/Localhost 0.12
389 TestNetworkPlugins/group/bridge/HairPin 0.11
390 TestNetworkPlugins/group/enable-default-cni/DNS 0.13
391 TestNetworkPlugins/group/enable-default-cni/Localhost 0.11
392 TestNetworkPlugins/group/enable-default-cni/HairPin 0.11
TestDownloadOnly/v1.28.0/json-events (5.12s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-515117 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-515117 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.121186542s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (5.12s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1027 18:56:21.690272  356415 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1027 18:56:21.690423  356415 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21801-352833/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
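
The preload-exists check can be reproduced by hand; a minimal sketch, assuming the same MINIKUBE_HOME as this run (the path is copied verbatim from the log above):

    # The test passes as long as this cached tarball is present on disk
    ls -lh /home/jenkins/minikube-integration/21801-352833/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4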

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-515117
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-515117: exit status 85 (81.775504ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-515117 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-515117 │ jenkins │ v1.37.0 │ 27 Oct 25 18:56 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 18:56:16
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 18:56:16.625638  356427 out.go:360] Setting OutFile to fd 1 ...
	I1027 18:56:16.625944  356427 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 18:56:16.625956  356427 out.go:374] Setting ErrFile to fd 2...
	I1027 18:56:16.625961  356427 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 18:56:16.626195  356427 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-352833/.minikube/bin
	W1027 18:56:16.626339  356427 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21801-352833/.minikube/config/config.json: open /home/jenkins/minikube-integration/21801-352833/.minikube/config/config.json: no such file or directory
	I1027 18:56:16.626925  356427 out.go:368] Setting JSON to true
	I1027 18:56:16.628302  356427 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5926,"bootTime":1761585451,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1027 18:56:16.628368  356427 start.go:141] virtualization: kvm guest
	I1027 18:56:16.630785  356427 out.go:99] [download-only-515117] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1027 18:56:16.630986  356427 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21801-352833/.minikube/cache/preloaded-tarball: no such file or directory
	I1027 18:56:16.630990  356427 notify.go:220] Checking for updates...
	I1027 18:56:16.632915  356427 out.go:171] MINIKUBE_LOCATION=21801
	I1027 18:56:16.634645  356427 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 18:56:16.636292  356427 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21801-352833/kubeconfig
	I1027 18:56:16.640713  356427 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21801-352833/.minikube
	I1027 18:56:16.642312  356427 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1027 18:56:16.645120  356427 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1027 18:56:16.645480  356427 driver.go:421] Setting default libvirt URI to qemu:///system
	I1027 18:56:16.671658  356427 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1027 18:56:16.671751  356427 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 18:56:16.730045  356427 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:64 SystemTime:2025-10-27 18:56:16.719805621 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 18:56:16.730200  356427 docker.go:318] overlay module found
	I1027 18:56:16.731998  356427 out.go:99] Using the docker driver based on user configuration
	I1027 18:56:16.732032  356427 start.go:305] selected driver: docker
	I1027 18:56:16.732042  356427 start.go:925] validating driver "docker" against <nil>
	I1027 18:56:16.732182  356427 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 18:56:16.789560  356427 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:64 SystemTime:2025-10-27 18:56:16.779119324 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 18:56:16.789759  356427 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1027 18:56:16.790476  356427 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1027 18:56:16.790663  356427 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1027 18:56:16.792833  356427 out.go:171] Using Docker driver with root privileges
	I1027 18:56:16.794301  356427 cni.go:84] Creating CNI manager for ""
	I1027 18:56:16.794415  356427 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1027 18:56:16.794438  356427 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1027 18:56:16.794535  356427 start.go:349] cluster config:
	{Name:download-only-515117 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-515117 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 18:56:16.795982  356427 out.go:99] Starting "download-only-515117" primary control-plane node in "download-only-515117" cluster
	I1027 18:56:16.796012  356427 cache.go:123] Beginning downloading kic base image for docker with crio
	I1027 18:56:16.797346  356427 out.go:99] Pulling base image v0.0.48-1760939008-21773 ...
	I1027 18:56:16.797385  356427 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1027 18:56:16.797472  356427 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1027 18:56:16.816102  356427 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1027 18:56:16.816380  356427 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory
	I1027 18:56:16.816482  356427 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1027 18:56:16.818887  356427 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1027 18:56:16.818919  356427 cache.go:58] Caching tarball of preloaded images
	I1027 18:56:16.819050  356427 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1027 18:56:16.821092  356427 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1027 18:56:16.821124  356427 preload.go:313] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1027 18:56:16.847287  356427 preload.go:290] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1027 18:56:16.847409  356427 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/21801-352833/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1027 18:56:20.079659  356427 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 as a tarball
	I1027 18:56:21.026456  356427 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1027 18:56:21.026842  356427 profile.go:143] Saving config to /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/download-only-515117/config.json ...
	I1027 18:56:21.026873  356427 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/download-only-515117/config.json: {Name:mke2e54a2ba9e604db22ba4ec6d7a69ca02b535a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1027 18:56:21.027083  356427 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1027 18:56:21.027390  356427 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/21801-352833/.minikube/cache/linux/amd64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-515117 host does not exist
	  To start a cluster, run: "minikube start -p download-only-515117"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.24s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.24s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-515117
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.16s)

                                                
                                    
TestDownloadOnly/v1.34.1/json-events (3.52s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-339078 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-339078 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (3.522106528s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (3.52s)

                                                
                                    
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1027 18:56:25.696885  356415 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1027 18:56:25.696939  356415 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21801-352833/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-339078
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-339078: exit status 85 (81.638561ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-515117 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-515117 │ jenkins │ v1.37.0 │ 27 Oct 25 18:56 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 27 Oct 25 18:56 UTC │ 27 Oct 25 18:56 UTC │
	│ delete  │ -p download-only-515117                                                                                                                                                   │ download-only-515117 │ jenkins │ v1.37.0 │ 27 Oct 25 18:56 UTC │ 27 Oct 25 18:56 UTC │
	│ start   │ -o=json --download-only -p download-only-339078 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-339078 │ jenkins │ v1.37.0 │ 27 Oct 25 18:56 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/27 18:56:22
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1027 18:56:22.231353  356781 out.go:360] Setting OutFile to fd 1 ...
	I1027 18:56:22.231495  356781 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 18:56:22.231502  356781 out.go:374] Setting ErrFile to fd 2...
	I1027 18:56:22.231508  356781 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 18:56:22.231709  356781 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-352833/.minikube/bin
	I1027 18:56:22.232213  356781 out.go:368] Setting JSON to true
	I1027 18:56:22.233159  356781 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":5931,"bootTime":1761585451,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1027 18:56:22.233256  356781 start.go:141] virtualization: kvm guest
	I1027 18:56:22.235325  356781 out.go:99] [download-only-339078] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1027 18:56:22.235551  356781 notify.go:220] Checking for updates...
	I1027 18:56:22.236970  356781 out.go:171] MINIKUBE_LOCATION=21801
	I1027 18:56:22.238661  356781 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 18:56:22.240169  356781 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21801-352833/kubeconfig
	I1027 18:56:22.241654  356781 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21801-352833/.minikube
	I1027 18:56:22.243010  356781 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1027 18:56:22.245572  356781 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1027 18:56:22.245843  356781 driver.go:421] Setting default libvirt URI to qemu:///system
	I1027 18:56:22.271101  356781 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1027 18:56:22.271209  356781 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 18:56:22.330772  356781 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:52 SystemTime:2025-10-27 18:56:22.319308424 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 18:56:22.330887  356781 docker.go:318] overlay module found
	I1027 18:56:22.332828  356781 out.go:99] Using the docker driver based on user configuration
	I1027 18:56:22.332867  356781 start.go:305] selected driver: docker
	I1027 18:56:22.332874  356781 start.go:925] validating driver "docker" against <nil>
	I1027 18:56:22.332976  356781 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 18:56:22.389226  356781 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:52 SystemTime:2025-10-27 18:56:22.378967086 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 18:56:22.389495  356781 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1027 18:56:22.390053  356781 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1027 18:56:22.390260  356781 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1027 18:56:22.392727  356781 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-339078 host does not exist
	  To start a cluster, run: "minikube start -p download-only-339078"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAll (0.25s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.25s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-339078
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.16s)

                                                
                                    
TestDownloadOnlyKic (0.46s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-738250 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-738250" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-738250
--- PASS: TestDownloadOnlyKic (0.46s)

                                                
                                    
TestBinaryMirror (0.87s)

                                                
                                                
=== RUN   TestBinaryMirror
I1027 18:56:26.973097  356415 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-394940 --alsologtostderr --binary-mirror http://127.0.0.1:39569 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-394940" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-394940
--- PASS: TestBinaryMirror (0.87s)
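
To reproduce this outside CI, the same flags work against any local static file server; a sketch reassembled from the exact command in the log (the mirror URL is simply whatever address the test's helper server happened to listen on):

    out/minikube-linux-amd64 start --download-only -p binary-mirror-394940 --alsologtostderr \
      --binary-mirror http://127.0.0.1:39569 --driver=docker --container-runtime=crio
    out/minikube-linux-amd64 delete -p binary-mirror-394940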

                                                
                                    
TestOffline (57.5s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-221701 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-221701 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (54.59138384s)
helpers_test.go:175: Cleaning up "offline-crio-221701" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-221701
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-221701: (2.904282444s)
--- PASS: TestOffline (57.50s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-589824
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-589824: exit status 85 (71.85747ms)

                                                
                                                
-- stdout --
	* Profile "addons-589824" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-589824"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-589824
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-589824: exit status 85 (72.378023ms)

-- stdout --
	* Profile "addons-589824" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-589824"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (152.56s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-589824 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-589824 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m32.557401381s)
--- PASS: TestAddons/Setup (152.56s)

TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-589824 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-589824 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestAddons/serial/GCPAuth/FakeCredentials (8.44s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-589824 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-589824 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [dc1de28b-fce1-4ef6-a84d-5048ef8d2018] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [dc1de28b-fce1-4ef6-a84d-5048ef8d2018] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.004129571s
addons_test.go:694: (dbg) Run:  kubectl --context addons-589824 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-589824 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-589824 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.44s)
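
Note: the gcp-auth addon injects the fake credentials through a mutating admission webhook, which is what the printenv calls above verify. A compact spot-check reusing the pod and context from this test (sketch, assuming the addons-589824 profile is still up):

	kubectl --context addons-589824 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS GOOGLE_CLOUD_PROJECT"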

TestAddons/StoppedEnableDisable (16.75s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-589824
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-589824: (16.433804484s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-589824
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-589824
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-589824
--- PASS: TestAddons/StoppedEnableDisable (16.75s)

TestCertOptions (34.07s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-638768 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-638768 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (25.202334684s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-638768 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-638768 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-638768 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-638768" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-638768
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-638768: (7.999075804s)
--- PASS: TestCertOptions (34.07s)
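
Note: the assertions above parse the raw openssl x509 dump for the extra SANs and the non-default port. A minimal sketch of the same checks, assuming the cert-options-638768 profile were still running (the cleanup step above deletes it):

	# The --apiserver-ips/--apiserver-names values should appear in the SAN block.
	out/minikube-linux-amd64 -p cert-options-638768 ssh -- "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 "Subject Alternative Name"
	# The custom --apiserver-port=8555 should show up in the kubeconfig server URL.
	kubectl --context cert-options-638768 config view --minify | grep server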

TestCertExpiration (217.3s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-368442 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-368442 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (27.980220722s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-368442 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-368442 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (6.698687402s)
helpers_test.go:175: Cleaning up "cert-expiration-368442" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-368442
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-368442: (2.619988429s)
--- PASS: TestCertExpiration (217.30s)
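
Note: most of the 217s wall time is spent waiting out the 3m certificate lifetime so that the second start is forced to renew the certs. The same flow outside the harness, using the flags shown above (sketch):

	out/minikube-linux-amd64 start -p cert-expiration-368442 --memory=3072 --cert-expiration=3m --driver=docker --container-runtime=crio
	sleep 180   # let the 3-minute certs lapse
	out/minikube-linux-amd64 start -p cert-expiration-368442 --memory=3072 --cert-expiration=8760h --driver=docker --container-runtime=crio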

TestForceSystemdFlag (40.89s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-422872 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-422872 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (36.20006453s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-422872 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-422872" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-422872
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-422872: (4.341416596s)
--- PASS: TestForceSystemdFlag (40.89s)
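
Note: catting /etc/crio/crio.conf.d/02-crio.conf is how the test confirms --force-systemd switched CRI-O's cgroup manager. A narrower check (sketch; expects cgroup_manager = "systemd" when the flag, or MINIKUBE_FORCE_SYSTEMD=true as in TestForceSystemdEnv below, is in effect):

	out/minikube-linux-amd64 -p force-systemd-flag-422872 ssh -- sudo grep cgroup_manager /etc/crio/crio.conf.d/02-crio.conf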

TestForceSystemdEnv (34.21s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-282715 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-282715 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (31.286438501s)
helpers_test.go:175: Cleaning up "force-systemd-env-282715" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-282715
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-282715: (2.91824877s)
--- PASS: TestForceSystemdEnv (34.21s)

TestErrorSpam/setup (21.39s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-203730 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-203730 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-203730 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-203730 --driver=docker  --container-runtime=crio: (21.392095278s)
--- PASS: TestErrorSpam/setup (21.39s)

TestErrorSpam/start (0.72s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-203730 --log_dir /tmp/nospam-203730 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-203730 --log_dir /tmp/nospam-203730 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-203730 --log_dir /tmp/nospam-203730 start --dry-run
--- PASS: TestErrorSpam/start (0.72s)

TestErrorSpam/status (1.03s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-203730 --log_dir /tmp/nospam-203730 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-203730 --log_dir /tmp/nospam-203730 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-203730 --log_dir /tmp/nospam-203730 status
--- PASS: TestErrorSpam/status (1.03s)

TestErrorSpam/pause (6.95s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-203730 --log_dir /tmp/nospam-203730 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-203730 --log_dir /tmp/nospam-203730 pause: exit status 80 (2.376997664s)

-- stdout --
	* Pausing node nospam-203730 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T19:02:26Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-203730 --log_dir /tmp/nospam-203730 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-203730 --log_dir /tmp/nospam-203730 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-203730 --log_dir /tmp/nospam-203730 pause: exit status 80 (2.200985419s)

-- stdout --
	* Pausing node nospam-203730 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T19:02:28Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-203730 --log_dir /tmp/nospam-203730 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-203730 --log_dir /tmp/nospam-203730 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-203730 --log_dir /tmp/nospam-203730 pause: exit status 80 (2.375633239s)

-- stdout --
	* Pausing node nospam-203730 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T19:02:30Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-203730 --log_dir /tmp/nospam-203730 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (6.95s)
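
Note: the subtest still passes, which suggests the three GUEST_PAUSE failures are logged via error_spam_test.go rather than asserted on; TestErrorSpam is checking for unexpected log spam, not for pause succeeding. The underlying error, open /run/runc: no such file or directory, means runc's state directory is missing inside the node. A hedged diagnostic sketch, reproducing the exact call minikube's pause path makes:

	out/minikube-linux-amd64 -p nospam-203730 ssh -- sudo runc list -f json   # the call that fails above
	out/minikube-linux-amd64 -p nospam-203730 ssh -- sudo ls /run/runc       # the state dir runc consults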

TestErrorSpam/unpause (5.65s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-203730 --log_dir /tmp/nospam-203730 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-203730 --log_dir /tmp/nospam-203730 unpause: exit status 80 (1.774969796s)

-- stdout --
	* Unpausing node nospam-203730 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T19:02:32Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-203730 --log_dir /tmp/nospam-203730 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-203730 --log_dir /tmp/nospam-203730 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-203730 --log_dir /tmp/nospam-203730 unpause: exit status 80 (2.163430499s)

-- stdout --
	* Unpausing node nospam-203730 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T19:02:34Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-203730 --log_dir /tmp/nospam-203730 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-203730 --log_dir /tmp/nospam-203730 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-203730 --log_dir /tmp/nospam-203730 unpause: exit status 80 (1.709648039s)

-- stdout --
	* Unpausing node nospam-203730 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-27T19:02:36Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-203730 --log_dir /tmp/nospam-203730 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.65s)

TestErrorSpam/stop (2.72s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-203730 --log_dir /tmp/nospam-203730 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-203730 --log_dir /tmp/nospam-203730 stop: (2.491108608s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-203730 --log_dir /tmp/nospam-203730 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-203730 --log_dir /tmp/nospam-203730 stop
--- PASS: TestErrorSpam/stop (2.72s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21801-352833/.minikube/files/etc/test/nested/copy/356415/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (37.55s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-051715 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-051715 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (37.550223357s)
--- PASS: TestFunctional/serial/StartWithProxy (37.55s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (6.46s)

=== RUN   TestFunctional/serial/SoftStart
I1027 19:03:21.844338  356415 config.go:182] Loaded profile config "functional-051715": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-051715 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-051715 --alsologtostderr -v=8: (6.455355486s)
functional_test.go:678: soft start took 6.456147572s for "functional-051715" cluster.
I1027 19:03:28.300063  356415 config.go:182] Loaded profile config "functional-051715": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (6.46s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-051715 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-051715 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-051715 cache add registry.k8s.io/pause:3.1: (1.011601049s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-051715 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-051715 cache add registry.k8s.io/pause:3.3: (1.104880546s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-051715 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.11s)

TestFunctional/serial/CacheCmd/cache/add_local (1.22s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-051715 /tmp/TestFunctionalserialCacheCmdcacheadd_local867408079/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-051715 cache add minikube-local-cache-test:functional-051715
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-051715 cache delete minikube-local-cache-test:functional-051715
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-051715
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.22s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

TestFunctional/serial/CacheCmd/cache/list (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-051715 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.72s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-051715 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-051715 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-051715 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (303.684467ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-051715 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-051715 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.72s)
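
Note: this sequence is the whole contract of cache reload: remove the image on the node, confirm crictl inspecti now fails, reload the local cache into the node, and confirm the image is back. Condensed, with the same commands:

	out/minikube-linux-amd64 -p functional-051715 ssh sudo crictl rmi registry.k8s.io/pause:latest
	out/minikube-linux-amd64 -p functional-051715 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 1: image gone
	out/minikube-linux-amd64 -p functional-051715 cache reload
	out/minikube-linux-amd64 -p functional-051715 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 0: image restored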

TestFunctional/serial/CacheCmd/cache/delete (0.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

TestFunctional/serial/MinikubeKubectlCmd (0.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-051715 kubectl -- --context functional-051715 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-051715 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

TestFunctional/serial/ExtraConfig (42.13s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-051715 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1027 19:04:01.062753  356415 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/addons-589824/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:04:01.069232  356415 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/addons-589824/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:04:01.080685  356415 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/addons-589824/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:04:01.102167  356415 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/addons-589824/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:04:01.143635  356415 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/addons-589824/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:04:01.225256  356415 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/addons-589824/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:04:01.386862  356415 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/addons-589824/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:04:01.709084  356415 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/addons-589824/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:04:02.350953  356415 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/addons-589824/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:04:03.632751  356415 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/addons-589824/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:04:06.195736  356415 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/addons-589824/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:04:11.317549  356415 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/addons-589824/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-051715 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (42.127381616s)
functional_test.go:776: restart took 42.127529525s for "functional-051715" cluster.
I1027 19:04:17.422758  356415 config.go:182] Loaded profile config "functional-051715": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (42.13s)
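
Note: the interleaved cert_rotation errors appear to be leftover client-cert watcher noise from the earlier addons-589824 profile and are unrelated to this test. The --extra-config=apiserver.enable-admission-plugins value is threaded into the kube-apiserver static pod on restart; one hedged way to confirm it landed (the jsonpath expression is illustrative, not from the test):

	kubectl --context functional-051715 -n kube-system get pod -l component=kube-apiserver -o jsonpath='{.items[0].spec.containers[0].command}' | tr ',' '\n' | grep enable-admission-plugins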

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-051715 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
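
Note: ComponentHealth parses get po -l tier=control-plane -o=json and reports each control-plane pod's phase and Ready status, as listed above. A compact jsonpath equivalent (sketch; not the expression the test itself uses):

	kubectl --context functional-051715 -n kube-system get po -l tier=control-plane -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'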

TestFunctional/serial/LogsCmd (1.34s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-051715 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-051715 logs: (1.340644783s)
--- PASS: TestFunctional/serial/LogsCmd (1.34s)

TestFunctional/serial/LogsFileCmd (1.35s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-051715 logs --file /tmp/TestFunctionalserialLogsFileCmd2085604672/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-051715 logs --file /tmp/TestFunctionalserialLogsFileCmd2085604672/001/logs.txt: (1.345756531s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.35s)

TestFunctional/serial/InvalidService (3.85s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-051715 apply -f testdata/invalidsvc.yaml
E1027 19:04:21.559643  356415 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/addons-589824/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-051715
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-051715: exit status 115 (375.741915ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:30810 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-051715 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.85s)
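
Note: exit status 115 (SVC_UNREACHABLE) is the expected outcome here: the Service object exists and is allocated NodePort 30810, but no running pod ever backs it, so minikube service refuses to open it. The same condition is visible directly (sketch):

	kubectl --context functional-051715 get svc invalid-svc        # NodePort allocated
	kubectl --context functional-051715 get endpoints invalid-svc  # endpoints stay empty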

TestFunctional/parallel/ConfigCmd (0.47s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-051715 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-051715 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-051715 config get cpus: exit status 14 (87.796589ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-051715 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-051715 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-051715 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-051715 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-051715 config get cpus: exit status 14 (82.464346ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.47s)

TestFunctional/parallel/DashboardCmd (8.35s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-051715 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-051715 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 394499: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.35s)

TestFunctional/parallel/DryRun (0.43s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-051715 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-051715 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (199.880506ms)

-- stdout --
	* [functional-051715] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21801
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21801-352833/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21801-352833/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1027 19:04:50.045896  394774 out.go:360] Setting OutFile to fd 1 ...
	I1027 19:04:50.046061  394774 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:04:50.046072  394774 out.go:374] Setting ErrFile to fd 2...
	I1027 19:04:50.046076  394774 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:04:50.046410  394774 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-352833/.minikube/bin
	I1027 19:04:50.046970  394774 out.go:368] Setting JSON to false
	I1027 19:04:50.048249  394774 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6439,"bootTime":1761585451,"procs":242,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1027 19:04:50.048358  394774 start.go:141] virtualization: kvm guest
	I1027 19:04:50.050884  394774 out.go:179] * [functional-051715] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1027 19:04:50.052605  394774 out.go:179]   - MINIKUBE_LOCATION=21801
	I1027 19:04:50.052621  394774 notify.go:220] Checking for updates...
	I1027 19:04:50.055168  394774 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 19:04:50.056766  394774 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21801-352833/kubeconfig
	I1027 19:04:50.058123  394774 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21801-352833/.minikube
	I1027 19:04:50.059509  394774 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1027 19:04:50.060763  394774 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 19:04:50.062594  394774 config.go:182] Loaded profile config "functional-051715": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:04:50.063365  394774 driver.go:421] Setting default libvirt URI to qemu:///system
	I1027 19:04:50.092034  394774 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1027 19:04:50.092163  394774 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 19:04:50.168698  394774 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-10-27 19:04:50.156530031 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 19:04:50.168829  394774 docker.go:318] overlay module found
	I1027 19:04:50.170759  394774 out.go:179] * Using the docker driver based on existing profile
	I1027 19:04:50.172057  394774 start.go:305] selected driver: docker
	I1027 19:04:50.172080  394774 start.go:925] validating driver "docker" against &{Name:functional-051715 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-051715 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 19:04:50.172239  394774 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 19:04:50.174499  394774 out.go:203] 
	W1027 19:04:50.176117  394774 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1027 19:04:50.177490  394774 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-051715 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.43s)
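
Note: --dry-run validates flags and client-side configuration against the existing profile without creating or modifying any containers. Both outcomes exercised above can be reproduced directly; the commands below are the same ones the test ran:

    # valid flags: exits 0, nothing is created
    out/minikube-linux-amd64 start -p functional-051715 --dry-run --alsologtostderr -v=1 --driver=docker --container-runtime=crio
    # memory below the 1800MB usable minimum: exits non-zero with RSRC_INSUFFICIENT_REQ_MEMORY
    out/minikube-linux-amd64 start -p functional-051715 --dry-run --memory 250MB --alsologtostderr --driver=docker --container-runtime=crio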

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-051715 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-051715 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (188.308018ms)

                                                
                                                
-- stdout --
	* [functional-051715] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21801
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21801-352833/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21801-352833/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1027 19:04:25.975658  389511 out.go:360] Setting OutFile to fd 1 ...
	I1027 19:04:25.975761  389511 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:04:25.975769  389511 out.go:374] Setting ErrFile to fd 2...
	I1027 19:04:25.975773  389511 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:04:25.976095  389511 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-352833/.minikube/bin
	I1027 19:04:25.976637  389511 out.go:368] Setting JSON to false
	I1027 19:04:25.977744  389511 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6415,"bootTime":1761585451,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1027 19:04:25.977845  389511 start.go:141] virtualization: kvm guest
	I1027 19:04:25.980028  389511 out.go:179] * [functional-051715] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1027 19:04:25.981539  389511 out.go:179]   - MINIKUBE_LOCATION=21801
	I1027 19:04:25.981559  389511 notify.go:220] Checking for updates...
	I1027 19:04:25.984286  389511 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 19:04:25.985686  389511 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21801-352833/kubeconfig
	I1027 19:04:25.986982  389511 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21801-352833/.minikube
	I1027 19:04:25.988395  389511 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1027 19:04:25.989753  389511 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 19:04:25.991385  389511 config.go:182] Loaded profile config "functional-051715": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:04:25.991928  389511 driver.go:421] Setting default libvirt URI to qemu:///system
	I1027 19:04:26.017609  389511 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1027 19:04:26.017747  389511 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 19:04:26.082419  389511 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-10-27 19:04:26.070069018 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 19:04:26.082524  389511 docker.go:318] overlay module found
	I1027 19:04:26.084746  389511 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1027 19:04:26.086106  389511 start.go:305] selected driver: docker
	I1027 19:04:26.086128  389511 start.go:925] validating driver "docker" against &{Name:functional-051715 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-051715 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1027 19:04:26.086247  389511 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 19:04:26.088439  389511 out.go:203] 
	W1027 19:04:26.090123  389511 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1027 19:04:26.091810  389511 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.19s)
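
Note: the French output above reads, in English, "Using the docker driver based on the existing profile" and "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB". The messages come from minikube's translation catalog; a minimal sketch for reproducing the localized run, assuming locale selection via the standard LC_ALL/LANG environment variables:

    # run the same dry-run under a French locale; output should match the log above
    LC_ALL=fr out/minikube-linux-amd64 start -p functional-051715 --dry-run --memory 250MB --alsologtostderr --driver=docker --container-runtime=crio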

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-051715 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-051715 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-051715 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.32s)
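
Note: status supports plain, Go-template (-f), and JSON (-o json) output, all three exercised above. The "kublet" label in the test's format string is arbitrary text; only the {{.Kubelet}} field name has to match the status struct. For example:

    # labels before each {{...}} are free-form; the field names are fixed
    out/minikube-linux-amd64 -p functional-051715 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
    out/minikube-linux-amd64 -p functional-051715 status -o json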

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-051715 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-051715 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.17s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (27.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [07b25993-d66f-4340-b1d8-d78f5a2d932e] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003612499s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-051715 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-051715 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-051715 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-051715 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [ae7632ad-1931-44a9-8722-647e1211f50b] Pending
helpers_test.go:352: "sp-pod" [ae7632ad-1931-44a9-8722-647e1211f50b] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [ae7632ad-1931-44a9-8722-647e1211f50b] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 9.003275486s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-051715 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-051715 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-051715 apply -f testdata/storage-provisioner/pod.yaml
I1027 19:04:45.232074  356415 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [8688089e-ff62-4626-ac9d-4eda10f2755b] Pending
helpers_test.go:352: "sp-pod" [8688089e-ff62-4626-ac9d-4eda10f2755b] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [8688089e-ff62-4626-ac9d-4eda10f2755b] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.003849829s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-051715 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (27.56s)
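
Note: the run verifies that data written through the PVC survives pod replacement: sp-pod touches /tmp/mount/foo, the pod is deleted and recreated from the same manifest, and the file is still listed. The equivalent manual check, using the same repo manifests (their contents are not reproduced here):

    kubectl --context functional-051715 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-051715 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-051715 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-051715 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-051715 apply -f testdata/storage-provisioner/pod.yaml
    # foo persists because /tmp/mount is backed by the PVC, not by the pod's filesystem
    kubectl --context functional-051715 exec sp-pod -- ls /tmp/mount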

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-051715 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-051715 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.56s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-051715 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-051715 ssh -n functional-051715 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-051715 cp functional-051715:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3857168684/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-051715 ssh -n functional-051715 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-051715 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-051715 ssh -n functional-051715 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.88s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (17.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-051715 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-bdflz" [4980654e-b2b0-4e99-a523-a93f718f487f] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-bdflz" [4980654e-b2b0-4e99-a523-a93f718f487f] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 14.003965763s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-051715 exec mysql-5bb876957f-bdflz -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-051715 exec mysql-5bb876957f-bdflz -- mysql -ppassword -e "show databases;": exit status 1 (99.458365ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1027 19:04:39.115640  356415 retry.go:31] will retry after 1.145550035s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-051715 exec mysql-5bb876957f-bdflz -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-051715 exec mysql-5bb876957f-bdflz -- mysql -ppassword -e "show databases;": exit status 1 (90.514839ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1027 19:04:40.352396  356415 retry.go:31] will retry after 2.149694838s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-051715 exec mysql-5bb876957f-bdflz -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (17.78s)
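
Note: the two ERROR 2002 exits are expected; the pod reports Running before mysqld has finished creating its socket, so the harness retries with increasing backoff (1.1s, then 2.1s) until the query succeeds. A shell equivalent of that polling loop, with the pod name taken from this run:

    # poll until mysqld accepts connections on its socket
    until kubectl --context functional-051715 exec mysql-5bb876957f-bdflz -- \
          mysql -ppassword -e 'show databases;' 2>/dev/null; do
        sleep 2
    done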

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/356415/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-051715 ssh "sudo cat /etc/test/nested/copy/356415/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.32s)
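
Note: FileSync verifies that files staged under the minikube home's files/ tree are copied into the node at the same absolute path (356415 is the test process PID, used to keep the path unique per run). A sketch, assuming the documented $MINIKUBE_HOME/files sync mechanism, which copies files in when the node starts:

    # $MINIKUBE_HOME/files/<path> is synced to /<path> in the node on start
    mkdir -p "$MINIKUBE_HOME/files/etc/test/nested/copy/356415"
    echo 'Test file for checking file sync process' > "$MINIKUBE_HOME/files/etc/test/nested/copy/356415/hosts"
    out/minikube-linux-amd64 -p functional-051715 ssh "sudo cat /etc/test/nested/copy/356415/hosts"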

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.94s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/356415.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-051715 ssh "sudo cat /etc/ssl/certs/356415.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/356415.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-051715 ssh "sudo cat /usr/share/ca-certificates/356415.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-051715 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3564152.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-051715 ssh "sudo cat /etc/ssl/certs/3564152.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/3564152.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-051715 ssh "sudo cat /usr/share/ca-certificates/3564152.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-051715 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.94s)
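
Note: each certificate is checked both under its .pem name and under an OpenSSL subject-hash name in /etc/ssl/certs (51391683.0 and 3ec20f2e.0 above). The hash half of that filename can be recomputed from the certificate itself; a sketch, assuming openssl is available inside the node image:

    # prints the subject hash, e.g. 51391683, which names the /etc/ssl/certs/<hash>.0 entry
    out/minikube-linux-amd64 -p functional-051715 ssh \
        "sudo openssl x509 -noout -subject_hash -in /usr/share/ca-certificates/356415.pem"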

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-051715 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)
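
Note: the go-template above ranges over the first node's label map and prints the keys. A jsonpath variant that dumps the whole label map (its output formatting differs from the template version):

    kubectl --context functional-051715 get nodes -o jsonpath='{.items[0].metadata.labels}'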

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-051715 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-051715 ssh "sudo systemctl is-active docker": exit status 1 (371.045096ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-051715 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-051715 ssh "sudo systemctl is-active containerd": exit status 1 (323.742162ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.70s)
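
Note: the non-zero exits here are the assertion, not a failure: with crio as the active runtime, `systemctl is-active docker` prints "inactive" and exits 3, and minikube ssh surfaces the remote command's failure as its own exit status 1. For example:

    out/minikube-linux-amd64 -p functional-051715 ssh "sudo systemctl is-active docker"
    echo $?   # 1 here, because the remote systemctl exited 3 (inactive)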

                                                
                                    
x
+
TestFunctional/parallel/License (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.47s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-051715 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-051715 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-051715 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.63s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (9.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-051715 /tmp/TestFunctionalparallelMountCmdany-port1342137203/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1761591866437206783" to /tmp/TestFunctionalparallelMountCmdany-port1342137203/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1761591866437206783" to /tmp/TestFunctionalparallelMountCmdany-port1342137203/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1761591866437206783" to /tmp/TestFunctionalparallelMountCmdany-port1342137203/001/test-1761591866437206783
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-051715 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-051715 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (402.186549ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1027 19:04:26.839958  356415 retry.go:31] will retry after 597.884547ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-051715 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-051715 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 27 19:04 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 27 19:04 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 27 19:04 test-1761591866437206783
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-051715 ssh cat /mount-9p/test-1761591866437206783
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-051715 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [e0e8a199-81b9-49b0-b49b-985787a470d0] Pending
helpers_test.go:352: "busybox-mount" [e0e8a199-81b9-49b0-b49b-985787a470d0] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [e0e8a199-81b9-49b0-b49b-985787a470d0] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [e0e8a199-81b9-49b0-b49b-985787a470d0] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.003502013s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-051715 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-051715 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-051715 ssh stat /mount-9p/created-by-pod
I1027 19:04:35.174281  356415 detect.go:223] nested VM detected
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-051715 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-051715 /tmp/TestFunctionalparallelMountCmdany-port1342137203/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.38s)
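
Note: minikube mount runs as a long-lived process that serves the host directory into the node over 9p, which is why the first findmnt probe fails and is retried about 0.6s later, once the mount has appeared. A minimal reproduction (the host path is illustrative):

    # serve a host directory at /mount-9p in the node; runs until killed
    out/minikube-linux-amd64 mount -p functional-051715 /tmp/hostdir:/mount-9p &
    out/minikube-linux-amd64 -p functional-051715 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-amd64 -p functional-051715 ssh -- ls -la /mount-9p
    kill %1   # stopping the daemon also removes the guest mount (see specific-port below)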

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "431.518832ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "79.301424ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.51s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "432.967512ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "80.50529ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.51s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-051715 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-051715 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.52s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-051715 /tmp/TestFunctionalparallelMountCmdspecific-port223497343/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-051715 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-051715 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (304.339985ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1027 19:04:36.122294  356415 retry.go:31] will retry after 320.931491ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-051715 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-051715 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-051715 /tmp/TestFunctionalparallelMountCmdspecific-port223497343/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-051715 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-051715 ssh "sudo umount -f /mount-9p": exit status 1 (355.553265ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-051715 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-051715 /tmp/TestFunctionalparallelMountCmdspecific-port223497343/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.89s)
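
Note: this is the same workflow pinned to a fixed 9p server port. The final "not mounted" error confirms cleanup: by the time the explicit umount -f runs, stopping the mount process has already removed the guest mount. The only change from the any-port variant is the flag:

    out/minikube-linux-amd64 mount -p functional-051715 /tmp/hostdir:/mount-9p --port 46464 &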

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-051715 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3506622752/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-051715 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3506622752/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-051715 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3506622752/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-051715 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-051715 ssh "findmnt -T" /mount1: exit status 1 (424.658287ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1027 19:04:38.133236  356415 retry.go:31] will retry after 436.276338ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-051715 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-051715 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-051715 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-051715 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-051715 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3506622752/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-051715 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3506622752/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-051715 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3506622752/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.80s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-051715 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-051715 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-051715 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-051715 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 393892: os: process already finished
helpers_test.go:525: unable to kill pid 393694: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.53s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-051715 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-051715 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [e7c0be56-6659-4e65-bd0a-3a24ad62fc1b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
E1027 19:04:42.041151  356415 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/addons-589824/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "nginx-svc" [e7c0be56-6659-4e65-bd0a-3a24ad62fc1b] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.005002266s
I1027 19:04:49.800551  356415 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.21s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-051715 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.98.98.185 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
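
Note: minikube tunnel is what gives the LoadBalancer service a reachable ingress IP (10.98.98.185 for nginx-svc in this run); AccessDirect then hits that IP straight from the host. The pieces, as exercised across these serial steps:

    # 1. keep a tunnel running in the background (may prompt for sudo to add routes)
    out/minikube-linux-amd64 -p functional-051715 tunnel --alsologtostderr &
    # 2. read the assigned ingress IP once the service is provisioned
    kubectl --context functional-051715 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
    # 3. the service answers directly at that IP (value from this run)
    curl -s http://10.98.98.185/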

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-051715 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-051715 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-051715 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-051715 image ls --format short --alsologtostderr:
I1027 19:04:57.391852  396432 out.go:360] Setting OutFile to fd 1 ...
I1027 19:04:57.391978  396432 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1027 19:04:57.391987  396432 out.go:374] Setting ErrFile to fd 2...
I1027 19:04:57.391990  396432 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1027 19:04:57.392211  396432 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-352833/.minikube/bin
I1027 19:04:57.392854  396432 config.go:182] Loaded profile config "functional-051715": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1027 19:04:57.392973  396432 config.go:182] Loaded profile config "functional-051715": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1027 19:04:57.393405  396432 cli_runner.go:164] Run: docker container inspect functional-051715 --format={{.State.Status}}
I1027 19:04:57.413767  396432 ssh_runner.go:195] Run: systemctl --version
I1027 19:04:57.413832  396432 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-051715
I1027 19:04:57.433033  396432 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33150 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/functional-051715/id_rsa Username:docker}
I1027 19:04:57.538299  396432 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)
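
Note: image ls renders the same listing in several formats (short here, table and json in the next two sections). On a crio node the data is gathered via `sudo crictl images --output json`, as the stderr trace above shows. For example:

    out/minikube-linux-amd64 -p functional-051715 image ls --format short
    out/minikube-linux-amd64 -p functional-051715 image ls --format table
    out/minikube-linux-amd64 -p functional-051715 image ls --format json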

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-051715 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-051715 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ docker.io/library/nginx                 │ alpine             │ 5e7abcdd20216 │ 54.2MB │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ fc25172553d79 │ 73.1MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ docker.io/library/nginx                 │ latest             │ 657fdcd1c3659 │ 155MB  │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ c3994bc696102 │ 89MB   │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ 7dd6aaa1717ab │ 53.8MB │
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ c80c8dbafe7dd │ 76MB   │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-051715 image ls --format table --alsologtostderr:
I1027 19:04:58.131296  396781 out.go:360] Setting OutFile to fd 1 ...
I1027 19:04:58.131404  396781 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1027 19:04:58.131412  396781 out.go:374] Setting ErrFile to fd 2...
I1027 19:04:58.131416  396781 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1027 19:04:58.131634  396781 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-352833/.minikube/bin
I1027 19:04:58.132306  396781 config.go:182] Loaded profile config "functional-051715": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1027 19:04:58.132405  396781 config.go:182] Loaded profile config "functional-051715": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1027 19:04:58.132784  396781 cli_runner.go:164] Run: docker container inspect functional-051715 --format={{.State.Status}}
I1027 19:04:58.152532  396781 ssh_runner.go:195] Run: systemctl --version
I1027 19:04:58.152594  396781 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-051715
I1027 19:04:58.171722  396781 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33150 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/functional-051715/id_rsa Username:docker}
I1027 19:04:58.273284  396781 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-051715 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-051715 image ls --format json --alsologtostderr:
[{"id":"5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5","repoDigests":["docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22","docker.io/library/nginx@sha256:b03ccb7431a2e3172f5cbae96d82bd792935f33ecb88fbf2940559e475745c4e"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54168570"},{"id":"657fdcd1c3659cf57cfaa13f40842e0a26b49ec9654d48fdefee9fc8259b4aab","repoDigests":["docker.io/library/nginx@sha256:029d4461bd98f124e531380505ceea2072418fdf28752aa73b7b273ba3048903","docker.io/library/nginx@sha256:7e034cabf67d95246a996a3b92ad1c49c20d81526c9d7ba982aead057a0606e8"],"repoTags":["docker.io/library/nginx:latest"],"size":"155467611"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a1
41c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},{"id":"c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964","registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"89046001"},{"id"
:"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89","registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"76004181"},{"id":"fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a
1ea4f622b74ff749a","registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"73138073"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279
c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31","registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"53844823"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause
:3.10.1"],"size":"742092"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-051715 image ls --format json --alsologtostderr:
I1027 19:04:57.881569  396697 out.go:360] Setting OutFile to fd 1 ...
I1027 19:04:57.881897  396697 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1027 19:04:57.881913  396697 out.go:374] Setting ErrFile to fd 2...
I1027 19:04:57.881919  396697 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1027 19:04:57.882209  396697 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-352833/.minikube/bin
I1027 19:04:57.883025  396697 config.go:182] Loaded profile config "functional-051715": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1027 19:04:57.883174  396697 config.go:182] Loaded profile config "functional-051715": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1027 19:04:57.883606  396697 cli_runner.go:164] Run: docker container inspect functional-051715 --format={{.State.Status}}
I1027 19:04:57.907693  396697 ssh_runner.go:195] Run: systemctl --version
I1027 19:04:57.907760  396697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-051715
I1027 19:04:57.929281  396697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33150 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/functional-051715/id_rsa Username:docker}
I1027 19:04:58.032303  396697 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)
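
For reference, the JSON that "image ls --format json" prints above is a flat array of image records. A minimal Go sketch that decodes it (the struct is inferred from the log output above, not minikube's own type definitions):

// parse_images.go - decode the output of
//   out/minikube-linux-amd64 -p functional-051715 image ls --format json
// Field names mirror the JSON keys visible in the log; "size" is a byte
// count serialized as a string.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	var images []image
	if err := json.NewDecoder(os.Stdin).Decode(&images); err != nil {
		fmt.Fprintln(os.Stderr, "decode:", err)
		os.Exit(1)
	}
	for _, img := range images {
		tag := "<none>"
		if len(img.RepoTags) > 0 {
			tag = img.RepoTags[0]
		}
		fmt.Printf("%-55s %s bytes\n", tag, img.Size)
	}
}

Piping the command above into "go run parse_images.go" prints one tag-and-size line per image, including the untagged dashboard and metrics-scraper entries (empty repoTags).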

TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-051715 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-051715 image ls --format yaml --alsologtostderr:
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
- registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "76004181"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5
repoDigests:
- docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22
- docker.io/library/nginx@sha256:b03ccb7431a2e3172f5cbae96d82bd792935f33ecb88fbf2940559e475745c4e
repoTags:
- docker.io/library/nginx:alpine
size: "54168570"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "89046001"
- id: fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
- registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "73138073"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 657fdcd1c3659cf57cfaa13f40842e0a26b49ec9654d48fdefee9fc8259b4aab
repoDigests:
- docker.io/library/nginx@sha256:029d4461bd98f124e531380505ceea2072418fdf28752aa73b7b273ba3048903
- docker.io/library/nginx@sha256:7e034cabf67d95246a996a3b92ad1c49c20d81526c9d7ba982aead057a0606e8
repoTags:
- docker.io/library/nginx:latest
size: "155467611"
- id: 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "53844823"
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-051715 image ls --format yaml --alsologtostderr:
I1027 19:04:57.634722  396595 out.go:360] Setting OutFile to fd 1 ...
I1027 19:04:57.635000  396595 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1027 19:04:57.635011  396595 out.go:374] Setting ErrFile to fd 2...
I1027 19:04:57.635017  396595 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1027 19:04:57.635300  396595 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-352833/.minikube/bin
I1027 19:04:57.635937  396595 config.go:182] Loaded profile config "functional-051715": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1027 19:04:57.636055  396595 config.go:182] Loaded profile config "functional-051715": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1027 19:04:57.636493  396595 cli_runner.go:164] Run: docker container inspect functional-051715 --format={{.State.Status}}
I1027 19:04:57.658509  396595 ssh_runner.go:195] Run: systemctl --version
I1027 19:04:57.658581  396595 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-051715
I1027 19:04:57.680021  396595 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33150 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/functional-051715/id_rsa Username:docker}
I1027 19:04:57.782445  396595 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.94s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-051715 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-051715 ssh pgrep buildkitd: exit status 1 (297.357523ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-051715 image build -t localhost/my-image:functional-051715 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-051715 image build -t localhost/my-image:functional-051715 testdata/build --alsologtostderr: (2.403007059s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-051715 image build -t localhost/my-image:functional-051715 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 04162dfa327
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-051715
--> 6eec6e3106d
Successfully tagged localhost/my-image:functional-051715
6eec6e3106d7b237ce2e849e994041ec19c44b32464b1e3ae593b763f3d68b17
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-051715 image build -t localhost/my-image:functional-051715 testdata/build --alsologtostderr:
I1027 19:04:57.712030  396624 out.go:360] Setting OutFile to fd 1 ...
I1027 19:04:57.712349  396624 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1027 19:04:57.712361  396624 out.go:374] Setting ErrFile to fd 2...
I1027 19:04:57.712366  396624 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1027 19:04:57.712642  396624 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-352833/.minikube/bin
I1027 19:04:57.713484  396624 config.go:182] Loaded profile config "functional-051715": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1027 19:04:57.714233  396624 config.go:182] Loaded profile config "functional-051715": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1027 19:04:57.714734  396624 cli_runner.go:164] Run: docker container inspect functional-051715 --format={{.State.Status}}
I1027 19:04:57.734079  396624 ssh_runner.go:195] Run: systemctl --version
I1027 19:04:57.734197  396624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-051715
I1027 19:04:57.752574  396624 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33150 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/functional-051715/id_rsa Username:docker}
I1027 19:04:57.855034  396624 build_images.go:161] Building image from path: /tmp/build.1501277061.tar
I1027 19:04:57.855106  396624 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1027 19:04:57.864589  396624 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1501277061.tar
I1027 19:04:57.869600  396624 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1501277061.tar: stat -c "%s %y" /var/lib/minikube/build/build.1501277061.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1501277061.tar': No such file or directory
I1027 19:04:57.869636  396624 ssh_runner.go:362] scp /tmp/build.1501277061.tar --> /var/lib/minikube/build/build.1501277061.tar (3072 bytes)
I1027 19:04:57.893537  396624 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1501277061
I1027 19:04:57.904768  396624 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1501277061 -xf /var/lib/minikube/build/build.1501277061.tar
I1027 19:04:57.915161  396624 crio.go:315] Building image: /var/lib/minikube/build/build.1501277061
I1027 19:04:57.915254  396624 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-051715 /var/lib/minikube/build/build.1501277061 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1027 19:05:00.028408  396624 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-051715 /var/lib/minikube/build/build.1501277061 --cgroup-manager=cgroupfs: (2.113115624s)
I1027 19:05:00.028502  396624 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1501277061
I1027 19:05:00.037562  396624 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1501277061.tar
I1027 19:05:00.046014  396624 build_images.go:217] Built localhost/my-image:functional-051715 from /tmp/build.1501277061.tar
I1027 19:05:00.046047  396624 build_images.go:133] succeeded building to: functional-051715
I1027 19:05:00.046052  396624 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-051715 image ls
E1027 19:05:23.003401  356415 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/addons-589824/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:06:44.925311  356415 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/addons-589824/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:09:01.063091  356415 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/addons-589824/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:09:28.766980  356415 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/addons-589824/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:14:01.062439  356415 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/addons-589824/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.94s)
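
The stderr log above shows the full crio-path build flow: the build context is tarred locally (/tmp/build.1501277061.tar), copied into the node under /var/lib/minikube/build, unpacked, and built with sudo podman build. A rough Go sketch of the same sequence driven through the minikube CLI (the /tmp staging paths below are hypothetical, and minikube internally uses its own SSH runner rather than these commands):

// build_flow.go - replay the staging sequence from the ImageBuild log:
// tar the context, copy it into the node, unpack, build with podman.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// run executes a command, streaming its output, and aborts on failure.
func run(name string, args ...string) {
	cmd := exec.Command(name, args...)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintf(os.Stderr, "%s %v: %v\n", name, args, err)
		os.Exit(1)
	}
}

func main() {
	const profile = "functional-051715"
	const localTar = "/tmp/build.tar" // hypothetical; the log used /tmp/build.1501277061.tar
	const remoteTar = "/tmp/build.tar" // hypothetical staging path inside the node
	const buildDir = "/tmp/build-ctx"  // hypothetical; the log used /var/lib/minikube/build/...

	// Tar the local build context (testdata/build in the test above).
	run("tar", "-cf", localTar, "-C", "testdata/build", ".")
	// Copy it into the node and unpack it.
	run("minikube", "-p", profile, "cp", localTar, remoteTar)
	run("minikube", "-p", profile, "ssh", "--", "mkdir", "-p", buildDir)
	run("minikube", "-p", profile, "ssh", "--", "tar", "-C", buildDir, "-xf", remoteTar)
	// Build with podman, as the log shows for the crio runtime.
	run("minikube", "-p", profile, "ssh", "--",
		"sudo", "podman", "build", "-t", "localhost/my-image:functional-051715", buildDir)
}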

TestFunctional/parallel/ImageCommands/Setup (1.29s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
2025/10/27 19:04:50 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.267283409s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-051715
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.29s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-051715 image rm kicbase/echo-server:functional-051715 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-051715 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

TestFunctional/parallel/ServiceCmd/List (1.72s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-051715 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-051715 service list: (1.718389869s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.72s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.71s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-051715 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-051715 service list -o json: (1.713782397s)
functional_test.go:1504: Took "1.713874375s" to run "out/minikube-linux-amd64 -p functional-051715 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.71s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-051715
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-051715
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-051715
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (123.26s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-255909 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-255909 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (2m2.476284455s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-255909 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (123.26s)

TestMultiControlPlane/serial/DeployApp (5.56s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-255909 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-255909 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-255909 kubectl -- rollout status deployment/busybox: (3.410924094s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-255909 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-255909 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-255909 kubectl -- exec busybox-7b57f96db7-85m7g -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-255909 kubectl -- exec busybox-7b57f96db7-9qrcb -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-255909 kubectl -- exec busybox-7b57f96db7-qwmrp -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-255909 kubectl -- exec busybox-7b57f96db7-85m7g -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-255909 kubectl -- exec busybox-7b57f96db7-9qrcb -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-255909 kubectl -- exec busybox-7b57f96db7-qwmrp -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-255909 kubectl -- exec busybox-7b57f96db7-85m7g -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-255909 kubectl -- exec busybox-7b57f96db7-9qrcb -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-255909 kubectl -- exec busybox-7b57f96db7-qwmrp -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.56s)

TestMultiControlPlane/serial/PingHostFromPods (1.14s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-255909 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-255909 kubectl -- exec busybox-7b57f96db7-85m7g -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-255909 kubectl -- exec busybox-7b57f96db7-85m7g -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-255909 kubectl -- exec busybox-7b57f96db7-9qrcb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-255909 kubectl -- exec busybox-7b57f96db7-9qrcb -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-255909 kubectl -- exec busybox-7b57f96db7-qwmrp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-255909 kubectl -- exec busybox-7b57f96db7-qwmrp -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.14s)

TestMultiControlPlane/serial/AddWorkerNode (27.68s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-255909 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-255909 node add --alsologtostderr -v 5: (26.723806579s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-255909 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (27.68s)

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-255909 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.95s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.95s)

TestMultiControlPlane/serial/CopyFile (18.76s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-255909 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-255909 cp testdata/cp-test.txt ha-255909:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-255909 ssh -n ha-255909 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-255909 cp ha-255909:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3906232716/001/cp-test_ha-255909.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-255909 ssh -n ha-255909 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-255909 cp ha-255909:/home/docker/cp-test.txt ha-255909-m02:/home/docker/cp-test_ha-255909_ha-255909-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-255909 ssh -n ha-255909 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-255909 ssh -n ha-255909-m02 "sudo cat /home/docker/cp-test_ha-255909_ha-255909-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-255909 cp ha-255909:/home/docker/cp-test.txt ha-255909-m03:/home/docker/cp-test_ha-255909_ha-255909-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-255909 ssh -n ha-255909 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-255909 ssh -n ha-255909-m03 "sudo cat /home/docker/cp-test_ha-255909_ha-255909-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-255909 cp ha-255909:/home/docker/cp-test.txt ha-255909-m04:/home/docker/cp-test_ha-255909_ha-255909-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-255909 ssh -n ha-255909 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-255909 ssh -n ha-255909-m04 "sudo cat /home/docker/cp-test_ha-255909_ha-255909-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-255909 cp testdata/cp-test.txt ha-255909-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-255909 ssh -n ha-255909-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-255909 cp ha-255909-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3906232716/001/cp-test_ha-255909-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-255909 ssh -n ha-255909-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-255909 cp ha-255909-m02:/home/docker/cp-test.txt ha-255909:/home/docker/cp-test_ha-255909-m02_ha-255909.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-255909 ssh -n ha-255909-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-255909 ssh -n ha-255909 "sudo cat /home/docker/cp-test_ha-255909-m02_ha-255909.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-255909 cp ha-255909-m02:/home/docker/cp-test.txt ha-255909-m03:/home/docker/cp-test_ha-255909-m02_ha-255909-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-255909 ssh -n ha-255909-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-255909 ssh -n ha-255909-m03 "sudo cat /home/docker/cp-test_ha-255909-m02_ha-255909-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-255909 cp ha-255909-m02:/home/docker/cp-test.txt ha-255909-m04:/home/docker/cp-test_ha-255909-m02_ha-255909-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-255909 ssh -n ha-255909-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-255909 ssh -n ha-255909-m04 "sudo cat /home/docker/cp-test_ha-255909-m02_ha-255909-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-255909 cp testdata/cp-test.txt ha-255909-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-255909 ssh -n ha-255909-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-255909 cp ha-255909-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3906232716/001/cp-test_ha-255909-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-255909 ssh -n ha-255909-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-255909 cp ha-255909-m03:/home/docker/cp-test.txt ha-255909:/home/docker/cp-test_ha-255909-m03_ha-255909.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-255909 ssh -n ha-255909-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-255909 ssh -n ha-255909 "sudo cat /home/docker/cp-test_ha-255909-m03_ha-255909.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-255909 cp ha-255909-m03:/home/docker/cp-test.txt ha-255909-m02:/home/docker/cp-test_ha-255909-m03_ha-255909-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-255909 ssh -n ha-255909-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-255909 ssh -n ha-255909-m02 "sudo cat /home/docker/cp-test_ha-255909-m03_ha-255909-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-255909 cp ha-255909-m03:/home/docker/cp-test.txt ha-255909-m04:/home/docker/cp-test_ha-255909-m03_ha-255909-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-255909 ssh -n ha-255909-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-255909 ssh -n ha-255909-m04 "sudo cat /home/docker/cp-test_ha-255909-m03_ha-255909-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-255909 cp testdata/cp-test.txt ha-255909-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-255909 ssh -n ha-255909-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-255909 cp ha-255909-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3906232716/001/cp-test_ha-255909-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-255909 ssh -n ha-255909-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-255909 cp ha-255909-m04:/home/docker/cp-test.txt ha-255909:/home/docker/cp-test_ha-255909-m04_ha-255909.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-255909 ssh -n ha-255909-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-255909 ssh -n ha-255909 "sudo cat /home/docker/cp-test_ha-255909-m04_ha-255909.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-255909 cp ha-255909-m04:/home/docker/cp-test.txt ha-255909-m02:/home/docker/cp-test_ha-255909-m04_ha-255909-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-255909 ssh -n ha-255909-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-255909 ssh -n ha-255909-m02 "sudo cat /home/docker/cp-test_ha-255909-m04_ha-255909-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-255909 cp ha-255909-m04:/home/docker/cp-test.txt ha-255909-m03:/home/docker/cp-test_ha-255909-m04_ha-255909-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-255909 ssh -n ha-255909-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-255909 ssh -n ha-255909-m03 "sudo cat /home/docker/cp-test_ha-255909-m04_ha-255909-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (18.76s)
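
The CopyFile block above walks a full copy matrix: testdata/cp-test.txt is pushed to each node, pulled back to the host, and copied from every node to every other node, with each hop verified by ssh -n "sudo cat". The loop structure is easier to see as a sketch that emits the same commands rather than running them (node names are from this run; the host path is simplified from the per-run temp directory the test used):

// cp_matrix.go - print the copy matrix exercised by CopyFile above.
package main

import "fmt"

func main() {
	nodes := []string{"ha-255909", "ha-255909-m02", "ha-255909-m03", "ha-255909-m04"}
	for _, src := range nodes {
		// Seed the source node, pull the file back to the host,
		// then fan out to every other node.
		fmt.Printf("minikube -p ha-255909 cp testdata/cp-test.txt %s:/home/docker/cp-test.txt\n", src)
		fmt.Printf("minikube -p ha-255909 cp %s:/home/docker/cp-test.txt /tmp/cp-test_%s.txt\n", src, src)
		for _, dst := range nodes {
			if dst == src {
				continue
			}
			fmt.Printf("minikube -p ha-255909 cp %s:/home/docker/cp-test.txt %s:/home/docker/cp-test_%s_%s.txt\n",
				src, dst, src, dst)
		}
	}
}

With four nodes this yields four seeds, four pullbacks, and twelve node-to-node copies, matching the sequence logged above; the sketch omits the "sudo cat" verification after each hop.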

TestMultiControlPlane/serial/StopSecondaryNode (19.96s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-255909 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-255909 node stop m02 --alsologtostderr -v 5: (19.163412383s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-255909 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-255909 status --alsologtostderr -v 5: exit status 7 (797.588474ms)
-- stdout --
	ha-255909
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-255909-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-255909-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-255909-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I1027 19:18:14.172954  421288 out.go:360] Setting OutFile to fd 1 ...
	I1027 19:18:14.173291  421288 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:18:14.173302  421288 out.go:374] Setting ErrFile to fd 2...
	I1027 19:18:14.173307  421288 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:18:14.173565  421288 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-352833/.minikube/bin
	I1027 19:18:14.173760  421288 out.go:368] Setting JSON to false
	I1027 19:18:14.173798  421288 mustload.go:65] Loading cluster: ha-255909
	I1027 19:18:14.173917  421288 notify.go:220] Checking for updates...
	I1027 19:18:14.174254  421288 config.go:182] Loaded profile config "ha-255909": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:18:14.174275  421288 status.go:174] checking status of ha-255909 ...
	I1027 19:18:14.174853  421288 cli_runner.go:164] Run: docker container inspect ha-255909 --format={{.State.Status}}
	I1027 19:18:14.198879  421288 status.go:371] ha-255909 host status = "Running" (err=<nil>)
	I1027 19:18:14.198938  421288 host.go:66] Checking if "ha-255909" exists ...
	I1027 19:18:14.199395  421288 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-255909
	I1027 19:18:14.219965  421288 host.go:66] Checking if "ha-255909" exists ...
	I1027 19:18:14.220377  421288 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 19:18:14.220445  421288 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-255909
	I1027 19:18:14.241656  421288 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33155 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/ha-255909/id_rsa Username:docker}
	I1027 19:18:14.343306  421288 ssh_runner.go:195] Run: systemctl --version
	I1027 19:18:14.350661  421288 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 19:18:14.364973  421288 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 19:18:14.431582  421288 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:71 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-27 19:18:14.419871202 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 19:18:14.432221  421288 kubeconfig.go:125] found "ha-255909" server: "https://192.168.49.254:8443"
	I1027 19:18:14.432253  421288 api_server.go:166] Checking apiserver status ...
	I1027 19:18:14.432292  421288 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 19:18:14.444975  421288 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1235/cgroup
	W1027 19:18:14.454711  421288 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1235/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1027 19:18:14.454773  421288 ssh_runner.go:195] Run: ls
	I1027 19:18:14.459601  421288 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1027 19:18:14.466587  421288 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1027 19:18:14.466624  421288 status.go:463] ha-255909 apiserver status = Running (err=<nil>)
	I1027 19:18:14.466640  421288 status.go:176] ha-255909 status: &{Name:ha-255909 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1027 19:18:14.466663  421288 status.go:174] checking status of ha-255909-m02 ...
	I1027 19:18:14.467020  421288 cli_runner.go:164] Run: docker container inspect ha-255909-m02 --format={{.State.Status}}
	I1027 19:18:14.487463  421288 status.go:371] ha-255909-m02 host status = "Stopped" (err=<nil>)
	I1027 19:18:14.487491  421288 status.go:384] host is not running, skipping remaining checks
	I1027 19:18:14.487500  421288 status.go:176] ha-255909-m02 status: &{Name:ha-255909-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1027 19:18:14.487540  421288 status.go:174] checking status of ha-255909-m03 ...
	I1027 19:18:14.487885  421288 cli_runner.go:164] Run: docker container inspect ha-255909-m03 --format={{.State.Status}}
	I1027 19:18:14.508564  421288 status.go:371] ha-255909-m03 host status = "Running" (err=<nil>)
	I1027 19:18:14.508599  421288 host.go:66] Checking if "ha-255909-m03" exists ...
	I1027 19:18:14.508884  421288 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-255909-m03
	I1027 19:18:14.528062  421288 host.go:66] Checking if "ha-255909-m03" exists ...
	I1027 19:18:14.528387  421288 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 19:18:14.528456  421288 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-255909-m03
	I1027 19:18:14.548884  421288 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33165 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/ha-255909-m03/id_rsa Username:docker}
	I1027 19:18:14.652479  421288 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 19:18:14.667776  421288 kubeconfig.go:125] found "ha-255909" server: "https://192.168.49.254:8443"
	I1027 19:18:14.667808  421288 api_server.go:166] Checking apiserver status ...
	I1027 19:18:14.667852  421288 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 19:18:14.680265  421288 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1163/cgroup
	W1027 19:18:14.689892  421288 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1163/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1027 19:18:14.689954  421288 ssh_runner.go:195] Run: ls
	I1027 19:18:14.694378  421288 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1027 19:18:14.702345  421288 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1027 19:18:14.702388  421288 status.go:463] ha-255909-m03 apiserver status = Running (err=<nil>)
	I1027 19:18:14.702400  421288 status.go:176] ha-255909-m03 status: &{Name:ha-255909-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1027 19:18:14.702424  421288 status.go:174] checking status of ha-255909-m04 ...
	I1027 19:18:14.702767  421288 cli_runner.go:164] Run: docker container inspect ha-255909-m04 --format={{.State.Status}}
	I1027 19:18:14.732818  421288 status.go:371] ha-255909-m04 host status = "Running" (err=<nil>)
	I1027 19:18:14.732845  421288 host.go:66] Checking if "ha-255909-m04" exists ...
	I1027 19:18:14.733174  421288 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-255909-m04
	I1027 19:18:14.754031  421288 host.go:66] Checking if "ha-255909-m04" exists ...
	I1027 19:18:14.754360  421288 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 19:18:14.754450  421288 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-255909-m04
	I1027 19:18:14.773942  421288 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33170 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/ha-255909-m04/id_rsa Username:docker}
	I1027 19:18:14.874310  421288 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 19:18:14.899038  421288 status.go:176] ha-255909-m04 status: &{Name:ha-255909-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (19.96s)
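
The status probe in the stderr block above checks each control-plane node the same way: container state via docker inspect, kubelet via systemctl is-active, then an HTTPS GET against the shared apiserver endpoint https://192.168.49.254:8443/healthz. A minimal sketch of that last step (skipping TLS verification is a shortcut for this sketch only; the real client is configured from the cluster's kubeconfig):

// healthz_probe.go - the apiserver health check visible in the log:
// GET https://192.168.49.254:8443/healthz and expect "200: ok".
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"os"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Sketch-only shortcut; do not skip verification in real code.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.49.254:8443/healthz")
	if err != nil {
		fmt.Fprintln(os.Stderr, "healthz:", err)
		os.Exit(1)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d: %s\n", resp.StatusCode, body)
}

Note the VIP 192.168.49.254 stays reachable in the log even with m02 stopped, which is why both remaining control-plane nodes still report "returned 200: ok".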

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.77s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.77s)

TestMultiControlPlane/serial/RestartSecondaryNode (9.25s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-255909 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-255909 node start m02 --alsologtostderr -v 5: (8.205908976s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-255909 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (9.25s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.97s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.97s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (107.73s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-255909 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-255909 stop --alsologtostderr -v 5
E1027 19:19:01.062391  356415 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/addons-589824/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-255909 stop --alsologtostderr -v 5: (48.168140864s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-255909 start --wait true --alsologtostderr -v 5
E1027 19:19:24.210387  356415 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/functional-051715/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:19:24.216830  356415 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/functional-051715/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:19:24.228300  356415 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/functional-051715/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:19:24.249914  356415 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/functional-051715/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:19:24.291414  356415 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/functional-051715/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:19:24.373004  356415 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/functional-051715/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:19:24.534609  356415 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/functional-051715/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:19:24.856373  356415 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/functional-051715/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:19:25.498280  356415 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/functional-051715/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:19:26.779659  356415 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/functional-051715/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:19:29.342007  356415 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/functional-051715/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:19:34.463976  356415 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/functional-051715/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:19:44.705644  356415 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/functional-051715/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:20:05.187060  356415 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/functional-051715/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-255909 start --wait true --alsologtostderr -v 5: (59.420494993s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-255909 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (107.73s)

TestMultiControlPlane/serial/DeleteSecondaryNode (10.73s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-255909 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-255909 node delete m03 --alsologtostderr -v 5: (9.849103809s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-255909 status --alsologtostderr -v 5
E1027 19:20:24.129099  356415 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/addons-589824/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.73s)
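For reference, the readiness assertion above comes down to one template query; a minimal sketch, assuming kubectl still points at the ha-255909 context:

    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'
    # prints one Ready status per node; after the m03 delete, each surviving node is expected to report True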

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.75s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.75s)

TestMultiControlPlane/serial/StopCluster (43.27s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-255909 stop --alsologtostderr -v 5
E1027 19:20:46.149316  356415 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/functional-051715/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-255909 stop --alsologtostderr -v 5: (43.145704493s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-255909 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-255909 status --alsologtostderr -v 5: exit status 7 (128.200104ms)

-- stdout --
	ha-255909
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-255909-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-255909-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1027 19:21:08.322014  435415 out.go:360] Setting OutFile to fd 1 ...
	I1027 19:21:08.322338  435415 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:21:08.322350  435415 out.go:374] Setting ErrFile to fd 2...
	I1027 19:21:08.322357  435415 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:21:08.322587  435415 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-352833/.minikube/bin
	I1027 19:21:08.322837  435415 out.go:368] Setting JSON to false
	I1027 19:21:08.322881  435415 mustload.go:65] Loading cluster: ha-255909
	I1027 19:21:08.322986  435415 notify.go:220] Checking for updates...
	I1027 19:21:08.323367  435415 config.go:182] Loaded profile config "ha-255909": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:21:08.323392  435415 status.go:174] checking status of ha-255909 ...
	I1027 19:21:08.323875  435415 cli_runner.go:164] Run: docker container inspect ha-255909 --format={{.State.Status}}
	I1027 19:21:08.344540  435415 status.go:371] ha-255909 host status = "Stopped" (err=<nil>)
	I1027 19:21:08.344564  435415 status.go:384] host is not running, skipping remaining checks
	I1027 19:21:08.344570  435415 status.go:176] ha-255909 status: &{Name:ha-255909 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1027 19:21:08.344598  435415 status.go:174] checking status of ha-255909-m02 ...
	I1027 19:21:08.344850  435415 cli_runner.go:164] Run: docker container inspect ha-255909-m02 --format={{.State.Status}}
	I1027 19:21:08.363624  435415 status.go:371] ha-255909-m02 host status = "Stopped" (err=<nil>)
	I1027 19:21:08.363668  435415 status.go:384] host is not running, skipping remaining checks
	I1027 19:21:08.363676  435415 status.go:176] ha-255909-m02 status: &{Name:ha-255909-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1027 19:21:08.363704  435415 status.go:174] checking status of ha-255909-m04 ...
	I1027 19:21:08.363981  435415 cli_runner.go:164] Run: docker container inspect ha-255909-m04 --format={{.State.Status}}
	I1027 19:21:08.383228  435415 status.go:371] ha-255909-m04 host status = "Stopped" (err=<nil>)
	I1027 19:21:08.383299  435415 status.go:384] host is not running, skipping remaining checks
	I1027 19:21:08.383311  435415 status.go:176] ha-255909-m04 status: &{Name:ha-255909-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (43.27s)
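Note that the non-zero exit here is the asserted behavior, not a failure: with every host stopped, status reports cluster state through its exit code. A minimal reproduction against the same profile:

    out/minikube-linux-amd64 -p ha-255909 status
    echo $?   # 7 in the run above, matching the all-Stopped output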

TestMultiControlPlane/serial/RestartCluster (52.48s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-255909 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-255909 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (51.613124162s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-255909 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (52.48s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.74s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.74s)

TestMultiControlPlane/serial/AddSecondaryNode (44.2s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-255909 node add --control-plane --alsologtostderr -v 5
E1027 19:22:08.071633  356415 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/functional-051715/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-255909 node add --control-plane --alsologtostderr -v 5: (43.217804593s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-255909 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (44.20s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.96s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.96s)

TestJSONOutput/start/Command (40.15s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-895727 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-895727 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (40.149051982s)
--- PASS: TestJSONOutput/start/Command (40.15s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (6.12s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-895727 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-895727 --output=json --user=testUser: (6.123412873s)
--- PASS: TestJSONOutput/stop/Command (6.12s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.26s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-676396 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-676396 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (87.314917ms)

-- stdout --
	{"specversion":"1.0","id":"f31ae6be-019f-4fcf-88fe-cffae56f7558","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-676396] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"741efa0a-3bee-4be6-8487-3484a396a784","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21801"}}
	{"specversion":"1.0","id":"e26b0196-e228-421c-aebe-9f280ce1e7a9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"2c623639-c61f-4d66-a10a-86ece43362d2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21801-352833/kubeconfig"}}
	{"specversion":"1.0","id":"39ecfd26-92f1-4740-b506-0ac7186f628c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21801-352833/.minikube"}}
	{"specversion":"1.0","id":"83a76a4e-b1c0-457c-a39f-c8fde876c16f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"00e17f41-ca2b-4d22-81ef-f11643cdba4b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"4ebd968d-ebf1-4d99-9e00-9e2d824942b1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-676396" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-676396
--- PASS: TestErrorJSONOutput (0.26s)
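Because each line of --output=json is a self-contained CloudEvents record, the failure event is easy to isolate by hand; a minimal sketch, assuming only grep over the stdout stream:

    out/minikube-linux-amd64 start -p json-output-error-676396 --memory=3072 --output=json --wait=true --driver=fail \
      | grep '"type":"io.k8s.sigs.minikube.error"'
    # in the run above this matches the DRV_UNSUPPORTED_OS event carrying exitcode 56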

TestKicCustomNetwork/create_custom_network (27.9s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-892467 --network=
E1027 19:24:01.068367  356415 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/addons-589824/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-892467 --network=: (25.65564694s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-892467" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-892467
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-892467: (2.220626118s)
--- PASS: TestKicCustomNetwork/create_custom_network (27.90s)

TestKicCustomNetwork/use_default_bridge_network (25.49s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-960834 --network=bridge
E1027 19:24:24.211293  356415 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/functional-051715/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-960834 --network=bridge: (23.412947528s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-960834" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-960834
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-960834: (2.050708002s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (25.49s)

TestKicExistingNetwork (24.11s)

=== RUN   TestKicExistingNetwork
I1027 19:24:44.444427  356415 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1027 19:24:44.462404  356415 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1027 19:24:44.462504  356415 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1027 19:24:44.462526  356415 cli_runner.go:164] Run: docker network inspect existing-network
W1027 19:24:44.480305  356415 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1027 19:24:44.480341  356415 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1027 19:24:44.480365  356415 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1027 19:24:44.480571  356415 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1027 19:24:44.499570  356415 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-04e197bde7e8 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:6a:8c:cb:7c:68:31} reservation:<nil>}
I1027 19:24:44.499936  356415 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00078d5a0}
I1027 19:24:44.499959  356415 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1027 19:24:44.500001  356415 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1027 19:24:44.559944  356415 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-971711 --network=existing-network
E1027 19:24:51.913303  356415 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/functional-051715/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-971711 --network=existing-network: (21.91826408s)
helpers_test.go:175: Cleaning up "existing-network-971711" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-971711
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-971711: (2.036921129s)
I1027 19:25:08.534538  356415 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (24.11s)
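The flow above is reproducible outside the harness: pre-create a bridge network, then point minikube at it. A simplified sketch of the same commands the log records (labels and masquerade options omitted):

    docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 existing-network
    out/minikube-linux-amd64 start -p existing-network-971711 --network=existing-network
    docker network ls --format '{{.Name}}'   # existing-network is reused rather than recreated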

TestKicCustomSubnet (25.12s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-453009 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-453009 --subnet=192.168.60.0/24: (22.820352943s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-453009 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-453009" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-453009
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-453009: (2.272833216s)
--- PASS: TestKicCustomSubnet (25.12s)
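Verification here is a one-liner: the subnet requested at start time should match the network's IPAM config. A minimal sketch using the same two commands:

    out/minikube-linux-amd64 start -p custom-subnet-453009 --subnet=192.168.60.0/24
    docker network inspect custom-subnet-453009 --format '{{(index .IPAM.Config 0).Subnet}}'
    # expected output: 192.168.60.0/24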

TestKicStaticIP (26.53s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-379213 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-379213 --static-ip=192.168.200.200: (24.131686105s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-379213 ip
helpers_test.go:175: Cleaning up "static-ip-379213" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-379213
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-379213: (2.241250483s)
--- PASS: TestKicStaticIP (26.53s)

TestMainNoArgs (0.07s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.07s)

TestMinikubeProfile (51.89s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-166872 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-166872 --driver=docker  --container-runtime=crio: (21.513356465s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-169534 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-169534 --driver=docker  --container-runtime=crio: (24.147256429s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-166872
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-169534
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-169534" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-169534
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-169534: (2.477498745s)
helpers_test.go:175: Cleaning up "first-166872" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-166872
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-166872: (2.426538971s)
--- PASS: TestMinikubeProfile (51.89s)

TestMountStart/serial/StartWithMountFirst (5.67s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-358306 --memory=3072 --mount-string /tmp/TestMountStartserial1488736273/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-358306 --memory=3072 --mount-string /tmp/TestMountStartserial1488736273/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.671602055s)
--- PASS: TestMountStart/serial/StartWithMountFirst (5.67s)

TestMountStart/serial/VerifyMountFirst (0.29s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-358306 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.29s)

TestMountStart/serial/StartWithMountSecond (5.43s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-382575 --memory=3072 --mount-string /tmp/TestMountStartserial1488736273/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-382575 --memory=3072 --mount-string /tmp/TestMountStartserial1488736273/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.434171138s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.43s)

TestMountStart/serial/VerifyMountSecond (0.29s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-382575 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.29s)

TestMountStart/serial/DeleteFirst (1.76s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-358306 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-358306 --alsologtostderr -v=5: (1.755831225s)
--- PASS: TestMountStart/serial/DeleteFirst (1.76s)

TestMountStart/serial/VerifyMountPostDelete (0.29s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-382575 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.29s)

TestMountStart/serial/Stop (1.27s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-382575
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-382575: (1.274136975s)
--- PASS: TestMountStart/serial/Stop (1.27s)

TestMountStart/serial/RestartStopped (7.51s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-382575
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-382575: (6.512118762s)
--- PASS: TestMountStart/serial/RestartStopped (7.51s)

TestMountStart/serial/VerifyMountPostStop (0.29s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-382575 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.29s)

TestMultiNode/serial/FreshStart2Nodes (62.76s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-429433 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-429433 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (1m2.245740429s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-429433 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (62.76s)

TestMultiNode/serial/DeployApp2Nodes (3.68s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-429433 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-429433 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-429433 -- rollout status deployment/busybox: (2.159837706s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-429433 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-429433 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-429433 -- exec busybox-7b57f96db7-fz7bx -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-429433 -- exec busybox-7b57f96db7-n7m78 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-429433 -- exec busybox-7b57f96db7-fz7bx -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-429433 -- exec busybox-7b57f96db7-n7m78 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-429433 -- exec busybox-7b57f96db7-fz7bx -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-429433 -- exec busybox-7b57f96db7-n7m78 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.68s)

TestMultiNode/serial/PingHostFrom2Pods (0.77s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-429433 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-429433 -- exec busybox-7b57f96db7-fz7bx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-429433 -- exec busybox-7b57f96db7-fz7bx -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-429433 -- exec busybox-7b57f96db7-n7m78 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-429433 -- exec busybox-7b57f96db7-n7m78 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.77s)
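The pipeline in this test is the usual trick for reaching the host from inside a pod: resolve host.minikube.internal, cut the address out of nslookup's fifth output line (a busybox-specific layout assumption), then ping it. A minimal in-pod sketch, with <pod> standing in for either busybox replica:

    kubectl exec <pod> -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    # 192.168.67.1 in the run above; the test then runs ping -c 1 against that address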

TestMultiNode/serial/AddNode (26.95s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-429433 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-429433 -v=5 --alsologtostderr: (26.250144455s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-429433 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (26.95s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-429433 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.7s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.70s)

TestMultiNode/serial/CopyFile (10.47s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-429433 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-429433 cp testdata/cp-test.txt multinode-429433:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-429433 ssh -n multinode-429433 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-429433 cp multinode-429433:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1115473306/001/cp-test_multinode-429433.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-429433 ssh -n multinode-429433 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-429433 cp multinode-429433:/home/docker/cp-test.txt multinode-429433-m02:/home/docker/cp-test_multinode-429433_multinode-429433-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-429433 ssh -n multinode-429433 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-429433 ssh -n multinode-429433-m02 "sudo cat /home/docker/cp-test_multinode-429433_multinode-429433-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-429433 cp multinode-429433:/home/docker/cp-test.txt multinode-429433-m03:/home/docker/cp-test_multinode-429433_multinode-429433-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-429433 ssh -n multinode-429433 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-429433 ssh -n multinode-429433-m03 "sudo cat /home/docker/cp-test_multinode-429433_multinode-429433-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-429433 cp testdata/cp-test.txt multinode-429433-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-429433 ssh -n multinode-429433-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-429433 cp multinode-429433-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1115473306/001/cp-test_multinode-429433-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-429433 ssh -n multinode-429433-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-429433 cp multinode-429433-m02:/home/docker/cp-test.txt multinode-429433:/home/docker/cp-test_multinode-429433-m02_multinode-429433.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-429433 ssh -n multinode-429433-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-429433 ssh -n multinode-429433 "sudo cat /home/docker/cp-test_multinode-429433-m02_multinode-429433.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-429433 cp multinode-429433-m02:/home/docker/cp-test.txt multinode-429433-m03:/home/docker/cp-test_multinode-429433-m02_multinode-429433-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-429433 ssh -n multinode-429433-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-429433 ssh -n multinode-429433-m03 "sudo cat /home/docker/cp-test_multinode-429433-m02_multinode-429433-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-429433 cp testdata/cp-test.txt multinode-429433-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-429433 ssh -n multinode-429433-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-429433 cp multinode-429433-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1115473306/001/cp-test_multinode-429433-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-429433 ssh -n multinode-429433-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-429433 cp multinode-429433-m03:/home/docker/cp-test.txt multinode-429433:/home/docker/cp-test_multinode-429433-m03_multinode-429433.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-429433 ssh -n multinode-429433-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-429433 ssh -n multinode-429433 "sudo cat /home/docker/cp-test_multinode-429433-m03_multinode-429433.txt"
E1027 19:29:01.063086  356415 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/addons-589824/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-429433 cp multinode-429433-m03:/home/docker/cp-test.txt multinode-429433-m02:/home/docker/cp-test_multinode-429433-m03_multinode-429433-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-429433 ssh -n multinode-429433-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-429433 ssh -n multinode-429433-m02 "sudo cat /home/docker/cp-test_multinode-429433-m03_multinode-429433-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.47s)

TestMultiNode/serial/StopNode (2.34s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-429433 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-429433 node stop m03: (1.280647937s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-429433 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-429433 status: exit status 7 (525.107504ms)

-- stdout --
	multinode-429433
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-429433-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-429433-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-429433 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-429433 status --alsologtostderr: exit status 7 (529.950596ms)

-- stdout --
	multinode-429433
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-429433-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-429433-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1027 19:29:04.160001  494835 out.go:360] Setting OutFile to fd 1 ...
	I1027 19:29:04.160313  494835 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:29:04.160325  494835 out.go:374] Setting ErrFile to fd 2...
	I1027 19:29:04.160332  494835 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:29:04.160565  494835 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-352833/.minikube/bin
	I1027 19:29:04.160773  494835 out.go:368] Setting JSON to false
	I1027 19:29:04.160824  494835 mustload.go:65] Loading cluster: multinode-429433
	I1027 19:29:04.160923  494835 notify.go:220] Checking for updates...
	I1027 19:29:04.161362  494835 config.go:182] Loaded profile config "multinode-429433": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:29:04.161385  494835 status.go:174] checking status of multinode-429433 ...
	I1027 19:29:04.161935  494835 cli_runner.go:164] Run: docker container inspect multinode-429433 --format={{.State.Status}}
	I1027 19:29:04.182720  494835 status.go:371] multinode-429433 host status = "Running" (err=<nil>)
	I1027 19:29:04.182755  494835 host.go:66] Checking if "multinode-429433" exists ...
	I1027 19:29:04.183034  494835 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-429433
	I1027 19:29:04.203016  494835 host.go:66] Checking if "multinode-429433" exists ...
	I1027 19:29:04.203372  494835 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 19:29:04.203423  494835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-429433
	I1027 19:29:04.222265  494835 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33275 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/multinode-429433/id_rsa Username:docker}
	I1027 19:29:04.320979  494835 ssh_runner.go:195] Run: systemctl --version
	I1027 19:29:04.327500  494835 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 19:29:04.340854  494835 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 19:29:04.399667  494835 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:65 SystemTime:2025-10-27 19:29:04.388460082 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 19:29:04.400317  494835 kubeconfig.go:125] found "multinode-429433" server: "https://192.168.67.2:8443"
	I1027 19:29:04.400356  494835 api_server.go:166] Checking apiserver status ...
	I1027 19:29:04.400405  494835 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1027 19:29:04.413008  494835 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1227/cgroup
	W1027 19:29:04.422634  494835 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1227/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1027 19:29:04.422689  494835 ssh_runner.go:195] Run: ls
	I1027 19:29:04.426928  494835 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1027 19:29:04.431502  494835 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1027 19:29:04.431529  494835 status.go:463] multinode-429433 apiserver status = Running (err=<nil>)
	I1027 19:29:04.431540  494835 status.go:176] multinode-429433 status: &{Name:multinode-429433 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1027 19:29:04.431556  494835 status.go:174] checking status of multinode-429433-m02 ...
	I1027 19:29:04.431806  494835 cli_runner.go:164] Run: docker container inspect multinode-429433-m02 --format={{.State.Status}}
	I1027 19:29:04.451104  494835 status.go:371] multinode-429433-m02 host status = "Running" (err=<nil>)
	I1027 19:29:04.451150  494835 host.go:66] Checking if "multinode-429433-m02" exists ...
	I1027 19:29:04.451431  494835 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-429433-m02
	I1027 19:29:04.470931  494835 host.go:66] Checking if "multinode-429433-m02" exists ...
	I1027 19:29:04.471258  494835 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1027 19:29:04.471301  494835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-429433-m02
	I1027 19:29:04.491184  494835 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33280 SSHKeyPath:/home/jenkins/minikube-integration/21801-352833/.minikube/machines/multinode-429433-m02/id_rsa Username:docker}
	I1027 19:29:04.591005  494835 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1027 19:29:04.605091  494835 status.go:176] multinode-429433-m02 status: &{Name:multinode-429433-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1027 19:29:04.605146  494835 status.go:174] checking status of multinode-429433-m03 ...
	I1027 19:29:04.605475  494835 cli_runner.go:164] Run: docker container inspect multinode-429433-m03 --format={{.State.Status}}
	I1027 19:29:04.625189  494835 status.go:371] multinode-429433-m03 host status = "Stopped" (err=<nil>)
	I1027 19:29:04.625214  494835 status.go:384] host is not running, skipping remaining checks
	I1027 19:29:04.625220  494835 status.go:176] multinode-429433-m03 status: &{Name:multinode-429433-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.34s)
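
Note: the status probe traced above boils down to three checks: inspect the container state through the Docker CLI, SSH into the node to check the kubelet unit, then hit the apiserver's /healthz endpoint. The "unable to find freezer cgroup" warning is expected on cgroup v2 hosts (the freezer controller appears in /proc/PID/cgroup only under cgroup v1), and the probe falls back to the healthz check. A rough manual equivalent, assuming the profile name and endpoint from this run:

	# container state (the "Host" field in the status output)
	docker container inspect multinode-429433 --format '{{.State.Status}}'
	# kubelet state, checked over SSH inside the node
	minikube -p multinode-429433 ssh -- sudo systemctl is-active kubelet
	# apiserver health (self-signed certificate, hence -k)
	curl -k https://192.168.67.2:8443/healthz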

TestMultiNode/serial/StartAfterStop (7.5s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-429433 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-429433 node start m03 -v=5 --alsologtostderr: (6.754838538s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-429433 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.50s)

TestMultiNode/serial/RestartKeepsNodes (79.47s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-429433
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-429433
E1027 19:29:24.209249  356415 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/functional-051715/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-429433: (29.706089094s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-429433 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-429433 --wait=true -v=5 --alsologtostderr: (49.629423389s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-429433
--- PASS: TestMultiNode/serial/RestartKeepsNodes (79.47s)

TestMultiNode/serial/DeleteNode (5.39s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-429433 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-429433 node delete m03: (4.741828016s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-429433 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.39s)
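
Note: the final assertion above uses kubectl's go-template output to count Ready nodes. Run standalone, it prints the Ready-condition status ("True"/"False") on one line per node, so after deleting m03 the test expects exactly two lines. A minimal sketch:

	kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'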

TestMultiNode/serial/StopMultiNode (28.61s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-429433 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-429433 stop: (28.394971082s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-429433 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-429433 status: exit status 7 (108.475563ms)

-- stdout --
	multinode-429433
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-429433-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-429433 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-429433 status --alsologtostderr: exit status 7 (106.207914ms)

-- stdout --
	multinode-429433
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-429433-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1027 19:31:05.555475  504600 out.go:360] Setting OutFile to fd 1 ...
	I1027 19:31:05.555776  504600 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:31:05.555787  504600 out.go:374] Setting ErrFile to fd 2...
	I1027 19:31:05.555791  504600 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:31:05.556005  504600 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-352833/.minikube/bin
	I1027 19:31:05.556222  504600 out.go:368] Setting JSON to false
	I1027 19:31:05.556260  504600 mustload.go:65] Loading cluster: multinode-429433
	I1027 19:31:05.556302  504600 notify.go:220] Checking for updates...
	I1027 19:31:05.556782  504600 config.go:182] Loaded profile config "multinode-429433": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:31:05.556808  504600 status.go:174] checking status of multinode-429433 ...
	I1027 19:31:05.557448  504600 cli_runner.go:164] Run: docker container inspect multinode-429433 --format={{.State.Status}}
	I1027 19:31:05.577382  504600 status.go:371] multinode-429433 host status = "Stopped" (err=<nil>)
	I1027 19:31:05.577414  504600 status.go:384] host is not running, skipping remaining checks
	I1027 19:31:05.577424  504600 status.go:176] multinode-429433 status: &{Name:multinode-429433 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1027 19:31:05.577472  504600 status.go:174] checking status of multinode-429433-m02 ...
	I1027 19:31:05.577755  504600 cli_runner.go:164] Run: docker container inspect multinode-429433-m02 --format={{.State.Status}}
	I1027 19:31:05.596832  504600 status.go:371] multinode-429433-m02 host status = "Stopped" (err=<nil>)
	I1027 19:31:05.596897  504600 status.go:384] host is not running, skipping remaining checks
	I1027 19:31:05.596908  504600 status.go:176] multinode-429433-m02 status: &{Name:multinode-429433-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (28.61s)
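
Note: exit status 7 is the expected result here, not a failure. minikube's status help text documents the exit code as a bitmask (1 = host not running, 2 = cluster not running, 4 = Kubernetes not running), so a cleanly stopped cluster returns 1+2+4 = 7:

	out/minikube-linux-amd64 -p multinode-429433 status
	echo $?   # 7 after a successful stop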

TestMultiNode/serial/RestartMultiNode (50.36s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-429433 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-429433 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (49.713536153s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-429433 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (50.36s)

TestMultiNode/serial/ValidateNameConflict (24.61s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-429433
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-429433-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-429433-m02 --driver=docker  --container-runtime=crio: exit status 14 (86.048418ms)

-- stdout --
	* [multinode-429433-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21801
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21801-352833/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21801-352833/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-429433-m02' is duplicated with machine name 'multinode-429433-m02' in profile 'multinode-429433'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-429433-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-429433-m03 --driver=docker  --container-runtime=crio: (21.701407602s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-429433
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-429433: exit status 80 (308.523373ms)

-- stdout --
	* Adding node m03 to cluster multinode-429433 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-429433-m03 already exists in multinode-429433-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-429433-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-429433-m03: (2.446575444s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (24.61s)
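
Note: multinode profiles name their machines <profile>-m02, <profile>-m03, and so on, which explains both failures above: "multinode-429433-m02" is already a machine inside profile "multinode-429433", and the standalone "multinode-429433-m03" profile blocks `node add` from reusing the m03 name. Listing existing profiles first avoids both collisions:

	out/minikube-linux-amd64 profile list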

TestPreload (109.76s)

=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-570776 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-570776 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (51.568441711s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-570776 image pull gcr.io/k8s-minikube/busybox
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-570776
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-570776: (5.901221374s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-570776 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E1027 19:34:01.062518  356415 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/addons-589824/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-570776 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (48.54855284s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-570776 image list
helpers_test.go:175: Cleaning up "test-preload-570776" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-570776
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-570776: (2.508787941s)
--- PASS: TestPreload (109.76s)

TestScheduledStopUnix (97.52s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-706609 --memory=3072 --driver=docker  --container-runtime=crio
E1027 19:34:24.209300  356415 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/functional-051715/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-706609 --memory=3072 --driver=docker  --container-runtime=crio: (21.346492887s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-706609 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-706609 -n scheduled-stop-706609
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-706609 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1027 19:34:36.408081  356415 retry.go:31] will retry after 147.458µs: open /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/scheduled-stop-706609/pid: no such file or directory
I1027 19:34:36.409254  356415 retry.go:31] will retry after 75.492µs: open /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/scheduled-stop-706609/pid: no such file or directory
I1027 19:34:36.410416  356415 retry.go:31] will retry after 128.08µs: open /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/scheduled-stop-706609/pid: no such file or directory
I1027 19:34:36.411562  356415 retry.go:31] will retry after 214.307µs: open /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/scheduled-stop-706609/pid: no such file or directory
I1027 19:34:36.412699  356415 retry.go:31] will retry after 664.898µs: open /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/scheduled-stop-706609/pid: no such file or directory
I1027 19:34:36.413922  356415 retry.go:31] will retry after 895.823µs: open /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/scheduled-stop-706609/pid: no such file or directory
I1027 19:34:36.415067  356415 retry.go:31] will retry after 838.464µs: open /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/scheduled-stop-706609/pid: no such file or directory
I1027 19:34:36.416202  356415 retry.go:31] will retry after 2.048149ms: open /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/scheduled-stop-706609/pid: no such file or directory
I1027 19:34:36.418373  356415 retry.go:31] will retry after 1.98999ms: open /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/scheduled-stop-706609/pid: no such file or directory
I1027 19:34:36.420600  356415 retry.go:31] will retry after 2.230135ms: open /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/scheduled-stop-706609/pid: no such file or directory
I1027 19:34:36.423807  356415 retry.go:31] will retry after 7.732677ms: open /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/scheduled-stop-706609/pid: no such file or directory
I1027 19:34:36.432064  356415 retry.go:31] will retry after 5.130261ms: open /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/scheduled-stop-706609/pid: no such file or directory
I1027 19:34:36.438360  356415 retry.go:31] will retry after 9.305429ms: open /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/scheduled-stop-706609/pid: no such file or directory
I1027 19:34:36.448633  356415 retry.go:31] will retry after 22.933036ms: open /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/scheduled-stop-706609/pid: no such file or directory
I1027 19:34:36.471975  356415 retry.go:31] will retry after 40.629372ms: open /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/scheduled-stop-706609/pid: no such file or directory
I1027 19:34:36.513259  356415 retry.go:31] will retry after 43.352407ms: open /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/scheduled-stop-706609/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-706609 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-706609 -n scheduled-stop-706609
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-706609
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-706609 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E1027 19:35:47.277530  356415 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/functional-051715/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-706609
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-706609: exit status 7 (89.783771ms)

-- stdout --
	scheduled-stop-706609
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-706609 -n scheduled-stop-706609
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-706609 -n scheduled-stop-706609: exit status 7 (87.125193ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-706609" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-706609
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-706609: (4.592632482s)
--- PASS: TestScheduledStopUnix (97.52s)
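
Note: the scheduled-stop flow daemonizes a timer process and tracks it through a pid file under the profile directory (the retry lines above show the test polling .../profiles/scheduled-stop-706609/pid). The user-facing commands, exactly as exercised here:

	minikube stop -p scheduled-stop-706609 --schedule 15s         # arm a delayed stop
	minikube stop -p scheduled-stop-706609 --cancel-scheduled     # disarm it again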

TestInsufficientStorage (10.08s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-321540 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-321540 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (7.46660793s)

-- stdout --
	{"specversion":"1.0","id":"b2affa32-3c9e-41ff-bc27-cc6944336ba6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-321540] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"2afd82c5-c775-4f9c-a2dc-bfd02aa9213a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21801"}}
	{"specversion":"1.0","id":"c1a5fa2a-09fc-4d12-93e6-2fe798c96608","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"458e0497-bd34-4443-abac-0bf10bb3c384","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21801-352833/kubeconfig"}}
	{"specversion":"1.0","id":"d6ee7999-6de7-4390-8034-007d74b6c5f1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21801-352833/.minikube"}}
	{"specversion":"1.0","id":"b49d2e24-e23b-4be0-aa63-93dc64c8a94a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"89317ab2-e489-484b-9824-c4e5a4c2944a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"912e6b75-dfd8-4b0a-8963-92eaaaa6f844","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"bf53aa3c-a75a-4864-8b06-32dcb58ca4b4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"1ba9e50c-93ac-45bf-8ab7-e16845761c16","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"de151ed4-a925-4cfc-95e1-d496eebd270d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"6209ad5d-9368-4708-9a3b-107c0f96659f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-321540\" primary control-plane node in \"insufficient-storage-321540\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"e3ced589-e256-4559-8a65-a8c9f4085aa8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1760939008-21773 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"7b640a7f-8771-462e-bed3-57661b31d3d2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"777b6f29-d118-46f7-b35f-d11de85d7d86","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-321540 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-321540 --output=json --layout=cluster: exit status 7 (314.228692ms)

-- stdout --
	{"Name":"insufficient-storage-321540","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-321540","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1027 19:35:59.885199  524868 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-321540" does not appear in /home/jenkins/minikube-integration/21801-352833/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-321540 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-321540 --output=json --layout=cluster: exit status 7 (313.583474ms)

-- stdout --
	{"Name":"insufficient-storage-321540","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-321540","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1027 19:36:00.198553  524975 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-321540" does not appear in /home/jenkins/minikube-integration/21801-352833/kubeconfig
	E1027 19:36:00.209935  524975 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/insufficient-storage-321540/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-321540" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-321540
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-321540: (1.984284639s)
--- PASS: TestInsufficientStorage (10.08s)
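
Note: with --output=json, minikube emits one CloudEvents-style JSON object per line (the MINIKUBE_TEST_STORAGE_CAPACITY and MINIKUBE_TEST_AVAILABLE_STORAGE values in the log are test-only knobs that simulate a full disk). A minimal sketch for pulling out just the error message, assuming jq is available:

	minikube start -p insufficient-storage-321540 --output=json --driver=docker \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'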

TestRunningBinaryUpgrade (50.87s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.577145056 start -p running-upgrade-477416 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.577145056 start -p running-upgrade-477416 --memory=3072 --vm-driver=docker  --container-runtime=crio: (25.462309124s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-477416 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-477416 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (22.633864361s)
helpers_test.go:175: Cleaning up "running-upgrade-477416" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-477416
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-477416: (2.131605711s)
--- PASS: TestRunningBinaryUpgrade (50.87s)

TestKubernetesUpgrade (301.19s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-360986 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-360986 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (22.690299607s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-360986
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-360986: (1.356114271s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-360986 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-360986 status --format={{.Host}}: exit status 7 (99.573422ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-360986 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-360986 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m27.588330243s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-360986 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-360986 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-360986 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (118.184316ms)

-- stdout --
	* [kubernetes-upgrade-360986] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21801
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21801-352833/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21801-352833/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-360986
	    minikube start -p kubernetes-upgrade-360986 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3609862 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-360986 --kubernetes-version=v1.34.1
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-360986 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-360986 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (6.451845273s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-360986" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-360986
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-360986: (2.807724531s)
--- PASS: TestKubernetesUpgrade (301.19s)

TestMissingContainerUpgrade (104.01s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.622814725 start -p missing-upgrade-345161 --memory=3072 --driver=docker  --container-runtime=crio
E1027 19:37:04.131432  356415 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/addons-589824/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.622814725 start -p missing-upgrade-345161 --memory=3072 --driver=docker  --container-runtime=crio: (45.057926913s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-345161
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-345161: (10.461301393s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-345161
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-345161 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-345161 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (42.710478596s)
helpers_test.go:175: Cleaning up "missing-upgrade-345161" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-345161
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-345161: (5.03283693s)
--- PASS: TestMissingContainerUpgrade (104.01s)

TestPause/serial/Start (54.53s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-249140 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-249140 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (54.528909168s)
--- PASS: TestPause/serial/Start (54.53s)

TestPause/serial/SecondStartNoReconfiguration (7.62s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-249140 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-249140 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (7.6030108s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (7.62s)

TestStoppedBinaryUpgrade/Setup (0.58s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.58s)

TestStoppedBinaryUpgrade/Upgrade (46.02s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.3758708620 start -p stopped-upgrade-423310 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.3758708620 start -p stopped-upgrade-423310 --memory=3072 --vm-driver=docker  --container-runtime=crio: (29.230404223s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.3758708620 -p stopped-upgrade-423310 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.3758708620 -p stopped-upgrade-423310 stop: (1.980567418s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-423310 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-423310 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (14.808808581s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (46.02s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.06s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-423310
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-423310: (1.05766237s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.06s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-668991 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-668991 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (87.87362ms)

-- stdout --
	* [NoKubernetes-668991] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21801
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21801-352833/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21801-352833/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

TestNoKubernetes/serial/StartWithK8s (30.56s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-668991 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-668991 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (30.134616183s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-668991 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (30.56s)

TestNoKubernetes/serial/StartWithStopK8s (20.73s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-668991 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-668991 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (16.660260088s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-668991 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-668991 status -o json: exit status 2 (342.97829ms)

-- stdout --
	{"Name":"NoKubernetes-668991","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-668991
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-668991: (3.72213014s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (20.73s)

TestNetworkPlugins/group/false (3.71s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-387383 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-387383 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (176.793886ms)

-- stdout --
	* [false-387383] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21801
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21801-352833/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21801-352833/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1027 19:38:47.561987  569711 out.go:360] Setting OutFile to fd 1 ...
	I1027 19:38:47.562252  569711 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:38:47.562262  569711 out.go:374] Setting ErrFile to fd 2...
	I1027 19:38:47.562266  569711 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1027 19:38:47.562514  569711 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21801-352833/.minikube/bin
	I1027 19:38:47.563021  569711 out.go:368] Setting JSON to false
	I1027 19:38:47.564279  569711 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":8477,"bootTime":1761585451,"procs":343,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1027 19:38:47.564387  569711 start.go:141] virtualization: kvm guest
	I1027 19:38:47.566398  569711 out.go:179] * [false-387383] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1027 19:38:47.567919  569711 out.go:179]   - MINIKUBE_LOCATION=21801
	I1027 19:38:47.567930  569711 notify.go:220] Checking for updates...
	I1027 19:38:47.570920  569711 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1027 19:38:47.572299  569711 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21801-352833/kubeconfig
	I1027 19:38:47.573663  569711 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21801-352833/.minikube
	I1027 19:38:47.575206  569711 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1027 19:38:47.576468  569711 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1027 19:38:47.578626  569711 config.go:182] Loaded profile config "NoKubernetes-668991": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1027 19:38:47.578721  569711 config.go:182] Loaded profile config "cert-expiration-368442": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:38:47.578802  569711 config.go:182] Loaded profile config "kubernetes-upgrade-360986": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1027 19:38:47.578898  569711 driver.go:421] Setting default libvirt URI to qemu:///system
	I1027 19:38:47.604843  569711 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1027 19:38:47.605027  569711 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1027 19:38:47.665521  569711 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-27 19:38:47.654361213 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1027 19:38:47.665638  569711 docker.go:318] overlay module found
	I1027 19:38:47.667458  569711 out.go:179] * Using the docker driver based on user configuration
	I1027 19:38:47.668688  569711 start.go:305] selected driver: docker
	I1027 19:38:47.668704  569711 start.go:925] validating driver "docker" against <nil>
	I1027 19:38:47.668716  569711 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1027 19:38:47.670693  569711 out.go:203] 
	W1027 19:38:47.672003  569711 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1027 19:38:47.673087  569711 out.go:203] 

** /stderr **
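
The MK_USAGE exit above is the expected result for this profile: the "false" variant of TestNetworkPlugins starts minikube with CNI explicitly disabled, and crio cannot run without one. A minimal Go sketch of that expectation, assuming the binary path, profile name, and flags shown in the surrounding log (illustrative only, not the actual net_test.go assertion code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Start a crio cluster with CNI disabled; minikube should refuse.
	cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", "false-387383",
		"--driver=docker", "--container-runtime=crio", "--cni=false")
	out, err := cmd.CombinedOutput()
	if err == nil {
		fmt.Println("unexpected success: crio without CNI should be rejected")
		return
	}
	// The usage error is printed on stderr, captured by CombinedOutput.
	if strings.Contains(string(out), `The "crio" container runtime requires CNI`) {
		fmt.Println("got the expected MK_USAGE error:", err)
	}
}

Because the start is supposed to fail, the debug dump below runs against a profile that was never created, which explains every error it records.
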
net_test.go:88: 
----------------------- debugLogs start: false-387383 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-387383

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-387383

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-387383

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-387383

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-387383

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-387383

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-387383

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-387383

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-387383

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-387383

>>> host: /etc/nsswitch.conf:
* Profile "false-387383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-387383"

>>> host: /etc/hosts:
* Profile "false-387383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-387383"

>>> host: /etc/resolv.conf:
* Profile "false-387383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-387383"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-387383

>>> host: crictl pods:
* Profile "false-387383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-387383"

>>> host: crictl containers:
* Profile "false-387383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-387383"

>>> k8s: describe netcat deployment:
error: context "false-387383" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-387383" does not exist

>>> k8s: netcat logs:
error: context "false-387383" does not exist

>>> k8s: describe coredns deployment:
error: context "false-387383" does not exist

>>> k8s: describe coredns pods:
error: context "false-387383" does not exist

>>> k8s: coredns logs:
error: context "false-387383" does not exist

>>> k8s: describe api server pod(s):
error: context "false-387383" does not exist

>>> k8s: api server logs:
error: context "false-387383" does not exist

>>> host: /etc/cni:
* Profile "false-387383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-387383"

>>> host: ip a s:
* Profile "false-387383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-387383"

>>> host: ip r s:
* Profile "false-387383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-387383"

>>> host: iptables-save:
* Profile "false-387383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-387383"

>>> host: iptables table nat:
* Profile "false-387383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-387383"

>>> k8s: describe kube-proxy daemon set:
error: context "false-387383" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-387383" does not exist

>>> k8s: kube-proxy logs:
error: context "false-387383" does not exist

>>> host: kubelet daemon status:
* Profile "false-387383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-387383"

>>> host: kubelet daemon config:
* Profile "false-387383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-387383"

>>> k8s: kubelet logs:
* Profile "false-387383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-387383"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-387383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-387383"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-387383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-387383"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21801-352833/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 27 Oct 2025 19:38:37 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: NoKubernetes-668991
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21801-352833/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 27 Oct 2025 19:37:03 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: cert-expiration-368442
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21801-352833/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 27 Oct 2025 19:38:43 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: kubernetes-upgrade-360986
contexts:
- context:
    cluster: NoKubernetes-668991
    extensions:
    - extension:
        last-update: Mon, 27 Oct 2025 19:38:37 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: NoKubernetes-668991
  name: NoKubernetes-668991
- context:
    cluster: cert-expiration-368442
    extensions:
    - extension:
        last-update: Mon, 27 Oct 2025 19:37:03 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-368442
  name: cert-expiration-368442
- context:
    cluster: kubernetes-upgrade-360986
    user: kubernetes-upgrade-360986
  name: kubernetes-upgrade-360986
current-context: kubernetes-upgrade-360986
kind: Config
users:
- name: NoKubernetes-668991
  user:
    client-certificate: /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/NoKubernetes-668991/client.crt
    client-key: /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/NoKubernetes-668991/client.key
- name: cert-expiration-368442
  user:
    client-certificate: /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/cert-expiration-368442/client.crt
    client-key: /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/cert-expiration-368442/client.key
- name: kubernetes-upgrade-360986
  user:
    client-certificate: /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/kubernetes-upgrade-360986/client.crt
    client-key: /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/kubernetes-upgrade-360986/client.key
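
Note that false-387383 never appears in this merged kubeconfig (only NoKubernetes-668991, cert-expiration-368442, and kubernetes-upgrade-360986), which is why every kubectl probe in this dump fails with "context was not found". A sketch of the same existence check via client-go's clientcmd loader (an assumed tooling choice, not part of the test suite):

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the merged kubeconfig the way kubectl does:
	// $KUBECONFIG first, falling back to ~/.kube/config.
	cfg, err := clientcmd.NewDefaultClientConfigLoadingRules().Load()
	if err != nil {
		panic(err)
	}
	if _, ok := cfg.Contexts["false-387383"]; !ok {
		// Matches the repeated failures recorded above.
		fmt.Println(`context "false-387383" does not exist`)
	}
	fmt.Println("current-context:", cfg.CurrentContext)
}

kubectl resolves --context against this merged file, so a missing entry fails fast without any network call.
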

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-387383

>>> host: docker daemon status:
* Profile "false-387383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-387383"

>>> host: docker daemon config:
* Profile "false-387383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-387383"

>>> host: /etc/docker/daemon.json:
* Profile "false-387383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-387383"

>>> host: docker system info:
* Profile "false-387383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-387383"

>>> host: cri-docker daemon status:
* Profile "false-387383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-387383"

>>> host: cri-docker daemon config:
* Profile "false-387383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-387383"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-387383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-387383"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-387383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-387383"

>>> host: cri-dockerd version:
* Profile "false-387383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-387383"

>>> host: containerd daemon status:
* Profile "false-387383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-387383"

>>> host: containerd daemon config:
* Profile "false-387383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-387383"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-387383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-387383"

>>> host: /etc/containerd/config.toml:
* Profile "false-387383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-387383"

>>> host: containerd config dump:
* Profile "false-387383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-387383"

>>> host: crio daemon status:
* Profile "false-387383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-387383"

>>> host: crio daemon config:
* Profile "false-387383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-387383"

>>> host: /etc/crio:
* Profile "false-387383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-387383"

>>> host: crio config:
* Profile "false-387383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-387383"

----------------------- debugLogs end: false-387383 [took: 3.357612971s] --------------------------------
helpers_test.go:175: Cleaning up "false-387383" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-387383
--- PASS: TestNetworkPlugins/group/false (3.71s)

TestStartStop/group/old-k8s-version/serial/FirstStart (49.28s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-468959 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-468959 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (49.283992969s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (49.28s)

TestNoKubernetes/serial/Start (4.75s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-668991 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1027 19:39:01.063077  356415 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/addons-589824/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-668991 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (4.749661631s)
--- PASS: TestNoKubernetes/serial/Start (4.75s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.32s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-668991 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-668991 "sudo systemctl is-active --quiet service kubelet": exit status 1 (314.981562ms)

** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.32s)
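
systemctl is-active exits 0 only for an active unit, so the non-zero ssh exit (status 3 is systemd's code for an inactive unit) is the success condition when asserting that kubelet is not running. A sketch of the same check under those assumptions, reusing the binary and profile from the log (illustrative, not the test's own code):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Probe the node over minikube ssh; a non-zero exit means kubelet
	// is not active, which is exactly what --no-kubernetes expects.
	err := exec.Command("out/minikube-linux-amd64", "ssh", "-p", "NoKubernetes-668991",
		"sudo systemctl is-active --quiet service kubelet").Run()
	if err != nil {
		fmt.Println("kubelet inactive, as expected:", err)
	} else {
		fmt.Println("unexpected: kubelet is active")
	}
}
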

TestNoKubernetes/serial/ProfileList (34.63s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
E1027 19:39:24.209456  356415 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/functional-051715/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:171: (dbg) Done: out/minikube-linux-amd64 profile list: (20.024086158s)
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:181: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (14.602891939s)
--- PASS: TestNoKubernetes/serial/ProfileList (34.63s)

TestNoKubernetes/serial/Stop (1.29s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-668991
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-668991: (1.285469632s)
--- PASS: TestNoKubernetes/serial/Stop (1.29s)

TestNoKubernetes/serial/StartNoArgs (6.66s)
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-668991 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-668991 --driver=docker  --container-runtime=crio: (6.65982187s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.66s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.24s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-468959 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [46c055c9-34d3-4bb1-9d46-10ffe110ed16] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [46c055c9-34d3-4bb1-9d46-10ffe110ed16] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003392987s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-468959 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.24s)
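
DeployApp applies testdata/busybox.yaml and then polls for pods labelled integration-test=busybox until they report Running, with an 8m0s ceiling. A rough client-go equivalent of that wait loop (assumed kubeconfig location and a simplified readiness check; the real helpers_test.go logic is more involved):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kc, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(kc)
	deadline := time.Now().Add(8 * time.Minute) // the test waits up to 8m0s
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods("default").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "integration-test=busybox"})
		if err == nil && len(pods.Items) > 0 && pods.Items[0].Status.Phase == corev1.PodRunning {
			fmt.Println("busybox is Running")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for busybox")
}

The follow-up exec of ulimit -n only runs once the pod is schedulable, so the readiness gate doubles as a cluster smoke test.
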

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.3s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-668991 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-668991 "sudo systemctl is-active --quiet service kubelet": exit status 1 (301.313878ms)

** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.30s)

TestStartStop/group/embed-certs/serial/FirstStart (43.08s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-919237 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-919237 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (43.075442577s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (43.08s)

TestStartStop/group/old-k8s-version/serial/Stop (16.08s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-468959 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-468959 --alsologtostderr -v=3: (16.083026106s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (16.08s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.24s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-468959 -n old-k8s-version-468959
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-468959 -n old-k8s-version-468959: exit status 7 (100.956428ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-468959 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.24s)
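
minikube status encodes cluster state in its exit code, so after a stop the command "fails" with exit status 7 while still printing Stopped, and the test inspects the code rather than treating any non-zero exit as fatal. A sketch of reading both values in Go (illustrative; the exact exit-code semantics are minikube's and are not asserted here):

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.Host}}", "-p", "old-k8s-version-468959")
	out, err := cmd.Output() // stdout is still captured on a non-zero exit
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// e.g. "exit 7, host=Stopped" after a stop: expected, not fatal.
		fmt.Printf("exit %d, host=%s\n", ee.ExitCode(), strings.TrimSpace(string(out)))
		return
	}
	if err != nil {
		panic(err)
	}
	fmt.Println("host:", strings.TrimSpace(string(out)))
}
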

TestStartStop/group/old-k8s-version/serial/SecondStart (48.15s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-468959 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-468959 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (47.756484396s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-468959 -n old-k8s-version-468959
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (48.15s)

TestStartStop/group/no-preload/serial/FirstStart (51.23s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-095885 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-095885 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (51.234443147s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (51.23s)

TestStartStop/group/embed-certs/serial/DeployApp (8.28s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-919237 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [ec9e6b8d-f937-4aee-b9b9-0131d28f83a9] Pending
helpers_test.go:352: "busybox" [ec9e6b8d-f937-4aee-b9b9-0131d28f83a9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [ec9e6b8d-f937-4aee-b9b9-0131d28f83a9] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.004478396s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-919237 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.28s)

TestStartStop/group/embed-certs/serial/Stop (16.54s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-919237 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-919237 --alsologtostderr -v=3: (16.540045336s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (16.54s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-919237 -n embed-certs-919237
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-919237 -n embed-certs-919237: exit status 7 (104.332026ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-919237 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/embed-certs/serial/SecondStart (46.14s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-919237 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-919237 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (45.773739197s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-919237 -n embed-certs-919237
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (46.14s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-mb5fm" [aa553d39-b345-4aaa-badc-a7f124972284] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004114077s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/DeployApp (8.25s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-095885 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [0b9552df-1e78-4109-bc0e-2632454d1b25] Pending
helpers_test.go:352: "busybox" [0b9552df-1e78-4109-bc0e-2632454d1b25] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [0b9552df-1e78-4109-bc0e-2632454d1b25] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.005450018s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-095885 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.25s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-mb5fm" [aa553d39-b345-4aaa-badc-a7f124972284] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004959127s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-468959 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-468959 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)
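
VerifyKubernetesImages lists the images on the node with image list --format=json and flags anything that is not a stock Kubernetes image, which is how the kindnetd and busybox entries above get reported. A sketch of a client-side version of that audit; the JSON field name (repoTags) and the registry allow-list are assumptions, not minikube's verified output schema:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

type image struct {
	RepoTags []string `json:"repoTags"` // assumed field name
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p",
		"old-k8s-version-468959", "image", "list", "--format=json").Output()
	if err != nil {
		panic(err)
	}
	var imgs []image
	if err := json.Unmarshal(out, &imgs); err != nil {
		panic(err)
	}
	for _, img := range imgs {
		for _, tag := range img.RepoTags {
			// Crude approximation of the test's rule: anything outside
			// registry.k8s.io is reported, matching "Found non-minikube image".
			if !strings.HasPrefix(tag, "registry.k8s.io/") {
				fmt.Println("Found non-minikube image:", tag)
			}
		}
	}
}
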

TestStartStop/group/no-preload/serial/Stop (17.07s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-095885 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-095885 --alsologtostderr -v=3: (17.073771159s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (17.07s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (39.94s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-813397 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-813397 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (39.941455884s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (39.94s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.25s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-095885 -n no-preload-095885
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-095885 -n no-preload-095885: exit status 7 (104.479128ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-095885 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.25s)

TestStartStop/group/no-preload/serial/SecondStart (49.33s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-095885 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-095885 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (48.978873803s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-095885 -n no-preload-095885
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (49.33s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-sctm4" [0e119012-f38a-4f38-9de4-5f165abf4edf] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004017832s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-sctm4" [0e119012-f38a-4f38-9de4-5f165abf4edf] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003623507s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-919237 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-919237 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.25s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-813397 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [9332b27c-18d7-4f62-aa20-359e62f7d9b4] Pending
helpers_test.go:352: "busybox" [9332b27c-18d7-4f62-aa20-359e62f7d9b4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [9332b27c-18d7-4f62-aa20-359e62f7d9b4] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.00426175s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-813397 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.25s)

TestStartStop/group/newest-cni/serial/FirstStart (27.16s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-677710 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-677710 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (27.163385899s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (27.16s)
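
The newest-cni start passes --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16, minikube's component.key=value syntax for forwarding flags to kubeadm. One way to confirm the CIDR actually landed is to read the node's podCIDR back through kubectl; this is a hypothetical follow-up check, not something the test performs:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Read the first node's podCIDR from the newest-cni cluster.
	out, err := exec.Command("kubectl", "--context", "newest-cni-677710",
		"get", "nodes", "-o", "jsonpath={.items[0].spec.podCIDR}").Output()
	if err != nil {
		panic(err)
	}
	cidr := strings.TrimSpace(string(out))
	if strings.HasPrefix(cidr, "10.42.") {
		fmt.Println("pod-network-cidr applied:", cidr)
	} else {
		fmt.Println("unexpected podCIDR:", cidr)
	}
}
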

TestStartStop/group/default-k8s-diff-port/serial/Stop (18.12s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-813397 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-813397 --alsologtostderr -v=3: (18.122358078s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (18.12s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-dqcbh" [0f07f163-c30e-4605-a6fa-68364ac4eff8] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004333116s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-dqcbh" [0f07f163-c30e-4605-a6fa-68364ac4eff8] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005657445s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-095885 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-813397 -n default-k8s-diff-port-813397
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-813397 -n default-k8s-diff-port-813397: exit status 7 (91.128993ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-813397 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (51.3s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-813397 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-813397 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (50.914536795s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-813397 -n default-k8s-diff-port-813397
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (51.30s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.32s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-095885 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.32s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/Stop (8.12s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-677710 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-677710 --alsologtostderr -v=3: (8.118600976s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (8.12s)

TestNetworkPlugins/group/auto/Start (75.31s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-387383 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-387383 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m15.307832016s)
--- PASS: TestNetworkPlugins/group/auto/Start (75.31s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-677710 -n newest-cni-677710
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-677710 -n newest-cni-677710: exit status 7 (90.839087ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-677710 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/newest-cni/serial/SecondStart (13.78s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-677710 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-677710 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (13.363501876s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-677710 -n newest-cni-677710
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (13.78s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-677710 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

TestNetworkPlugins/group/kindnet/Start (75.12s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-387383 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-387383 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m15.117934748s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (75.12s)

TestNetworkPlugins/group/calico/Start (52.52s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-387383 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-387383 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (52.518849894s)
--- PASS: TestNetworkPlugins/group/calico/Start (52.52s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-gllsf" [460f77f5-a4eb-4992-a7b0-1413ca2d33c1] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003737599s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-gllsf" [460f77f5-a4eb-4992-a7b0-1413ca2d33c1] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003993559s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-813397 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.29s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-813397 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.29s)

TestNetworkPlugins/group/custom-flannel/Start (53.15s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-387383 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-387383 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (53.146136699s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (53.15s)

TestNetworkPlugins/group/auto/KubeletFlags (0.37s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-387383 "pgrep -a kubelet"
I1027 19:43:59.532028  356415 config.go:182] Loaded profile config "auto-387383": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.37s)

TestNetworkPlugins/group/auto/NetCatPod (9.26s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-387383 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-xnf6m" [21848c88-10ae-4db7-85a4-780ecb960a4e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1027 19:44:01.062794  356415 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/addons-589824/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-xnf6m" [21848c88-10ae-4db7-85a4-780ecb960a4e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.00531683s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.26s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-pk5rf" [d858b139-2305-4467-8fee-efd7aa5fc8a0] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-pk5rf" [d858b139-2305-4467-8fee-efd7aa5fc8a0] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.006202694s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.32s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-387383 "pgrep -a kubelet"
I1027 19:44:07.870309  356415 config.go:182] Loaded profile config "calico-387383": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.32s)

TestNetworkPlugins/group/calico/NetCatPod (8.2s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-387383 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-2qg6j" [daf48d3e-c670-403e-b240-27c5b5dda745] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-2qg6j" [daf48d3e-c670-403e-b240-27c5b5dda745] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 8.004220483s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (8.20s)

TestNetworkPlugins/group/auto/DNS (0.19s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-387383 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)

TestNetworkPlugins/group/auto/Localhost (0.14s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-387383 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

TestNetworkPlugins/group/auto/HairPin (0.14s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-387383 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)

TestNetworkPlugins/group/calico/DNS (0.14s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-387383 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.14s)

TestNetworkPlugins/group/calico/Localhost (0.11s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-387383 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.11s)

TestNetworkPlugins/group/calico/HairPin (0.1s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-387383 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.10s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-flh2b" [89313924-3d6a-4600-9a4d-99def677cf86] Running
E1027 19:44:24.209067  356415 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/functional-051715/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004542434s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.36s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-387383 "pgrep -a kubelet"
I1027 19:44:30.002498  356415 config.go:182] Loaded profile config "kindnet-387383": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.36s)

TestNetworkPlugins/group/kindnet/NetCatPod (9.22s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-387383 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-fqx6q" [b7f54e07-5447-48f1-a8a9-56b398d6b961] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-fqx6q" [b7f54e07-5447-48f1-a8a9-56b398d6b961] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.003142608s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.22s)

TestNetworkPlugins/group/enable-default-cni/Start (76.81s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-387383 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-387383 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m16.805282634s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (76.81s)

TestNetworkPlugins/group/flannel/Start (47.97s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-387383 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-387383 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (47.970058776s)
--- PASS: TestNetworkPlugins/group/flannel/Start (47.97s)

TestNetworkPlugins/group/kindnet/DNS (0.13s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-387383 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.13s)

TestNetworkPlugins/group/kindnet/Localhost (0.11s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-387383 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.11s)

TestNetworkPlugins/group/kindnet/HairPin (0.11s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-387383 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.11s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.36s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-387383 "pgrep -a kubelet"
I1027 19:44:40.708917  356415 config.go:182] Loaded profile config "custom-flannel-387383": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.36s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.24s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-387383 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-7nc68" [20eef9d8-ce32-4afe-a058-ef8fceca6843] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1027 19:44:44.563036  356415 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/old-k8s-version-468959/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:44:44.569566  356415 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/old-k8s-version-468959/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:44:44.581258  356415 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/old-k8s-version-468959/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:44:44.602860  356415 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/old-k8s-version-468959/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:44:44.644330  356415 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/old-k8s-version-468959/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:44:44.725965  356415 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/old-k8s-version-468959/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:44:44.887727  356415 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/old-k8s-version-468959/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:44:45.209621  356415 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/old-k8s-version-468959/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:44:45.851558  356415 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/old-k8s-version-468959/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-7nc68" [20eef9d8-ce32-4afe-a058-ef8fceca6843] Running
E1027 19:44:47.133103  356415 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/old-k8s-version-468959/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1027 19:44:49.694722  356415 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/old-k8s-version-468959/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.004391262s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.24s)

TestNetworkPlugins/group/custom-flannel/DNS (0.13s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-387383 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-387383 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-387383 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

TestNetworkPlugins/group/bridge/Start (37.11s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-387383 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E1027 19:45:05.058629  356415 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/old-k8s-version-468959/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-387383 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (37.112811112s)
--- PASS: TestNetworkPlugins/group/bridge/Start (37.11s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-nmh6j" [9f016dcc-a26a-438e-91f2-85dffab03bb7] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004010687s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.31s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-387383 "pgrep -a kubelet"
I1027 19:45:33.223591  356415 config.go:182] Loaded profile config "flannel-387383": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.31s)

TestNetworkPlugins/group/flannel/NetCatPod (8.19s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-387383 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-shmjb" [6c78e3ac-2342-4ed7-a20a-ce2b485195fa] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-shmjb" [6c78e3ac-2342-4ed7-a20a-ce2b485195fa] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 8.004642696s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (8.19s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-387383 "pgrep -a kubelet"
I1027 19:45:38.107232  356415 config.go:182] Loaded profile config "bridge-387383": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

TestNetworkPlugins/group/bridge/NetCatPod (9.19s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-387383 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-l55wv" [f0b42348-7469-43e8-a23a-6602685728dd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-l55wv" [f0b42348-7469-43e8-a23a-6602685728dd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.003865271s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.19s)

TestNetworkPlugins/group/flannel/DNS (0.11s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-387383 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.11s)

TestNetworkPlugins/group/flannel/Localhost (0.09s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-387383 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.09s)

TestNetworkPlugins/group/flannel/HairPin (0.1s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-387383 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.10s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.35s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-387383 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.35s)

TestNetworkPlugins/group/bridge/DNS (0.13s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-387383 exec deployment/netcat -- nslookup kubernetes.default
I1027 19:45:47.378869  356415 config.go:182] Loaded profile config "enable-default-cni-387383": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.13s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.24s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-387383 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-qwfv7" [5dae3cc5-5084-4bdd-b0a4-2df154d24af5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-qwfv7" [5dae3cc5-5084-4bdd-b0a4-2df154d24af5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.004466609s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.24s)

TestNetworkPlugins/group/bridge/Localhost (0.12s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-387383 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

TestNetworkPlugins/group/bridge/HairPin (0.11s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-387383 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.11s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-387383 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-387383 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-387383 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)

Test skip (27/327)

TestDownloadOnly/v1.28.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.1/cached-images (0s)
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

TestDownloadOnly/v1.34.1/binaries (0s)
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

TestDownloadOnly/v1.34.1/kubectl (0s)
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:763: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestISOImage (0s)
=== RUN   TestISOImage
iso_test.go:34: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.2s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-926399" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-926399
--- SKIP: TestStartStop/group/disable-driver-mounts (0.20s)

TestNetworkPlugins/group/kubenet (3.79s)
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-387383 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-387383

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-387383

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-387383

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-387383

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-387383

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-387383

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-387383

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-387383

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-387383

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-387383

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-387383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-387383"

>>> host: /etc/hosts:
* Profile "kubenet-387383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-387383"

>>> host: /etc/resolv.conf:
* Profile "kubenet-387383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-387383"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-387383

>>> host: crictl pods:
* Profile "kubenet-387383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-387383"

>>> host: crictl containers:
* Profile "kubenet-387383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-387383"

>>> k8s: describe netcat deployment:
error: context "kubenet-387383" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-387383" does not exist

>>> k8s: netcat logs:
error: context "kubenet-387383" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-387383" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-387383" does not exist

>>> k8s: coredns logs:
error: context "kubenet-387383" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-387383" does not exist

>>> k8s: api server logs:
error: context "kubenet-387383" does not exist
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-387383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-387383"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-387383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-387383"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-387383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-387383"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-387383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-387383"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-387383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-387383"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-387383" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-387383" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-387383" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-387383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-387383"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-387383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-387383"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-387383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-387383"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-387383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-387383"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-387383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-387383"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21801-352833/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 27 Oct 2025 19:38:37 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: NoKubernetes-668991
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21801-352833/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 27 Oct 2025 19:37:03 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: cert-expiration-368442
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21801-352833/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 27 Oct 2025 19:38:43 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: kubernetes-upgrade-360986
contexts:
- context:
    cluster: NoKubernetes-668991
    extensions:
    - extension:
        last-update: Mon, 27 Oct 2025 19:38:37 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: NoKubernetes-668991
  name: NoKubernetes-668991
- context:
    cluster: cert-expiration-368442
    extensions:
    - extension:
        last-update: Mon, 27 Oct 2025 19:37:03 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-368442
  name: cert-expiration-368442
- context:
    cluster: kubernetes-upgrade-360986
    user: kubernetes-upgrade-360986
  name: kubernetes-upgrade-360986
current-context: kubernetes-upgrade-360986
kind: Config
users:
- name: NoKubernetes-668991
  user:
    client-certificate: /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/NoKubernetes-668991/client.crt
    client-key: /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/NoKubernetes-668991/client.key
- name: cert-expiration-368442
  user:
    client-certificate: /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/cert-expiration-368442/client.crt
    client-key: /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/cert-expiration-368442/client.key
- name: kubernetes-upgrade-360986
  user:
    client-certificate: /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/kubernetes-upgrade-360986/client.crt
    client-key: /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/kubernetes-upgrade-360986/client.key

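Note: the kubeconfig dumped above defines only the NoKubernetes-668991, cert-expiration-368442 and kubernetes-upgrade-360986 contexts; there is no kubenet-387383 entry, which is why every kubectl probe in this dump fails before ever reaching a cluster. A minimal manual check (not part of the recorded run; standard kubectl commands, with the context name taken from this log):

	kubectl config get-contexts                    # lists only the three contexts shown above
	kubectl --context kubenet-387383 get nodes     # fails with a context-not-found error, as seen throughout this dump
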
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-387383

>>> host: docker daemon status:
* Profile "kubenet-387383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-387383"

>>> host: docker daemon config:
* Profile "kubenet-387383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-387383"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-387383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-387383"

>>> host: docker system info:
* Profile "kubenet-387383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-387383"

>>> host: cri-docker daemon status:
* Profile "kubenet-387383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-387383"

>>> host: cri-docker daemon config:
* Profile "kubenet-387383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-387383"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-387383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-387383"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-387383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-387383"

>>> host: cri-dockerd version:
* Profile "kubenet-387383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-387383"

>>> host: containerd daemon status:
* Profile "kubenet-387383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-387383"

>>> host: containerd daemon config:
* Profile "kubenet-387383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-387383"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-387383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-387383"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-387383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-387383"

>>> host: containerd config dump:
* Profile "kubenet-387383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-387383"

>>> host: crio daemon status:
* Profile "kubenet-387383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-387383"

>>> host: crio daemon config:
* Profile "kubenet-387383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-387383"

>>> host: /etc/crio:
* Profile "kubenet-387383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-387383"

>>> host: crio config:
* Profile "kubenet-387383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-387383"

----------------------- debugLogs end: kubenet-387383 [took: 3.613858236s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-387383" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-387383
--- SKIP: TestNetworkPlugins/group/kubenet (3.79s)

TestNetworkPlugins/group/cilium (3.94s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-387383 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-387383

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-387383

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-387383

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-387383

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-387383

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-387383

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-387383

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-387383

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-387383

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-387383

>>> host: /etc/nsswitch.conf:
* Profile "cilium-387383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387383"

>>> host: /etc/hosts:
* Profile "cilium-387383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387383"

>>> host: /etc/resolv.conf:
* Profile "cilium-387383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387383"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-387383

>>> host: crictl pods:
* Profile "cilium-387383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387383"

>>> host: crictl containers:
* Profile "cilium-387383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387383"

>>> k8s: describe netcat deployment:
error: context "cilium-387383" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-387383" does not exist

>>> k8s: netcat logs:
error: context "cilium-387383" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-387383" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-387383" does not exist

>>> k8s: coredns logs:
error: context "cilium-387383" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-387383" does not exist

>>> k8s: api server logs:
error: context "cilium-387383" does not exist

>>> host: /etc/cni:
* Profile "cilium-387383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387383"

>>> host: ip a s:
* Profile "cilium-387383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387383"

>>> host: ip r s:
* Profile "cilium-387383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387383"

>>> host: iptables-save:
* Profile "cilium-387383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387383"

>>> host: iptables table nat:
* Profile "cilium-387383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387383"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-387383

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-387383

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-387383" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-387383" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-387383

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-387383

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-387383" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-387383" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-387383" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-387383" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-387383" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-387383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387383"

>>> host: kubelet daemon config:
* Profile "cilium-387383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387383"

>>> k8s: kubelet logs:
* Profile "cilium-387383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387383"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-387383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387383"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-387383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387383"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21801-352833/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 27 Oct 2025 19:38:37 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: NoKubernetes-668991
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21801-352833/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 27 Oct 2025 19:37:03 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: cert-expiration-368442
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21801-352833/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 27 Oct 2025 19:38:43 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: kubernetes-upgrade-360986
contexts:
- context:
    cluster: NoKubernetes-668991
    extensions:
    - extension:
        last-update: Mon, 27 Oct 2025 19:38:37 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: NoKubernetes-668991
  name: NoKubernetes-668991
- context:
    cluster: cert-expiration-368442
    extensions:
    - extension:
        last-update: Mon, 27 Oct 2025 19:37:03 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-368442
  name: cert-expiration-368442
- context:
    cluster: kubernetes-upgrade-360986
    user: kubernetes-upgrade-360986
  name: kubernetes-upgrade-360986
current-context: kubernetes-upgrade-360986
kind: Config
users:
- name: NoKubernetes-668991
  user:
    client-certificate: /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/NoKubernetes-668991/client.crt
    client-key: /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/NoKubernetes-668991/client.key
- name: cert-expiration-368442
  user:
    client-certificate: /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/cert-expiration-368442/client.crt
    client-key: /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/cert-expiration-368442/client.key
- name: kubernetes-upgrade-360986
  user:
    client-certificate: /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/kubernetes-upgrade-360986/client.crt
    client-key: /home/jenkins/minikube-integration/21801-352833/.minikube/profiles/kubernetes-upgrade-360986/client.key

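The same kubeconfig applies to this dump: it contains no cilium-387383 context, so every kubectl call above fails with one of the two error variants depending on how the harness invokes kubectl. A hypothetical one-line reproduction mirroring the harness's describe call (not part of the recorded run):

	kubectl --context cilium-387383 describe deployment netcat   # fails: context "cilium-387383" does not exist
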
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-387383

>>> host: docker daemon status:
* Profile "cilium-387383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387383"

>>> host: docker daemon config:
* Profile "cilium-387383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387383"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-387383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387383"

>>> host: docker system info:
* Profile "cilium-387383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387383"

>>> host: cri-docker daemon status:
* Profile "cilium-387383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387383"

>>> host: cri-docker daemon config:
* Profile "cilium-387383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387383"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-387383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387383"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-387383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387383"

>>> host: cri-dockerd version:
* Profile "cilium-387383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387383"

>>> host: containerd daemon status:
* Profile "cilium-387383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387383"

>>> host: containerd daemon config:
* Profile "cilium-387383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387383"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-387383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387383"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-387383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387383"

>>> host: containerd config dump:
* Profile "cilium-387383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387383"

>>> host: crio daemon status:
* Profile "cilium-387383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387383"

>>> host: crio daemon config:
* Profile "cilium-387383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387383"

>>> host: /etc/crio:
* Profile "cilium-387383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387383"

>>> host: crio config:
* Profile "cilium-387383" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-387383"

----------------------- debugLogs end: cilium-387383 [took: 3.754309104s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-387383" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-387383
--- SKIP: TestNetworkPlugins/group/cilium (3.94s)
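
As a follow-up to the cleanup steps recorded above, the profile listing that the log itself suggests can be used to confirm the scratch profiles are gone (a manual check, not part of the recorded run):

	minikube profile list   # kubenet-387383 and cilium-387383 should no longer appear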